Program

  • Toward Operating System Support for Scalable Multithreaded Message Passing – Gerofi et al. (slides)
  • A Memory Management System Optimized for BDMPI’s Memory and Execution Model – Iverson et al. (slides)
  • An MPI Halo-Cell Implementation for Zero-Copy Abstraction – Besnard et al. (slides)
  • DAME: A Runtime-Compiled Engine for Derived Datatypes – Gropp et al. (slides)
  • Efficient MPI Datatype Reconstruction for Vector and Index Types – Träff et al. (slides)
  • MPI Advisor: a Minimal Overhead MPI Performance Tuning Tool – Gallardo et al. (slides)
  • MPI-focused Tracing with OTFX: An MPI-aware In-memory Event Tracing Extension to the Open Trace Format 2 – Wagner et al. (slides)
  • GPU-Aware Design, Implementation, and Evaluation of Non-blocking Collective Benchmarks – Awan et al. (slides)
  • Isomorphic, Sparse MPI-like Collective Communication Operations for Parallel Stencil Computations – Träff et al. (slides)
  • Plan B: Interruption of Ongoing MPI Operations to Support Failure Recovery – Bouteiller et al. (slides)
  • Scalable and Fault Tolerant Failure Detection and Consensus – Katti et al. (slides)
  • Sliding Substitution of Failed Nodes – Hori et al. (slides)
Sep 21 (Mon)
Tutorial 1: Performance analysis for High Performance Systems – François Trahay
Sep 21 @ 9:00 am – 12:00 pm

https://eurompi2015.bordeaux.inria.fr/program/tutorials/#tuto1

Location: Inria building, Ada Lovelace room (floor R+3).

Abstract: This tutorial presents the EZTrace framework for performance analysis. We present how to analyze the performance of MPI, OpenMP and hybrid applications with EZTrace. We first introduce the general performance analysis workflow and show how to trace a simple application. Then, we present how combining EZTrace plugins allows users to analyze hybrid applications or to gather hardware counters with PAPI. The last part of the tutorial focuses on how to write an EZTrace plugin in order to analyze a particular application or library precisely and to collect performance data.
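For context, a toy hybrid MPI + OpenMP program of the kind such analyses target is sketched below; it is purely illustrative (not taken from the tutorial material), and the actual tracing commands and the EZTrace plugin interface are covered in the tutorial itself.

    /* toy_hybrid.c: an illustrative MPI + OpenMP program that a tracing tool could analyze */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int provided, rank, nthreads = 0, total = 0;

        /* Request thread support, since OpenMP threads run inside each MPI process. */
        MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        /* Each rank counts its OpenMP threads. */
        #pragma omp parallel reduction(+:nthreads)
        nthreads += 1;

        /* A collective call that a trace would show alongside the OpenMP regions. */
        MPI_Reduce(&nthreads, &total, 1, MPI_INT, MPI_SUM, 0, MPI_COMM_WORLD);
        if (rank == 0)
            printf("total threads across all ranks: %d\n", total);

        MPI_Finalize();
        return 0;
    }

Compiled with an MPI wrapper and OpenMP flags, such a program mixes MPI calls and OpenMP regions, which is exactly the kind of event mix a hybrid trace needs to correlate.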

Lunch (for morning tutorial attendees only)
Sep 21 @ 12:00 pm – 2:00 pm
Tutorial 2: Understanding and managing hardware affinities with hwloc – Brice Goglin
Sep 21 @ 2:00 pm – 5:00 pm

https://eurompi2015.bordeaux.inria.fr/program/tutorials/#tuto2
Location: Inria building, Ada Lovelace room (floor R+3).

Abstract: This tutorial will walk the audience through the complexity of modern computing servers. We will detail why these hardware characteristics matter to HPC application developers and why they are difficult to manage manually. We will then introduce the Hardware Locality software (hwloc), which is developed to make developers’ lives easier by abstracting hardware topologies in a portable way.
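As a small taste of the library, the hedged sketch below uses the hwloc C API to load the machine topology, count its cores, and bind the calling thread to the first core (error handling omitted):

    #include <stdio.h>
    #include <hwloc.h>

    int main(void)
    {
        hwloc_topology_t topology;
        hwloc_obj_t core;
        int ncores;

        hwloc_topology_init(&topology);   /* create an empty topology object */
        hwloc_topology_load(topology);    /* discover the machine's hierarchy */

        ncores = hwloc_get_nbobjs_by_type(topology, HWLOC_OBJ_CORE);
        printf("this machine has %d cores\n", ncores);

        /* Bind the current thread to the first core, as a simple affinity example. */
        core = hwloc_get_obj_by_type(topology, HWLOC_OBJ_CORE, 0);
        if (core)
            hwloc_set_cpubind(topology, core->cpuset, HWLOC_CPUBIND_THREAD);

        hwloc_topology_destroy(topology);
        return 0;
    }

The same topology information is also available from the command line through hwloc's lstopo tool.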

Tutorial 3: Insightful Automatic Performance Modeling – Alexandru Calotoiu, Torsten Hoefler, Martin Schulz, Felix Wolf
Sep 21 @ 2:00 pm – 5:00 pm

https://eurompi2015.bordeaux.inria.fr/program/tutorials/#tuto3
Location: Inria building, Ada Lovelace room (floor R+3).
Abstract: Many applications suffer from latent performance limitations that may cause them to consume too many resources under certain conditions. Examples include an unexpected growth of the execution time as the number of processes or the size of the input problem is increased. Solving this problem requires the ability to properly model the performance of a program to understand its optimization potential in different scenarios. In this tutorial, we will present a method to automatically generate such models for individual parts of a program from a small set of measurements. We will further introduce a tool that implements our method and teach how to use it in practice. The learning objective of this tutorial is to familiarize the attendees with the ideas behind our modeling approach and to enable them to repeat experiments at home.
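As a rough sketch of the kind of model involved (taken from the presenters' earlier published approach rather than from the tutorial material itself), each program region is fitted to a performance-model normal form such as

    f(p) = \sum_{k=1}^{n} c_k \cdot p^{i_k} \cdot \log_2^{j_k}(p)

where p is the number of processes, the coefficients c_k are determined from the measurements, and the exponents i_k and j_k are drawn from small predefined sets; the tool then searches this space for the terms that best explain the measured runtimes.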

Sep 22 (Tue)
Registration + Welcoming coffee
Sep 22 @ 8:30 am – 9:15 am

Location: Inria building.

Introduction
Sep 22 @ 9:15 am – 9:30 am

Location: Inria building.

Session 1: Runtime and Programming Models
Sep 22 @ 9:30 am – 11:00 am

Location: Inria building.
Session chair: Emmanuel Jeannot

-“Toward Operating System Support for Scalable Multithreaded Message Passing”, Balazs Gerofi, Masamichi Takagi and Yutaka Ishikawa

-“A Memory Management System Optimized for BDMPI’s Memory and Execution Model”, Jeremy Iverson and George Karypis

-“An MPI Halo-Cell Implementation for Zero-Copy Abstraction”, Jean-Baptiste Besnard, Allen Malony, Sameer Shende, Marc Pérache, Patrick Carribault and Julien Jaeger

Coffee break + poster session
Sep 22 @ 11:00 am – 11:30 am

Location: Inria building.

Invited talk: Gabriel Staffelbach
Sep 22 @ 11:30 am – 12:30 pm

Location: Inria building.

Gabriel Staffelbach
Senior Researcher at Centre Européen de Recherche et de Formation Avancée en Calcul Scientifique (CERFACS)

Combustion accounts for 80% of the world’s energy. Ever-progressing trends in design and research have yielded, and continue to yield, spectacular changes for turbines, cars, rocket propulsion, etc.
CERFACS aims at bringing recent progress in High Performance Computing (HPC) into combustion simulation studies. Drawing on recent highlights in the combustion field, this presentation will illustrate the use of leadership-class computing for combustion applications on current architectures, as well as the methodologies and current developments towards exascale computing through hybrid parallelisation.

Lunch: Buffet
Sep 22 @ 12:30 pm – 2:00 pm

Location: Inria building, Ada Lovelace room.

Invited Talk: Jean-Pierre Panziera (Atos)
Sep 22 @ 2:00 pm – 3:00 pm

Location: Inria building

Jean-Pierre Panziera
Chief Technology Director for Extreme Computing at Atos

A new high performance network for HPC systems: Bull eXascale Interconnect (BXI)
The Exascale supercomputers will require technology breakthroughs in all aspects of HPC system architecture. We are facing major challenges in the development of faster computing units, higher levels of parallelism, better storage capacity and bandwidth, overall energy efficiency, and system resilience and reliability. The interconnection network is a central piece of the HPC system architecture today, and it will become even more important as system sizes grow. Specifically, Atos is developing the Bull eXascale Interconnect (BXI), a new generation of interconnect dedicated to HPC. This presentation will provide an overview of the BXI architecture and describe its main features. BXI is based on the Portals4 protocol: all communication primitives are offloaded to the hardware components, thus allowing for independent progress of communication and computation. BXI is built around two ASICs, a Node Interface Controller and a 48-port switch. The BXI application environment optimizes the communications of existing applications through a native Portals4 interface for MPI or PGAS languages. The BXI management software targets systems as large as 64k nodes and provides mechanisms to overcome component failures without interrupting running applications. Finally, we will explain how BXI is integrated into the new Bull exascale platform.

Biography
Jean-Pierre Panziera is the Chief Technology Director for Extreme Computing at Atos. He started his career in 1982 developing new algorithms for seismic processing in the research department of the Elf-Aquitaine oil company. He then moved to Silicon Valley as an application engineer and took part in a couple of startup projects, including a parallel supercomputer for Evans & Sutherland in 1989. During the following 20 years, he worked for SGI successively as application engineer, leader of the HPC application group and Chief Engineer. In 2009, he joined Bull, now an Atos company, where he is responsible for HPC developments. Jean-Pierre holds an engineering degree from Ecole Nationale Supérieure des Mines de Paris.

Session 2, part 1: Datatypes
Sep 22 @ 3:00 pm – 4:00 pm

Location: Inria building
Session chair: Ron Brightwell

-“DAME: A Runtime-Compiled Engine for Derived Datatypes”, Tarun Prabhu and William Gropp (Best paper)

-“Efficient MPI Datatype Reconstruction for Vector and Index Types”, Martin Kalany and Jesper Larsson Träff

Coffee break + poster session
Sep 22 @ 4:00 pm – 4:30 pm

Location: Inria building

Session 2, part 2: Tools
Sep 22 @ 4:30 pm – 5:30 pm

Location: Inria building
Session chair: Ron Brightwell

-“MPI Advisor: a Minimal Overhead MPI Performance Tuning Tool”, Esthela Gallardo, Jerome Vienne, Leonardo Fialho, Patricia Teller and James Browne

-“MPI-focused Tracing with OTFX: An MPI-aware In-memory Event Tracing Extension to the Open Trace Format 2”, Michael Wagner, Jens Doleschal and Andreas Knüpfer

Social Event
Sep 22 @ 6:45 pm – 10:30 pm

6:45 pm: Gathering in front of the “Opéra National de Bordeaux”

6:45 pm – 7:15 pm: Guided city tour (3 groups).

7:15 pm – 8:30 pm: Boat tour on the Garonne.

8:30 pm – 10:30 pm: Dinner at Café du Port.

Sep 23 (Wed)
Invited talk: Felix Wolf
Sep 23 @ 9:00 am – 10:00 am

Location: Inria building.

Prof. Felix Wolf
Department of Computer Science of TU Darmstadt

Is your software ready for exascale? – How the next generation of performance tools can give you the answer
Traditional performance tools do a great job of evaluating the performance of a code for a given execution configuration on an existing machine. However, foresighted developers want to validate their design decisions and implementations also for future problem and machine sizes – and if needed revise their code as early as possible. In this talk, we will discuss new techniques for automated performance modeling and present case studies demonstrating their potential. As one of our examples, we will show how to systematically validate the scalability of state-of-the-art MPI implementations and put existing performance expectations to the test.

Coffee break + poster session
Sep 23 @ 10:00 am – 10:30 am

Location: Inria building.

Session 3: Collective Communications
Sep 23 @ 10:30 am – 12:00 pm

Location: Inria building
Session chair: Atsushi Hori

-“On the Impact of Synchronizing Clocks and Processes on Benchmarking MPI Collectives”, Sascha Hunold and Alexandra Carpen-Amarie

-“GPU-Aware Design, Implementation, and Evaluation of Non-blocking Collective Benchmarks”, Ammar Ahmad Awan, Hari Subramoni, Khaled Hamidouche, Akshay Venkatesh, Jonathan Perkins and Dhabaleswar K. Panda

-“Isomorphic, Sparse MPI-like Collective Communication Operations for Parallel Stencil Computations”, Jesper Larsson Träff, Felix Lübbe, Antoine Rougier and Sascha Hunold

Lunch: Buffet
Sep 23 @ 12:00 pm – 1:30 pm

Location: Inria building, Ada Lovelace room.

Industrial talk
Sep 23 @ 1:30 pm – 2:30 pm

Location: Inria building

https://eurompi2015.bordeaux.inria.fr/program/industrial-talks/

Dr. Jeff Squyres
Cisco Systems, Inc

Not Another Boring Vendor Talk
Who wants to hear a boring talk about how great a vendor’s products are? You already know that Cisco’s products are great — and you should go buy some — so let’s talk about interesting technical stuff, instead.
This talk will focus on two main issues. First, current thoughts on revamping MPI initialization, finalization, and global state (e.g., MPI_COMM_WORLD), in particular fixing many of the issues that could not be solved by MPI-3.1’s decision to declare MPI_INITIALIZED and friends always thread safe. This is basically an “inside look” at how research and evolution discussions occur within the MPI Forum itself.
Second, this talk will discuss Cisco’s evolution and journey from the legacy Verbs API to the modern community-based Libfabric API for support of our OS-bypass usNIC device (“Userspace NIC”). Emphasis will be placed on discussing *why* we chose to make this journey (it cost time and money, after all), and the roadmap for the libfabric effort.
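As a hedged illustration of the second topic, the sketch below uses the public libfabric API to discover available providers; it is a generic example, not Cisco's usNIC code, and the endpoint type and capability hints are assumptions:

    #include <stdio.h>
    #include <rdma/fabric.h>

    int main(void)
    {
        struct fi_info *hints, *info, *cur;
        int ret;

        hints = fi_allocinfo();            /* allocate an empty hints structure */
        if (!hints)
            return 1;
        hints->ep_attr->type = FI_EP_RDM;  /* reliable datagram endpoints, as MPI libraries typically use */
        hints->caps = FI_MSG;              /* basic two-sided messaging */

        ret = fi_getinfo(FI_VERSION(1, 0), NULL, NULL, 0, hints, &info);
        if (ret) {
            fprintf(stderr, "fi_getinfo failed: %d\n", ret);
            fi_freeinfo(hints);
            return 1;
        }

        for (cur = info; cur; cur = cur->next)   /* list the providers matching the hints */
            printf("provider: %s, fabric: %s\n",
                   cur->fabric_attr->prov_name, cur->fabric_attr->name);

        fi_freeinfo(info);
        fi_freeinfo(hints);
        return 0;
    }

An MPI library built on libfabric would then open a fabric, domain, and endpoint from the chosen fi_info before posting sends and receives.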

Biography
Dr. Jeff Squyres is Cisco’s representative to the MPI Forum standards body and is Cisco’s core software developer in the open source Open MPI project. He has worked in the High Performance Computing (HPC) field since his early graduate-student days in the mid-1990s, and is a chapter author of the MPI-2 and MPI-3 standards. Jeff received both a BS in Computer Engineering and a BA in English Literature from the University of Notre Dame in 1994; he received an MS in Computer Science and Engineering from Notre Dame two years later in 1996. After some active duty tours in the military, Jeff received his Ph.D. in Computer Science and Engineering from Notre Dame in 2004. Jeff then worked as a Post-Doctoral research associate at Indiana University, until he joined Cisco in 2006. At Cisco, Jeff is part of the VIC group (Virtual Interface Card, Cisco’s virtualized server NIC) in the larger UCS server group. He works on designing and writing systems-level software for optimized network I/O in HPC and other high-performance applications. Jeff also represents Cisco to several open source software communities and the MPI Forum standards body.

Session 4, part 1: Fault Tolerance
Sep 23 @ 2:30 pm – 3:30 pm

Location: Inria building
Session chair: Brice Goglin

-“Plan B: Interruption of Ongoing MPI Operations to Support Failure Recovery”, Aurélien Bouteiller, George Bosilca and Jack Dongarra

-“Detecting Silent Data Corruption for Extreme-Scale MPI Applications”, Leonardo Arturo Bautista Gomez and Franck Cappello

Coffee break + poster session
Sep 23 @ 3:30 pm – 4:00 pm

Location: Inria building

Session 4, part 2: Fault Tolerance
Sep 23 @ 4:00 pm – 5:00 pm

Location: Inria building
Session chair: Brice Goglin

-“Scalable and Fault Tolerant Failure Detection and Consensus”, Amogh Katti, Giuseppe Di Fatta, Thomas Naughton and Christian Engelmann

-“Sliding Substitution of Failed Nodes”, Atsushi Hori, Kazumi Yoshinaga, Thomas Hérault, Aurélien Bouteiller, George Bosilca and Yutaka Ishikawa

Closing remarks + Misc.
Sep 23 @ 5:00 pm – 5:45 pm

Location: Inria building

Sep 24 (Thu)
MPI Forum
Sep 24 @ 9:00 am – 12:00 pm

Location: Inria building, Ada Lovelace room.

Lunch
Sep 24 @ 12:00 pm – 2:00 pm
MPI Forum
Sep 24 @ 2:00 pm – 6:00 pm

Location: Inria building, Ada Lovelace room.

Sep 25 (Fri)
MPI Forum
Sep 25 @ 9:00 am – 12:00 pm

Location: Inria building, Ada Lovelace room.

Lunch
Sep 25 @ 12:00 pm – 2:00 pm
MPI Forum
Sep 25 @ 2:00 pm – 6:00 pm

Location: Inria building, Ada Lovelace room.