Home

EuroMPI is the preeminent meeting for users, developers, and researchers to interact and discuss new developments and applications of message-passing parallel computing, in particular work in and related to the Message Passing Interface (MPI). The annual meeting has a long, rich tradition and has traditionally been held in European countries. Following the 21st edition, which took place in Japan, the 22nd edition returns to European soil, in Bordeaux, France.

EuroMPI 2015 will continue to focus not only on MPI, but also on extensions and alternative interfaces for high-performance homogeneous, heterogeneous, and hybrid systems, benchmarks, tools, parallel I/O, fault tolerance, and parallel applications using MPI and other interfaces. Through contributed papers, poster presentations, and invited talks, attendees will have the opportunity to share ideas and experiences that further message-passing and related parallel programming paradigms. In addition to the main conference’s technical program, one-day and half-day workshops will be held. The Call for Workshops is announced separately and is also available on the conference page.

Topics of interest for the meeting include, but are not limited to:

  • MPI implementation issues and improvements towards exascale computing, such as support for manycore, GPGPU, and heterogeneous architectures.
  • Extensions to and shortcomings of MPI.
  • Hybrid and heterogeneous programming with MPI and other interfaces.
  • Interaction between message-passing software and hardware, in particular new high-performance architectures.
  • MPI support for data-intensive parallel applications.
  • New MPI-IO mechanisms and I/O stack optimizations.
  • Fault tolerance in message-passing implementations and systems.
  • Performance evaluation of MPI and MPI-based applications.
  • Automatic performance tuning of MPI applications and implementations.
  • Verification of message-passing applications and protocols.
  • Applications using message-passing, in particular in Computational Science and Scientific Computing.
  • Parallel algorithms in the message-passing paradigm.
  • New programming paradigms implemented over MPI, such as hierarchical programming and global address spaces.
  • MPI parallel programming in clouds.
  • Performance of MPI applications in clouds.