Parallel Programming using MPI

This page is under construction and will be updated regularly.

The Message Passing Interface (MPI) is a standardized and widely used communication protocol for parallel computing. It enables processes to exchange messages efficiently across different nodes in a distributed computing environment. By providing a set of library routines usable in languages like C, C++, and Fortran, MPI simplifies the development of scalable and efficient parallel programs.

MPI Basics

  • Introduction to MPI
  • MPI Programming Model
  • First MPI Program
  • Rank and Size
  • Process Identification
  • MPI Error Handling
  • Timing and Performance Measurement
  • MPI Program Compilation
  • Running MPI Programs
  • MPI Standards and Versions

Point-to-Point Communication

  • Send and Receive Basics
  • Message Tags and Wildcards
  • MPI Datatypes
  • Communication Modes
  • Blocking Communication Patterns
  • Non-blocking Communication
  • Wait and Test Operations
  • Persistent Communication
  • Buffering and Buffer Management
  • Probe and Message Inquiry

Collective Communication

  • Collective Communication Basics
  • Barrier Synchronization
  • Broadcast Communication
  • Scatter Operation
  • Gather Operation
  • Allgather Operation
  • Reduce Operation
  • Allreduce Operation
  • Reduce-Scatter Operation
  • Scan Operations
  • Alltoall Communication
  • Variable-Size Collectives

Derived Datatypes

  • Why Derived Datatypes
  • Contiguous Datatype
  • Vector Datatype
  • Indexed Datatype
  • Struct Datatype
  • Datatype Commit and Free
  • Subarray Datatype
  • Datatype Extent and Bounds

Advanced Topics

  • Communicator Basics
  • Communicator Creation
  • Communicator Management
  • Process Groups
  • Cartesian Topologies
  • Cartesian Topology Operations
  • Graph Topologies
  • Inter-Communicators
  • Dynamic Process Management
  • Attribute Caching
  • Thread Safety Levels
  • Hybrid MPI+OpenMP

Performance and Debugging

  • Communication Overhead
  • Reducing Communication
  • Load Balancing
  • Profiling MPI Programs
  • Performance Tools
  • Debugging MPI Programs
  • Debugging Tools
  • Memory Debugging
  • Scalability Analysis
  • Communication Patterns Optimization

MPI-IO and One-Sided Communication

  • MPI-IO Basics
  • File Open and Close
  • Individual File Operations
  • Collective File Operations
  • File Views
  • Non-Contiguous I/O
  • One-Sided Communication Basics
  • RMA Windows
  • RMA Data Transfer
  • RMA Synchronization


Mandar Gurav

Parallel Programmer, Trainer and Mentor




