Message Passing Interface (MPI): MPI_Gather example

by Mandar Gurav

This post talks about the gather operation in MPI using MPI_Gather. MPI_Gather is a collective operation in the Message Passing Interface (MPI) used to collect data from multiple processes into a single (root) process.

Syntax for MPI_Gather

int MPI_Gather(const void *sendbuf, int sendcount, MPI_Datatype sendtype, void *recvbuf, int recvcount, MPI_Datatype recvtype, int root, MPI_Comm comm)

Input Parameters

  • sendbuf : pointer to the data/array to be sent (for example : int *mydata)
  • sendcount : number of elements/values to be sent by each process (for example : 10)
  • sendtype : data type of the elements/values to be sent (for example : MPI_INT)
  • recvcount : number of elements/values to be received from each process, not the total; significant only at the root (for example : 10)
  • recvtype : data type of the elements/values to be received; significant only at the root (for example : MPI_INT)
  • root : rank of the root process that receives the gathered data (for example : 0)
  • comm : communicator (for example : MPI_COMM_WORLD)

Output Parameters

  • recvbuf : pointer to the array in which the gathered data is received; significant only at the root process (for example : int *myarr)
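
Because recvbuf and recvcount are significant only at the root, the receive buffer can be allocated on the root alone. The following is a minimal sketch illustrating this; the names myvalue and all are illustrative, not part of the main example below.

#include <stdio.h>
#include <stdlib.h>
#include "mpi.h"

int main(int argc, char **argv)
{
	int myid, size, i;
	int myvalue;
	int *all = NULL;

	MPI_Init(&argc, &argv);
	MPI_Comm_size(MPI_COMM_WORLD, &size);
	MPI_Comm_rank(MPI_COMM_WORLD, &myid);

	//Each process contributes one value
	myvalue = 10 * myid;

	//recvbuf is used only at the root, so only rank 0 allocates it
	if(myid == 0)
		all = malloc(size * sizeof(int));

	//recvcount = 1 : number of elements received FROM EACH process
	MPI_Gather(&myvalue, 1, MPI_INT, all, 1, MPI_INT, 0, MPI_COMM_WORLD);

	if(myid == 0)
	{
		for(i = 0; i < size; i++)
			printf(" %d", all[i]);
		printf("\n");
		free(all);
	}

	MPI_Finalize();
	return 0;
}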

Example code –

The code is hard-coded to run with 4 processes, but it can easily be generalized to any number of processes; a sketch of such a generalization is shown after the sample output below.

#include"stdio.h"
#include"mpi.h"
#include<stdlib.h>

#define ARRSIZE 12


int main(int argc, char **argv)
{
	int myid, size;
	int i;
	int data[ARRSIZE];
	int send_data[ARRSIZE/4];	
	
	//Initialize MPI environment 
	MPI_Init(&argc,&argv);
	
	//Get total number of processes
	MPI_Comm_size(MPI_COMM_WORLD, &size);
	
	//Get my unique identification among all processes
	MPI_Comm_rank(MPI_COMM_WORLD, &myid);
	
	//Exit the code if the number of processes is not equal to 4
	if(size!=4)
	{
		printf("\n Please use EXACTLY 4 processes!\n");
		MPI_Finalize();
		exit(0);
	}
	
	//Each process initializes its own chunk of ARRSIZE/4 values
	for(i=0;i<ARRSIZE/4;i++)
	{
		send_data[i] = i + 3*myid;			
	}
		
	//print the data
	printf("\n myid : %d : I have %d, %d, and %d.", myid, send_data[0], send_data[1], send_data[2]);	
	
	// Wait for all the processes to print their data
	MPI_Barrier(MPI_COMM_WORLD);
	
	//Collect / Gather the data at root = 0 
	MPI_Gather(&send_data, ARRSIZE/4, MPI_INT, &data, ARRSIZE/4, MPI_INT, 0, MPI_COMM_WORLD);
	
	// Root prints the data
	if(myid==0)
	{
		printf("\nFinal data: ");
		for(i=0;i<ARRSIZE;i++)
		{
			printf(" %d", data[i]);			
		}
		printf("\n");
	}
	//End MPI environment        
	MPI_Finalize();
	return 0;
}

To compile this code, use the following command –

mpicc gather.c
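
If you prefer a named executable instead of the default a.out, the usual -o compiler flag works with mpicc as well (you would then run ./gather instead of ./a.out below) –

mpicc gather.c -o gather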

To execute this code, run the following command –

mpiexec -n 4 ./a.out

The output of this code will be similar to the following. The order of the per-rank lines may differ between runs, because the barrier does not guarantee the order in which output reaches the terminal –


 myid : 1 : I have 3, 4, and 5.
 myid : 3 : I have 9, 10, and 11.
 myid : 2 : I have 6, 7, and 8.
 myid : 0 : I have 0, 1, and 2.
Final data:  0 1 2 3 4 5 6 7 8 9 10 11
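
As promised, here is one way to generalize the example to any number of processes. This is a sketch under the assumption that ARRSIZE is evenly divisible by the process count; the chunk variable and the dynamic allocation are illustrative choices, not part of the original example.

#include <stdio.h>
#include <stdlib.h>
#include "mpi.h"

#define ARRSIZE 12

int main(int argc, char **argv)
{
	int myid, size, i, chunk;
	int *send_data, *data = NULL;

	MPI_Init(&argc, &argv);
	MPI_Comm_size(MPI_COMM_WORLD, &size);
	MPI_Comm_rank(MPI_COMM_WORLD, &myid);

	//Exit if the array cannot be divided evenly among the processes
	if(ARRSIZE % size != 0)
	{
		if(myid == 0)
			printf("\n Please use a process count that divides %d!\n", ARRSIZE);
		MPI_Finalize();
		exit(0);
	}

	//Each process works on ARRSIZE/size elements instead of ARRSIZE/4
	chunk = ARRSIZE / size;
	send_data = malloc(chunk * sizeof(int));

	//Only the root needs the full receive buffer
	if(myid == 0)
		data = malloc(ARRSIZE * sizeof(int));

	//Initialize this process's chunk
	for(i = 0; i < chunk; i++)
		send_data[i] = i + chunk * myid;

	//Gather chunk elements from each process at root = 0
	MPI_Gather(send_data, chunk, MPI_INT, data, chunk, MPI_INT, 0, MPI_COMM_WORLD);

	if(myid == 0)
	{
		printf("\nFinal data: ");
		for(i = 0; i < ARRSIZE; i++)
			printf(" %d", data[i]);
		printf("\n");
		free(data);
	}
	free(send_data);

	MPI_Finalize();
	return 0;
}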

To know more about MPI, visit our dedicated page for MPI here.

If you are new to Parallel Programming / HPC, visit our dedicated page for beginners.
