Message Passing Interface (MPI) : MPI_Allgather example

by Mandar Gurav

This post discusses the MPI_Allgather function in MPI. MPI_Allgather is a collective communication operation in the Message Passing Interface (MPI) that allows every process in a communicator to gather data from all other processes. Unlike MPI_Gather, where only the root process collects the data, with MPI_Allgather every process receives the complete data contributed by all processes.
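Conceptually, MPI_Allgather is equivalent to an MPI_Gather to a root followed by an MPI_Bcast of the gathered buffer from that root, though a single call is usually faster. The following sketch illustrates this equivalence (the variable names nprocs, chunk, and full are assumptions made purely for illustration):

// Assume MPI has been initialized and nprocs holds the result of
// MPI_Comm_size(MPI_COMM_WORLD, &nprocs).
int chunk[3];                                  // 3 elements contributed per process
int *full = malloc(3 * nprocs * sizeof(int)); // holds contributions from all processes

// Two-step equivalent: gather at root 0, then broadcast to everyone
MPI_Gather(chunk, 3, MPI_INT, full, 3, MPI_INT, 0, MPI_COMM_WORLD);
MPI_Bcast(full, 3 * nprocs, MPI_INT, 0, MPI_COMM_WORLD);

// A single collective achieving the same result
MPI_Allgather(chunk, 3, MPI_INT, full, 3, MPI_INT, MPI_COMM_WORLD);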

Syntax for MPI_Allgather

int MPI_Allgather(const void *sendbuf, int sendcount, MPI_Datatype sendtype, void *recvbuf, int recvcount, MPI_Datatype recvtype, MPI_Comm comm)

Input Parameters

  • sendbuf : Pointer to the data/array to be sent (for example : int *mydata)
  • sendcount : number of elements/values to be sent (for example : 10)
  • sendtype : data type of elements/values to be sent (for example : MPI_INT)
  • recvcount : number of elements/values to be received from each process (for example : 10)
  • recvtype : data type of elements/values to be received (for example : MPI_INT)
  • comm : communicator (for example : MPI_COMM_WORLD)

Output Parameters

  • recvbuf : Pointer to the data/array to be received (for example : int *myarr)
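Note that recvcount is the number of elements received from each process, not the total, so recvbuf must be large enough to hold sendcount × (number of processes) elements. A minimal sketch of sizing the receive buffer at run time (sendbuf, sendcount, and recvbuf are illustrative names, not part of the MPI API):

int nprocs;
MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

// One chunk of sendcount elements arrives from every process
int *recvbuf = malloc(sendcount * nprocs * sizeof(int));

MPI_Allgather(sendbuf, sendcount, MPI_INT,
              recvbuf, sendcount /* per process */, MPI_INT,
              MPI_COMM_WORLD);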

Example code –

The code is hard-coded to run with exactly 4 processes; a generalized sketch for an arbitrary number of processes follows the sample output below.

#include"stdio.h"
#include"mpi.h"
#include<stdlib.h>

#define ARRSIZE 12


int main(int argc, char **argv)
{
	int myid, size;
	int i;
	int data[ARRSIZE];
	int send_data[ARRSIZE/4];	
	
	//Initialize MPI environment 
	MPI_Init(&argc,&argv);
	
	//Get total number of processes
	MPI_Comm_size(MPI_COMM_WORLD, &size);
	
	//Get my unique identification among all processes
	MPI_Comm_rank(MPI_COMM_WORLD, &myid);
	
	//Exit the code if the number of processes is not equal to 4
	if(size!=4)
	{
		printf("\n Please use EXACTLY 4 processes!\n");
		MPI_Finalize();
		exit(0);
	}
	
	//Initialize data to some value
	for(i=0;i<ARRSIZE/4;i++)
	{
		send_data[i] = i + 3*myid;			
	}
		
	//print the data
	printf("\n myid : %d : I have %d, %d, and %d.", myid, send_data[0], send_data[1], send_data[2]);	
	
	// Wait for all processes to reach this point (note: a barrier
	// does not strictly guarantee the order of printed output)
	MPI_Barrier(MPI_COMM_WORLD);
	
	//Collect / Gather the data at all processes 
	MPI_Allgather(send_data, ARRSIZE/4, MPI_INT, data, ARRSIZE/4, MPI_INT, MPI_COMM_WORLD);
	
	// All the processes now have the data. The process with myid 2 prints the data.
	if(myid==2)
	{
		printf("\nFinal data: ");
		for(i=0;i<ARRSIZE;i++)
		{
			printf(" %d", data[i]);			
		}		
		printf("\n");
	}
	//End MPI environment        
	MPI_Finalize();
	return 0;
}

To compile this code, use the following command –

mpicc allgather.c

To execute this code, run the following command –

mpiexec -n 4 ./a.out

The output of this code will look similar to the following (the order of the per-process lines may vary from run to run) –


 myid : 1 : I have 3, 4, and 5.
 myid : 3 : I have 9, 10, and 11.
 myid : 0 : I have 0, 1, and 2.
 myid : 2 : I have 6, 7, and 8.
Final data:  0 1 2 3 4 5 6 7 8 9 10 11
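As mentioned above, the example generalizes to any number of processes by sizing the buffers at run time instead of hard-coding them. A sketch of the generalized version (the per-process chunk size of 3 is an arbitrary choice kept from the example above):

#include <stdio.h>
#include <stdlib.h>
#include <mpi.h>

int main(int argc, char **argv)
{
	int myid, size, i;
	int chunk = 3;	// elements contributed by each process

	MPI_Init(&argc, &argv);
	MPI_Comm_size(MPI_COMM_WORLD, &size);
	MPI_Comm_rank(MPI_COMM_WORLD, &myid);

	int *send_data = malloc(chunk * sizeof(int));
	int *data = malloc(chunk * size * sizeof(int));

	for (i = 0; i < chunk; i++)
		send_data[i] = i + chunk * myid;

	// Every process receives chunk elements from every process
	MPI_Allgather(send_data, chunk, MPI_INT, data, chunk, MPI_INT, MPI_COMM_WORLD);

	if (myid == 0)
	{
		printf("Final data:");
		for (i = 0; i < chunk * size; i++)
			printf(" %d", data[i]);
		printf("\n");
	}

	free(send_data);
	free(data);
	MPI_Finalize();
	return 0;
}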

To learn more about MPI, visit our dedicated page for MPI here.

If you are new to Parallel Programming / HPC, visit our dedicated page for beginners.
