Message Passing Interface (MPI): MPI_Scatter example

This post covers the MPI function MPI_Scatter. MPI_Scatter is a collective operation in the Message Passing Interface (MPI) used in parallel programming: it takes an array on one designated root process and distributes equal-sized chunks of it to every process in a communicator, including the root itself.

Syntax for MPI_Scatter

int MPI_Scatter(const void *sendbuf, int sendcount, MPI_Datatype sendtype, void *recvbuf, int recvcount, MPI_Datatype recvtype, int root, MPI_Comm comm);

Input Parameters

  • sendbuf : Pointer to the data/array to be sent; significant only at the root process (for example : int *mydata)
  • sendcount : number of elements sent to each process, not the total (for example : 3)
  • sendtype : data type of the elements to be sent (for example : MPI_INT)
  • recvcount : number of elements received by each process (for example : 3)
  • recvtype : data type of the elements to be received (for example : MPI_INT)
  • root : rank of the root process (for example : 0)
  • comm : communicator (for example : MPI_COMM_WORLD)

Output Parameters

  • recvbuf : Pointer to the buffer in which each process stores its received chunk (for example : int *receive_data)
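
Note that sendcount and recvcount are per-process counts, not totals: the root sends sendcount elements to each rank. As a minimal sketch (the data and chunk names are illustrative), scattering a 12-element array across 4 processes looks like this:

int data[12];   //full array; contents significant only at the root
int chunk[3];   //12 elements / 4 processes = 3 elements per rank

//Every rank, including the root, receives 3 ints into chunk
MPI_Scatter(data, 3, MPI_INT, chunk, 3, MPI_INT, 0, MPI_COMM_WORLD);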

Example code

The code is hard-coded to run with 4 processes, but it can easily be generalized to any number of processes (see the sketch after the sample output below).

#include"stdio.h"
#include"mpi.h"
#include<stdlib.h>

#define ARRSIZE 12


int main(int argc, char **argv)
{
	int myid, size;
	int i;
	int data[ARRSIZE];
	int receive_data[ARRSIZE/4];	
	
	//Initialize MPI environment 
	MPI_Init(&argc,&argv);
	
	//Get total number of processes
	MPI_Comm_size(MPI_COMM_WORLD, &size);
	
	//Get my unique identification among all processes
	MPI_Comm_rank(MPI_COMM_WORLD, &myid);
	
	//Exit the code if the number of processes is not equal to 4
	if(size!=4)
	{
		//Only the root prints the error message
		if(myid==0)
			printf("\n Please use EXACTLY 4 processes!\n");
		MPI_Finalize();
		exit(1);
	}
	
	//If root
	if(myid==0)
	{
		//Initialize data to some value
		for(i=0;i<ARRSIZE;i++)
		{
			data[i] = i;			
		}
		
		//print the data
		printf("\nInitial data: ");
		for(i=0;i<ARRSIZE;i++)
		{
			printf(" %d", data[i]);			
		}
	}
	//Distribute / Scatter the data from root = 0
	MPI_Scatter(&data, ARRSIZE/4, MPI_INT, &receive_data, ARRSIZE/4, MPI_INT, 0, MPI_COMM_WORLD);
	
	//print the data
	printf("\n myid : %d : I have received %d, %d, and %d.", myid, receive_data[0], receive_data[1], receive_data[2]);			

	//End MPI environment        
	MPI_Finalize();
}

To compile this code, use the following command –

mpicc scatter.c

To execute this code, run the following command –

mpiexec -n 4 ./a.out
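
If you prefer a named binary, the equivalent is –

mpicc -o scatter scatter.c
mpiexec -n 4 ./scatter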

Output of this code will be similar to the following (the order of the per-rank lines may vary between runs, since each process prints independently) –

Initial data:  0 1 2 3 4 5 6 7 8 9 10 11
 myid : 0 : I have received 0, 1, and 2.
 myid : 1 : I have received 3, 4, and 5.
 myid : 2 : I have received 6, 7, and 8.
 myid : 3 : I have received 9, 10, and 11.
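
As noted above, the example generalizes easily to any number of processes. The following is a minimal sketch of such a generalized version, assuming ARRSIZE is evenly divisible by the number of processes; the chunk and chunk_size names are illustrative.

#include <stdio.h>
#include <stdlib.h>
#include "mpi.h"

#define ARRSIZE 12

int main(int argc, char **argv)
{
	int myid, size, i;
	int data[ARRSIZE];
	int *chunk;
	int chunk_size;

	MPI_Init(&argc, &argv);
	MPI_Comm_size(MPI_COMM_WORLD, &size);
	MPI_Comm_rank(MPI_COMM_WORLD, &myid);

	//Assumption: ARRSIZE must be evenly divisible by the number of processes
	if(ARRSIZE % size != 0)
	{
		if(myid == 0)
			printf("\n Number of processes must divide %d evenly!\n", ARRSIZE);
		MPI_Finalize();
		return 1;
	}

	//Each process receives ARRSIZE/size elements
	chunk_size = ARRSIZE / size;
	chunk = (int *)malloc(chunk_size * sizeof(int));

	//Only the root initializes the full array
	if(myid == 0)
		for(i = 0; i < ARRSIZE; i++)
			data[i] = i;

	//Distribute chunk_size elements to every rank, including the root
	MPI_Scatter(data, chunk_size, MPI_INT, chunk, chunk_size, MPI_INT, 0, MPI_COMM_WORLD);

	printf("\n myid : %d : I have received %d elements starting with %d.", myid, chunk_size, chunk[0]);

	free(chunk);
	MPI_Finalize();
	return 0;
}

Run it with any process count that divides ARRSIZE evenly, for example mpiexec -n 3 ./a.out or mpiexec -n 6 ./a.out.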

To know more about MPI, visit our dedicated page for MPI here.

If you are new to Parallel Programming / HPC, visit our dedicated page for beginners.
