This post covers the reduction operation in MPI using MPI_Allreduce. MPI_Allreduce is a collective communication operation in the Message Passing Interface (MPI) that combines values from all processes in a communicator and distributes the result back to every process.
Syntax for MPI_Allreduce
int MPI_Allreduce(const void *sendbuf, void *recvbuf, int count, MPI_Datatype datatype, MPI_Op op, MPI_Comm comm)
Input Parameters
- sendbuf : Pointer to the data/array to be sent (for example : int *mydata)
- count : Number of elements/values to be reduced (for example : 10)
- datatype : Data type of the elements to be reduced (for example : MPI_INT)
- op : Reduction operation to be performed (for example : MPI_SUM)
- comm : Communicator over which the reduction is performed (for example : MPI_COMM_WORLD)
Output Parameters
- recvbuf : Pointer to the buffer that receives the reduced result on every process (for example : int *myarr)
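To illustrate how these parameters fit together, here is a minimal sketch (not part of the original example) that reduces a 10-element array element-wise. The names mydata and myarr are only the illustrative names used above, and the fragment assumes MPI_Init has already been called.
int mydata[10], myarr[10];
int i;
for (i = 0; i < 10; i++)
    mydata[i] = i;   // each process fills its own send buffer
// Element-wise sum across all processes: myarr[i] holds the sum of mydata[i] over all ranks.
MPI_Allreduce(mydata, myarr, 10, MPI_INT, MPI_SUM, MPI_COMM_WORLD);
// After the call, every process holds the same reduced values in myarr.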
Example code:
The example below computes the sum of all process ranks. The run command shown later uses 4 processes, but the code works unchanged with any number of processes.
#include"stdio.h"
#include"mpi.h"
int main(int argc, char **argv)
{
int myid, size;
int sum;
sum = 100;
MPI_Init(&argc,&argv);
MPI_Comm_size(MPI_COMM_WORLD, &size);
MPI_Comm_rank(MPI_COMM_WORLD, &myid);
// Calculate the sum of all process IDs.
MPI_Allreduce(&myid, &sum, 1, MPI_INT, MPI_SUM, MPI_COMM_WORLD);
printf("\n myid = %d, sum = %d", myid, sum);
MPI_Finalize();
}
To compile this code, use the following command:
mpicc allreduce.c
To execute this code, run the following command:
mpiexec -n 4 ./a.out
The output with 4 processes will be similar to the following (line order may vary between runs):
myid = 0, sum = 6
myid = 1, sum = 6
myid = 2, sum = 6
myid = 3, sum = 6
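A common variation, not shown in the example above, is to reduce in place so that a separate send buffer is not needed. The sketch below assumes the same setup as the example (MPI initialized, myid holding the rank) and passes MPI_IN_PLACE as the send buffer, so sum supplies the input and receives the result.
int sum = myid;   // each process contributes its own rank
MPI_Allreduce(MPI_IN_PLACE, &sum, 1, MPI_INT, MPI_SUM, MPI_COMM_WORLD);
// sum now holds the total of all ranks on every process.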
To learn more about MPI, visit our dedicated MPI page here.
If you are new to Parallel Programming / HPC, visit our dedicated page for beginners.