Lukas Beran

Welcome to my blog! If you're looking for IT tutorials, hints or tips, you're in the right place. You will find mostly articles on Microsoft products and technologies - operating systems, servers, virtualization, networking, management, and also the cloud. Sometimes I add other interesting things as well.

July 2013


How to create MPI tasks

Lukas Beran

In the last article I published a guide on how to create a computing cluster based on Microsoft HPC. In today's article I will show you how to create MPI tasks that can run on this cluster.

MPI specification

Computing clusters use the Message Passing Interface (MPI) specification, which defines a protocol of the same name for parallel computing in computing clusters.

The final specification of MPI-1.0 was published in May 1994; the current version, MPI-3.0, was approved in September 2012.

MPI was originally designed for architectures with distributed memory. As trends changed, systems with shared memory were connected by networks into hybrid distributed-memory systems. MPI supports all of these architectures - distributed memory, shared memory and hybrid scenarios.

In terms of the ISO/OSI model, the protocol belongs to the fifth (session) layer, and most implementations use the TCP protocol. Data are moved from the address space of one process to the address space of another process by cooperative operations of both processes. The Message Passing Interface API has language-specific bindings; the most commonly used languages are C, C++, Java, Python and Fortran.

MPI applications

In the following part I will briefly describe the basic behavior of MPI applications, with a focus on the C language.

Header files

All MPI applications have to include the mpi.h header file.

MPI calls

MPI routines use the prefix MPI_.

Communicators and groups

MPI uses objects called communicators and groups to define collections of processes that can communicate with each other. Most MPI routines require a communicator to be specified as one of their arguments.


Within a communicator, each process has a unique integer identifier, called a rank, which is assigned to the process by the system during initialization. Ranks start at 0 and are contiguous. Ranks are used to identify the source and destination of messages.


Most MPI routines provide a return/error code; in C it is the return value of the function.

Example of MPI application

This example computes the number π in C using MPI.

Header files, the main function, and variable declarations.

MPI initialization. This function must be called before any other MPI function and may appear only once in an app.

Returns the total number of MPI processes in the specified communicator MPI_COMM_WORLD.

Returns the rank of the calling MPI process in the specified communicator.

Argument validation and conversion of the second argument to an integer.

Sending the data (the value n) to all processes in the group MPI_COMM_WORLD. The source is the process with rank 0.

Collecting the partial results and combining them.

Termination of the MPI environment. This must be the last MPI routine called.

Here is the whole code:

More information and examples can be found at the Livermore Computing Center.

My primary focus is the security of identities, devices and data in the cloud using Microsoft services, technologies and tools.
