PARALLEL AND DISTRIBUTED COMPUTING MESSAGE PASSING INTERFACE (MPI)
MPI INTRODUCTION
 In a distributed-memory system, a key cost is communication: the time needed to exchange a given amount of data between processors.
 MPI is a standardized means of exchanging messages between multiple computers running a parallel program across distributed memory.
MPI (MESSAGE PASSING INTERFACE)
 MPI is not a language; all MPI operations are expressed as functions or subroutines.
 The MPI standard defines the syntax and semantics of these operations.
 An MPI program consists of autonomous processes, each able to execute its own code, in the sense of MIMD.
MPI (MESSAGE PASSING INTERFACE)
 MPI provides at least two operations: send(message) and receive(message).
 A message sent by a process can be of either fixed or variable size.
 Fixed size: the system-level implementation is straightforward, but programming becomes more difficult.
 Variable size: the system-level implementation is more difficult, but programming becomes simpler.
MPI (MESSAGE PASSING INTERFACE)
 If processes P and Q want to communicate, they send messages to and receive messages from each other.
 Therefore a communication link is needed between them.
HELLO WORLD

#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    printf("Hello world\n");
    MPI_Finalize();
    return 0;
}
 The header file mpi.h must be included to compile MPI code.
 From MPI_Init() onward, processes can collaborate and send/receive messages until MPI_Finalize().
 Finalizing frees all the resources reserved by MPI.
BASIC DATA TYPES RECOGNIZED BY MPI

MPI DATATYPE HANDLE    C DATATYPE
MPI_INT                int
MPI_SHORT              short
MPI_LONG               long
MPI_FLOAT              float
MPI_DOUBLE             double
MPI_CHAR               char
 MPI also provides routines that let the process determine its process ID, as well as the number of processes that have been created.
 MPI_Comm_size(): returns the total number of processes in a communicator.
 MPI_Comm_rank(): returns the rank of the process that called the function.
 The MPI rank is used to identify a particular process; it is an integer ranging from 0 to n-1, where n is the number of processes.
 It is necessary for a process to know its rank.
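A minimal sketch that combines both calls with the hello-world program; variable names are illustrative:

#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv) {
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_size(MPI_COMM_WORLD, &size);  /* total number of processes */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);  /* rank of this process: 0 .. size-1 */
    printf("Hello from rank %d of %d\n", rank, size);
    MPI_Finalize();
    return 0;
}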
MPI COMMUNICATOR
 A process group and a context together form an MPI communicator.
 An MPI communicator holds a group of processes that can communicate with each other.
 MPI_COMM_WORLD is the default communicator, containing all processes available for use.
SEND AND RECEIVE IN MPI
 Process A decides to send a message to process B.
 Process A packs the information into a buffer and sends it.
 Process A receives an acknowledgement that the data has been transmitted.
SYNTAX OF THE SEND OPERATION

MPI_Send(buf, count, datatype, dest, tag, comm)

MPI_Send may not complete until a matching MPI_Recv has been identified.
 buf - pointer to the send buffer (the data to send)
 count - number of data items (non-negative)
 datatype - type of the data
 dest - rank of the receiving process
 tag - message tag
 comm - communicator (handle)
SYNTAX OF THE RECEIVE OPERATION

MPI_Recv(buf, count, datatype, source, tag, comm, status)

MPI_Recv will not complete until a matching message has been received.
 buf - pointer to the receive buffer
 count - maximum number of data items (non-negative)
 datatype - type of the data
 source - rank of the sending process
 tag - message tag
 comm - communicator (handle)
 status - contains further information about the received message
MPI PROGRAM: RANK 0 SENDS A MESSAGE TO RANK 1

#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv) {
    int process_Rank, size_Of_Cluster, message_Item;

    MPI_Init(&argc, &argv);
    MPI_Comm_size(MPI_COMM_WORLD, &size_Of_Cluster);
    MPI_Comm_rank(MPI_COMM_WORLD, &process_Rank);

    if (process_Rank == 0) {
        message_Item = 42;
        MPI_Send(&message_Item, 1, MPI_INT, 1, 1, MPI_COMM_WORLD);
        printf("Message Sent: %d\n", message_Item);
    }
    else if (process_Rank == 1) {
        MPI_Recv(&message_Item, 1, MPI_INT, 0, 1, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        printf("Message Received: %d\n", message_Item);
    }

    MPI_Finalize();
    return 0;
}
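The program can be compiled with the MPI compiler wrapper and launched with at least two processes; exact commands depend on the MPI installation, and the file name below is only illustrative:

mpicc send_recv.c -o send_recv    # compile with the MPI wrapper around the C compiler
mpiexec -n 2 ./send_recv          # launch two processes (ranks 0 and 1)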
COLLECTIVE MPI COMMUNICATION
 MPI collective operations are called by all processes in a communicator.
 Some of the collective operations are:
 MPI_BARRIER
 MPI_BCAST
 MPI_SCATTER
 MPI_GATHER
MPI BARRIERS
 Like many other synchronization utilities, MPI_Barrier is a process lock that holds each process at a certain line of code until all processes have reached that line.
 MPI_Barrier can be called as:
 MPI_Barrier(MPI_Comm comm)
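A small sketch of barrier use, assuming rank was obtained earlier with MPI_Comm_rank and do_local_work() is a hypothetical helper:

    do_local_work(rank);                           /* each rank does its own work first */
    MPI_Barrier(MPI_COMM_WORLD);                   /* wait here until every rank arrives */
    printf("rank %d passed the barrier\n", rank);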
MPI_BCAST
 Implements a one-to-all broadcast operation.
 The root process sends its data to all other processes.
 MPI_BCAST(inbuf, incnt, intype, root, comm)
 inbuf: buffer containing the input data
 incnt: number of data items
 intype: type of the data
 root: rank of the broadcasting (root) process
 comm: communicator
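A minimal broadcast sketch, assuming rank was obtained earlier with MPI_Comm_rank:

    int value = 0;
    if (rank == 0) value = 100;                        /* only the root has the data initially */
    MPI_Bcast(&value, 1, MPI_INT, 0, MPI_COMM_WORLD);  /* afterwards every rank sees value == 100 */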
MPI_GATHER
 All-to-one operation, also called by all processes in the communicator.
 Gathers data from the participating processes into a single structure.
MPI_SCATTER
 Breaks a structure into portions and distributes those portions to other processes.
 Inverse of MPI_Gather.
 The data is scattered to the processes in equal parts.
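A sketch combining scatter and gather: the root hands one integer to each rank, every rank doubles its value, and the root collects the results. It assumes rank and size were obtained earlier, that <stdlib.h> is included for malloc, and that the fragment runs inside main after MPI_Init:

    int *sendbuf = NULL, *resultbuf = NULL, myval;
    if (rank == 0) {                                  /* only the root owns the full arrays */
        sendbuf = malloc(size * sizeof(int));
        resultbuf = malloc(size * sizeof(int));
        for (int i = 0; i < size; i++) sendbuf[i] = i;
    }
    MPI_Scatter(sendbuf, 1, MPI_INT, &myval, 1, MPI_INT, 0, MPI_COMM_WORLD);
    myval *= 2;                                       /* each process works on its portion */
    MPI_Gather(&myval, 1, MPI_INT, resultbuf, 1, MPI_INT, 0, MPI_COMM_WORLD);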
COLLECTIVE MPI DATA MANIPULATIONS
 MPI provides a set of operations that perform simple manipulations on the transferred data.
 These manipulations are based on the data reduction paradigm, which reduces the data to a smaller set of values.
COLLECTIVE MPI DATA MANIPULATIONS
 MPI_MAX, MPI_MIN: return the maximum or minimum of the data items.
 MPI_SUM, MPI_PROD: return the sum or product of all data items.
 MPI_LAND, MPI_LOR, MPI_BAND, MPI_BOR: return the logical or bitwise AND/OR across the data.
MPI REDUCTION
 The MPI operations that implement all kinds of data reduction are:
 MPI_Reduce: works like MPI_Gather followed by the manipulation (reduction) operation on the root process.
MPI REDUCTION
 MPI_Allreduce: works like MPI_Reduce followed by MPI_Bcast.
 The final result is then available to all processes.
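A sum-reduction sketch showing both calls, assuming rank was obtained earlier; each rank contributes one integer:

    int local = rank + 1, total = 0;
    /* MPI_Reduce: the sum ends up only on the root (rank 0) */
    MPI_Reduce(&local, &total, 1, MPI_INT, MPI_SUM, 0, MPI_COMM_WORLD);
    /* MPI_Allreduce: the same sum ends up on every rank */
    MPI_Allreduce(&local, &total, 1, MPI_INT, MPI_SUM, MPI_COMM_WORLD);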
POINT-TO-POINT COMMUNICATION (PING-PONG)
 Ping-pong communication is point-to-point communication in which two processes send a message back and forth between each other; it can be written with blocking or non-blocking send/receive calls.
 A ping-pong program is started by using the mpiexec command.
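A minimal blocking ping-pong sketch between ranks 0 and 1 (assumes rank was obtained earlier; launch with at least two processes, e.g. mpiexec -n 2):

    int counter = 0;
    if (rank < 2) {                           /* only ranks 0 and 1 play ping-pong */
        int partner = 1 - rank;
        for (int i = 0; i < 10; i++) {
            if (rank == i % 2) {              /* alternate which rank sends each round */
                counter++;
                MPI_Send(&counter, 1, MPI_INT, partner, 0, MPI_COMM_WORLD);
            } else {
                MPI_Recv(&counter, 1, MPI_INT, partner, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            }
        }
    }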
