vicesandy/MPIprogramming
Implement a distributed linear search algorithm in a master-slave configuration. The master has a local array of integers (of length SIZE) which it partitions into chunks and distributes among the n slave nodes. If SIZE is not divisible by n, the remaining elements are sent to the last slave (i.e., the one with rank = n). The master also broadcasts a list of integers to all the slaves - this array is the query list. For each element e in the query list, each slave searches the chunk it received from the master to determine whether e exists in that chunk. At the end of the search, each slave sends back to the master only those elements of the query list that it found in its chunk.

The program flow should follow this sequence:

1. Master broadcasts the query list to all the slave nodes.
2. Master then issues n non-blocking receive calls and waits, using MPI_Waitall(), for an ACK message from each slave indicating that it has received the query list.
3. On receiving all ACKs from the slave nodes, the master sends to each slave (using a blocking MPI send) a chunk of size = SIZE/n of the search array. Slave i (1 <= i < n) receives the chunk with start_index = (i-1) * size and end_index = start_index + size - 1, where size = SIZE/n. Slave n receives size + SIZE % n elements.
4. Master then issues n non-blocking receive calls and waits for a response from each slave using either MPI_Waitall() or MPI_Waitany() [MPI_Waitany() preferred over MPI_Waitall()].
5. Each slave sends, using a blocking MPI send, ONLY the list of integers from the query list that it finds in the chunk received from the master.
6. Master then prints out the list of elements found by each slave, one line per slave.

NOTE: You will need to use MPI_Probe() and MPI_Get_count() to determine the number of elements received (at both the slaves and the master). Also note that MPI_Status (not MPI_Request) is a struct with a field named MPI_SOURCE that identifies which node sent a message.
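The chunk-boundary arithmetic from step 3 can be checked in isolation before any MPI code is written. The helper names below (chunk_start, chunk_count) are illustrative, not part of the starter code:

```c
/* Chunk boundaries for slave i (1-based), given SIZE elements and n slaves.
 * Slaves 1..n-1 each get SIZE/n elements; slave n additionally takes the
 * SIZE % n leftover elements, as required by the assignment. */
static int chunk_start(int i, int size_total, int n) {
    return (i - 1) * (size_total / n);
}

static int chunk_count(int i, int size_total, int n) {
    int base = size_total / n;
    return (i == n) ? base + size_total % n : base;
}
```

For example, with SIZE = 100 and n = 3 slaves, slaves 1 and 2 receive 33 elements starting at indices 0 and 33, and slave 3 receives the remaining 34 elements starting at index 66.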
MPI_ANY_TAG and MPI_ANY_SOURCE are two other useful constants, defined by the MPI standard and available in Open MPI. A starting code snippet will be provided so that everyone uses a uniform message tag and array sizes.