Commit dbbf645f authored by Chao Zhan's avatar Chao Zhan
add more MPI examples

PROG = reduction
OBJECTS = reduction.o

CC = mpicc
CFLAGS = -Wall -std=c11
CFLAGS += -I.              # add the current directory to the include path

$(PROG): $(OBJECTS)        # link the object files into a binary
	$(CC) $(CFLAGS) $^ -o $@

.PHONY: run
run: $(PROG)               # build and run the program
	mpirun ./$(PROG)

$(OBJECTS): %.o: %.c       # compile the source files into object files
	$(CC) $(CFLAGS) -c $<

.PHONY: clean
clean:                     # remove the object files and the binary
	rm -f $(OBJECTS) $(PROG)
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

#define SCATTER_NUM 10

int main(int argc, char **argv) {
  MPI_Init(&argc, &argv);

  // Get the number of processes
  int num_processes;
  MPI_Comm_size(MPI_COMM_WORLD, &num_processes);

  int *sendbuf = malloc(SCATTER_NUM * sizeof(int));
  int *recvbuf = malloc(SCATTER_NUM * sizeof(int));

  // Get the rank of the process
  int rank;
  MPI_Comm_rank(MPI_COMM_WORLD, &rank);

  for (int i = 0; i < SCATTER_NUM; i++) {
    sendbuf[i] = i + 1;
  }

  // Get the name of the processor
  char processor_name[MPI_MAX_PROCESSOR_NAME];
  int name_len;
  MPI_Get_processor_name(processor_name, &name_len);

  // Element-wise sum of every rank's sendbuf, delivered to rank 0
  MPI_Reduce(sendbuf, recvbuf, SCATTER_NUM, MPI_INT, MPI_SUM, 0, MPI_COMM_WORLD);

  if (rank == 0) {
    printf("Process %d of %d on %s received with reduction: [%d] = { ", rank, num_processes,
           processor_name, SCATTER_NUM);
    for (int i = 0; i < SCATTER_NUM; i++) {
      printf("%d, ", recvbuf[i]);
    }
    printf("}\n");
  } else {
    printf("Process %d of %d on %s sent: sendbuf[%d] = { ", rank, num_processes,
           processor_name, SCATTER_NUM);
    for (int i = 0; i < SCATTER_NUM; i++) {
      printf("%d, ", sendbuf[i]);
    }
    printf("}\n");
  }

  free(sendbuf);
  free(recvbuf);

  // Finalize the MPI environment. No more MPI calls can be made after this
  MPI_Finalize();
  return 0;
}
New binary images added: slides/images/MPI-all-to-all.png (62.7 KiB), slides/images/MPI-gather-to-all.png (60.2 KiB), slides/images/MPI-global-reduction.png (73.9 KiB)
@@ -411,3 +411,70 @@ MPI_Gather (void *sendbuf, int sendcount, MPI_Datatype sendtype,
- The opposite operation of **MPI_Scatter**
- root also receives one data chunk from itself
- data chunks are stored in increasing order of the sender’s rank
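For illustration, a minimal MPI_Gather sketch, not part of the original slide; the `CHUNK` constant and buffer names are assumptions:

```c
// Minimal MPI_Gather sketch: every rank sends CHUNK ints, the root collects
// them in rank order. CHUNK and the buffer names are illustrative.
#include <mpi.h>
#include <stdlib.h>

#define CHUNK 4

int main(int argc, char **argv) {
  MPI_Init(&argc, &argv);
  int rank, size;
  MPI_Comm_rank(MPI_COMM_WORLD, &rank);
  MPI_Comm_size(MPI_COMM_WORLD, &size);

  int sendbuf[CHUNK];
  for (int i = 0; i < CHUNK; i++)
    sendbuf[i] = rank * CHUNK + i;      // each rank contributes its own chunk

  int *recvbuf = NULL;
  if (rank == 0)                        // only the root needs a receive buffer
    recvbuf = malloc(size * CHUNK * sizeof(int));

  // recvcount is the count received from *each* rank, not the total
  MPI_Gather(sendbuf, CHUNK, MPI_INT, recvbuf, CHUNK, MPI_INT, 0, MPI_COMM_WORLD);

  free(recvbuf);
  MPI_Finalize();
}
```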
---
title: Gather-to-All
---
## Gather-to-All
Collect chunks of data from all ranks in all ranks:
```c
MPI_Allgather (void *sendbuf, int sendcount, MPI_Datatype sendtype,
void *recvbuf, int recvcount, MPI_Datatype recvtype, MPI_Comm comm)
```
<div class="container flex justify-center mt-5">
<img src="/images/MPI-gather-to-all.png" class="block w-lg"/>
</div>
### Notes
- each rank distributes its **sendbuf** to every rank in the communicator
- almost equivalent to **MPI_Gather** to a root followed by **MPI_Bcast** from that root
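A rough sketch (buffer names and the fixed-size receive buffer are assumptions): each rank contributes one value and every rank receives the full gathered array:

```c
// Minimal MPI_Allgather sketch: every rank contributes one int and every rank
// ends up with the complete gathered array, ordered by sender rank.
#include <mpi.h>

int main(int argc, char **argv) {
  MPI_Init(&argc, &argv);
  int rank, size;
  MPI_Comm_rank(MPI_COMM_WORLD, &rank);
  MPI_Comm_size(MPI_COMM_WORLD, &size);

  int sendval = rank * rank;        // this rank's contribution
  int recvbuf[64];                  // assumes at most 64 ranks, for brevity

  // recvcount is the count received from *each* rank (1 here), not the total
  MPI_Allgather(&sendval, 1, MPI_INT, recvbuf, 1, MPI_INT, MPI_COMM_WORLD);

  // every rank now holds recvbuf[0..size-1] = {0, 1, 4, 9, ...}
  MPI_Finalize();
}
```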
---
title: All-to-All
---
## All-to-All
Combined scatter and gather operation:
```c
MPI_Alltoall (void *sendbuf, int sendcount, MPI_Datatype sendtype,
void *recvbuf, int recvcount, MPI_Datatype recvtype, MPI_Comm comm)
```
<div class="container flex justify-center mt-5">
<img src="/images/MPI-all-to-all.png" class="block w-lg"/>
</div>
### Notes
- a kind of global chunked transpose
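A minimal sketch of the transpose effect (buffer layout is an assumption): rank i's j-th element ends up as rank j's i-th element:

```c
// Minimal MPI_Alltoall sketch with one int per destination rank:
// rank i's sendbuf[j] arrives in rank j's recvbuf[i] (a chunked transpose).
#include <mpi.h>
#include <stdlib.h>

int main(int argc, char **argv) {
  MPI_Init(&argc, &argv);
  int rank, size;
  MPI_Comm_rank(MPI_COMM_WORLD, &rank);
  MPI_Comm_size(MPI_COMM_WORLD, &size);

  int *sendbuf = malloc(size * sizeof(int));
  int *recvbuf = malloc(size * sizeof(int));
  for (int j = 0; j < size; j++)
    sendbuf[j] = rank * 100 + j;    // tag each element with the sender's rank

  // sendcount/recvcount are per destination/source rank (1 here)
  MPI_Alltoall(sendbuf, 1, MPI_INT, recvbuf, 1, MPI_INT, MPI_COMM_WORLD);

  free(sendbuf);
  free(recvbuf);
  MPI_Finalize();
}
```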
---
title: Global Reduction
---
## Global Reduction
Perform an arithmetic reduction operation while gathering data:
```c
MPI_Reduce (void *sendbuf, void *recvbuf, int count,
MPI_Datatype datatype, MPI_Op op, int root, MPI_Comm comm)
```
<div class="container flex justify-center mt-5">
<img src="/images/MPI-global-reduction.png" class="block w-sm"/>
</div>
### Notes
- Result is computed **in- or out-of-order** depending on the operation
- **All predefined operations are associative and commutative**
- **Beware of rounding effects on floats**: floating-point addition is not associative, so results can depend on the reduction order
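A minimal sketch of that caveat (the per-rank values are illustrative): summing doubles with MPI_Reduce may differ in the last bits from a serial sum, because the combination order is not fixed:

```c
// Minimal MPI_Reduce sketch on doubles: the global sum can vary slightly with
// the reduction order, since floating-point addition is not associative.
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
  MPI_Init(&argc, &argv);
  int rank;
  MPI_Comm_rank(MPI_COMM_WORLD, &rank);

  double local = 1.0 / (rank + 1);   // illustrative per-rank value
  double global = 0.0;

  // combine all local values into a single result on rank 0
  MPI_Reduce(&local, &global, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);

  if (rank == 0)
    printf("global sum = %.17g\n", global);
  MPI_Finalize();
}
```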