
c - Successive calls to MPI_Scatter with the same communicator

Reposted. Author: 太空宇宙. Updated: 2023-11-04 03:32:48

I am trying to scatter two different, independent arrays from rank 0 to all other ranks, over the same communicator, using the non-blocking version of the communication.

Something along these lines:

// do some stuff with the arrays here...
MPI_Iscatterv(array1, partial_size1, displs1,
              MPI_DOUBLE, local1, partial_size1,
              MPI_DOUBLE, 0, some_communicator, &request);
MPI_Iscatterv(array2, partial_size2, displs2,
              MPI_DOUBLE, local2, partial_size2,
              MPI_DOUBLE, 0, some_communicator, &request);
// do some stuff where none of the arrays is needed...
MPI_Wait(&request, &status);
// do stuff with the arrays...

So... is it possible (or rather, is it guaranteed to always be error-free) to make two successive calls to MPI_Iscatterv with the same communicator, or could this affect the result by mixing up the messages from the two scatters, since there are no tags?

Best Answer

Yes, according to the MPI standard it is possible to have several non-blocking collective operations outstanding at once. In particular, see Section 5.12, Nonblocking Collective Operations, on page 197:

Multiple nonblocking collective operations can be outstanding on a single communicator. If the nonblocking call causes some system resource to be exhausted, then it will fail and generate an MPI exception. Quality implementations of MPI should ensure that this happens only in pathological cases. That is, an MPI implementation should be able to support a large number of pending nonblocking operations.

Nevertheless, make sure that successive calls to MPI_Iscatterv() use different requests. The function MPI_Waitall() checks the completion of multiple non-blocking operations:

MPI_Request requests[2];
MPI_Iscatterv(..., &requests[0]);
MPI_Iscatterv(..., &requests[1]);
MPI_Waitall(2, requests, ...);

Sample code showing how it can be done:

#include <stdio.h>
#include <stdlib.h>
#include "mpi.h"
#include <math.h>

int main(int argc, char *argv[]) {

    MPI_Request requests[42];

    MPI_Init(&argc, &argv);

    int size, rank;
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    int version, subversion;
    MPI_Get_version(&version, &subversion);

    if (rank == 0) { printf("MPI version %d.%d\n", version, subversion); }

    // dimensions
    int nbscatter = 5;
    int nlocal = 2;
    double *array = NULL;
    int i, j, k;

    // build a 2D array of nbscatter lines and nlocal*size columns on the root process
    if (rank == 0) {
        array = malloc(nlocal * nbscatter * size * sizeof(double));
        if (array == NULL) { printf("malloc failure\n"); }

        for (i = 0; i < nbscatter; i++) {
            for (j = 0; j < size * nlocal; j++) {
                array[i * size * nlocal + j] = j + 0.01 * i;
                printf("%lf ", array[i * size * nlocal + j]);
            }
            printf("\n");
        }
    }

    // on each process, a 2D array of nbscatter lines and nlocal columns
    double *arrayloc = malloc(nlocal * nbscatter * sizeof(double));
    if (arrayloc == NULL) { printf("malloc failure2\n"); }

    // counts and displacements
    int *displs = malloc(nbscatter * size * sizeof(int));
    if (displs == NULL) { printf("malloc failure\n"); }
    int *scounts = malloc(nbscatter * size * sizeof(int));
    if (scounts == NULL) { printf("malloc failure\n"); }

    for (i = 0; i < nbscatter; i++) {
        for (j = 0; j < size; j++) {
            displs[i * size + j] = j * nlocal;
            scounts[i * size + j] = nlocal;
        }

        // scatter the lines, one request per scatter
        if (rank == 0) {
            MPI_Iscatterv(&array[i * nlocal * size], &scounts[i * size], &displs[i * size], MPI_DOUBLE,
                          &arrayloc[i * nlocal], nlocal, MPI_DOUBLE, 0, MPI_COMM_WORLD, &requests[i]);
        } else {
            MPI_Iscatterv(NULL, &scounts[i * size], &displs[i * size], MPI_DOUBLE,
                          &arrayloc[i * nlocal], nlocal, MPI_DOUBLE, 0, MPI_COMM_WORLD, &requests[i]);
        }
    }

    MPI_Status status[nbscatter];
    if (MPI_Waitall(nbscatter, requests, status) != MPI_SUCCESS) {
        printf("MPI_Waitall() failed\n");
    }

    if (rank == 0) {
        free(array);
    }
    free(displs);
    free(scounts);

    // print the local array, containing the scattered columns
    for (k = 0; k < size; k++) {
        if (rank == k) {
            printf("on rank %d\n", k);
            for (i = 0; i < nbscatter; i++) {
                for (j = 0; j < nlocal; j++) {
                    printf("%lf ", arrayloc[i * nlocal + j]);
                }
                printf("\n");
            }
        }
        MPI_Barrier(MPI_COMM_WORLD);
    }

    free(arrayloc);

    MPI_Finalize();

    return 0;
}

Compile with mpicc main.c -o main -Wall and run with mpirun -np 4 main.

Regarding "c - Successive calls to MPI_Scatter with the same communicator", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/34961503/
