
C: MPI_Allgather produces an error


First I generate a random number on each processor. In a second step I want to send the generated number to all other processors, i.e. after calling MPI_Allgather every processor should hold a list containing all of the generated random numbers:


#include <stdlib.h>
#include <time.h>
#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv){

    int nameLen;
    char processorName[MPI_MAX_PROCESSOR_NAME];

    int myrank;   // Rank of processor
    int numprocs; // Number of processes
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &myrank);
    MPI_Comm_size(MPI_COMM_WORLD, &numprocs);
    MPI_Get_processor_name(processorName, &nameLen);
    MPI_Status status;

    time_t t;
    srand((unsigned)time(NULL) + myrank*numprocs + nameLen);

    long c = rand() % 100;

    printf("Processor %d has %li particles\n", myrank, c);

    long oldcount[numprocs];

    // Every processor gets the random number of the other processors
    MPI_Allgather(&c, 1, MPI_LONG, &oldcount, numprocs, MPI_LONG, MPI_COMM_WORLD);

    for(int i = 0; i < numprocs; i++){
        printf("Processor %d: %d entry of list is %li\n", myrank, i, oldcount[i]);
    }

    MPI_Finalize();
    return 0;
}

This code produces an error. But why? I think I am using MPI_Allgather the right way:

MPI_Allgather(
    void* send_data,
    int send_count,
    MPI_Datatype send_datatype,
    void* recv_data,
    int recv_count,
    MPI_Datatype recv_datatype,
    MPI_Comm communicator)

Best answer

The problem is the recv_count argument of MPI_Allgather. The MPI specification says it is the "number of elements received from any process". You are passing the total number of elements, so every process expects numprocs longs from each of the numprocs ranks, which does not match the single element each rank sends and overruns the numprocs-long receive buffer. Try

MPI_Allgather(&c, 1, MPI_LONG, &oldcount, 1, MPI_LONG, MPI_COMM_WORLD);
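For reference, a minimal self-contained sketch of the corrected program might look as follows (variable names are kept from the question; the simplified seed expression and the compile/run commands below are assumptions for illustration). With recv_count = 1, each process contributes one long and the receive buffer ends up holding numprocs entries in rank order:

#include <stdlib.h>
#include <time.h>
#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv){
    int myrank, numprocs;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &myrank);
    MPI_Comm_size(MPI_COMM_WORLD, &numprocs);

    // Seed differently on each rank so the random numbers differ
    srand((unsigned)time(NULL) + myrank);

    long c = rand() % 100;
    long oldcount[numprocs];

    // recv_count = 1: one long is received from each process,
    // so oldcount ends up holding numprocs entries in rank order.
    MPI_Allgather(&c, 1, MPI_LONG, oldcount, 1, MPI_LONG, MPI_COMM_WORLD);

    for(int i = 0; i < numprocs; i++){
        printf("Processor %d: entry %d of the list is %li\n", myrank, i, oldcount[i]);
    }

    MPI_Finalize();
    return 0;
}

Assuming the file is saved as allgather.c, it could be built and run with something like mpicc allgather.c -o allgather followed by mpirun -np 4 ./allgather. Passing oldcount instead of &oldcount makes no practical difference here, since both expressions evaluate to the same address; the decisive change is recv_count = 1.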

Regarding "C: MPI_Allgather produces an error", a similar question can be found on Stack Overflow: https://stackoverflow.com/questions/41745413/
