
c - Accessing data from MPI_Irecv()


I am wondering why I cannot access the data received via MPI_Irecv. I have an array of 100 elements that I want to divide among 8 processes. Since 100/8 produces chunks of unequal length, I compute the chunk boundaries manually and send them to each process individually. Each process then performs an operation on its chunk of the array, say reshuffling it, and sends back its reshuffled part, which is then merged into the original array. The program works fine until I have to gather the results from the slave processes. Specifically, I want to access the array that a slave process has just returned:

for (i = 1; i < numProcs; i++) {
    MPI_Irecv(&msgsA[i], 1, MPI_INT, MPI_ANY_SOURCE, tag, MPI_COMM_WORLD, &recv_req[i]);
    MPI_Irecv(&msgsB[i], 1, MPI_INT, MPI_ANY_SOURCE, tag+1, MPI_COMM_WORLD, &recv_req[i]);
    MPI_Irecv(chunk, n, MPI_DOUBLE, MPI_ANY_SOURCE, tag+2, MPI_COMM_WORLD, &recv_req[i]);

    // how do I access chunk here, take the part from msgsA[i] to msgsB[i] and assign it to a part of a different array?
}
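For reference, a minimal sketch of one way to do this: complete the non-blocking requests before reading the buffers. Here `result` is a hypothetical destination array of length n that is not part of the original code, and the requests are posted into distinct handles so none of them is overwritten:

for (i = 1; i < numProcs; i++) {
    MPI_Request req[3];
    MPI_Irecv(&msgsA[i], 1, MPI_INT, MPI_ANY_SOURCE, tag, MPI_COMM_WORLD, &req[0]);
    MPI_Irecv(&msgsB[i], 1, MPI_INT, MPI_ANY_SOURCE, tag+1, MPI_COMM_WORLD, &req[1]);
    MPI_Irecv(chunk, n, MPI_DOUBLE, MPI_ANY_SOURCE, tag+2, MPI_COMM_WORLD, &req[2]);
    MPI_Waitall(3, req, MPI_STATUSES_IGNORE);   /* the buffers are valid only after this */
    /* note: with MPI_ANY_SOURCE the three messages may come from different ranks;
       receiving from rank i instead would keep them paired */
    for (j = msgsA[i]; j <= msgsB[i]; j++)
        result[j] = chunk[j];                   /* copy this slave's slice */
}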

The whole code:

#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>   /* for malloc */
#define MAXPROCS 8    /* max number of processes */

int main(int argc, char *argv[])
{
    int i, j, n = 100, numProcs, myid, tag = 55, msgsA[MAXPROCS], msgsB[MAXPROCS], myStart, myEnd;
    double *chunk = malloc(n * sizeof(double));
    double *K1 = malloc(n * sizeof(double));
    MPI_Request send_req[MAXPROCS], recv_req[MAXPROCS];   /* declarations missing from the original listing */
    MPI_Status status[MAXPROCS];

    MPI_Init(&argc, &argv);
    MPI_Comm_size(MPI_COMM_WORLD, &numProcs);
    MPI_Comm_rank(MPI_COMM_WORLD, &myid);

    if (myid == 0) {
        /* split the array into pieces and send the starting and finishing indices to the slave processes */
        for (i = 1; i < numProcs; i++) {
            myStart = (n / numProcs) * i + ((n % numProcs) < i ? (n % numProcs) : i);
            myEnd = myStart + (n / numProcs) + ((n % numProcs) > i) - 1;
            if (myEnd > n) myEnd = n;
            MPI_Isend(&myStart, 1, MPI_INT, i, tag, MPI_COMM_WORLD, &send_req[i]);
            MPI_Isend(&myEnd, 1, MPI_INT, i, tag+1, MPI_COMM_WORLD, &send_req[i]);
        }
        /* starting and finishing values for the master process */
        myStart = (n / numProcs) * myid + ((n % numProcs) < myid ? (n % numProcs) : myid);
        myEnd = myStart + (n / numProcs) + ((n % numProcs) > myid) - 1;

        for (i = 1; i < numProcs; i++) {
            MPI_Irecv(&msgsA[i], 1, MPI_INT, MPI_ANY_SOURCE, tag, MPI_COMM_WORLD, &recv_req[i]);
            MPI_Irecv(&msgsB[i], 1, MPI_INT, MPI_ANY_SOURCE, tag+1, MPI_COMM_WORLD, &recv_req[i]);
            MPI_Irecv(chunk, n, MPI_DOUBLE, MPI_ANY_SOURCE, tag+2, MPI_COMM_WORLD, &recv_req[i]);

            // --- access the chunk array here, take the part from msgsA[i] to msgsB[i] and assign it to a part of a different array
        }
        // calculate a function on fragments of K1 (returns void)

        /* wait until all chunks have been collected */
        MPI_Waitall(numProcs-1, &recv_req[1], &status[1]);
    }
    else {
        // calculate a function on fragments of K1 (returns void)

        MPI_Isend(K1, n, MPI_DOUBLE, 0, tag+2, MPI_COMM_WORLD, &send_req[0]);
        MPI_Wait(&send_req[0], &status[0]);
    }
    MPI_Finalize();
    return 0;
}

Best answer

I think I found the solution. It was MPI_Irecv() that caused the problem: with a non-blocking receive I cannot access the chunk variable before the request has completed. So the solution seems to be simply:

MPI_Status status[MAXPROCS];

for (i = 1; i < numProcs; i++) {
    MPI_Irecv(&msgsA[i], 1, MPI_INT, MPI_ANY_SOURCE, tag, MPI_COMM_WORLD, &recv_req[i]);
    MPI_Irecv(&msgsB[i], 1, MPI_INT, MPI_ANY_SOURCE, tag+1, MPI_COMM_WORLD, &recv_req[i]);
    MPI_Recv(chunk, n, MPI_DOUBLE, MPI_ANY_SOURCE, tag+2, MPI_COMM_WORLD, &status[i]);

    // do whatever is needed on the chunk[j] variables
}
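The blocking MPI_Recv works here because the call does not return until chunk has actually been filled. If a non-blocking receive were still preferred, the equivalent fix (a sketch only, assuming recv_req[i] holds the chunk request) is to complete the request with MPI_Wait before reading the buffer:

MPI_Irecv(chunk, n, MPI_DOUBLE, MPI_ANY_SOURCE, tag+2, MPI_COMM_WORLD, &recv_req[i]);
MPI_Wait(&recv_req[i], &status[i]);   /* the receive completes here; chunk is now safe to read */
/* ... use chunk[msgsA[i]] .. chunk[msgsB[i]] ... */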

About c - Accessing data from MPI_Irecv(): the corresponding question on Stack Overflow is https://stackoverflow.com/questions/29592122/
