MPI: how to distinguish send from recv in MPI_Wait

Suppose I use PMPI to write a wrapper for MPI_Wait, which waits for an MPI send or receive to complete.

/* ================== C Wrappers for MPI_Wait ================== */
_EXTERN_C_ int PMPI_Wait(MPI_Request *request, MPI_Status *status);
_EXTERN_C_ int MPI_Wait(MPI_Request *request, MPI_Status *status) {
    int _wrap_py_return_val = 0;

    _wrap_py_return_val = PMPI_Wait(request, status);
    return _wrap_py_return_val;
}

The wrapper is generated by this tool.

What I want to do is:

/* ================== C Wrappers for MPI_Wait ================== */
_EXTERN_C_ int PMPI_Wait(MPI_Request *request, MPI_Status *status);
_EXTERN_C_ int MPI_Wait(MPI_Request *request, MPI_Status *status) {
    int _wrap_py_return_val = 0;

    if (/* is a send request */)
        printf("send\n");
    else /* is a recv request */
        printf("recv\n");

    _wrap_py_return_val = PMPI_Wait(request, status);
    return _wrap_py_return_val;
}

How can I distinguish send from recv in Open MPI? Assume I am using Open MPI 3.0.0.

Best Answer

I think that because MPI_Request is opaque (in some implementations it is just an int), your only option is to track the MPI_Request objects you create yourself.

Here is a proposal (it is C++-oriented, because that is how I like to do it):

#include <mpi.h>
#include <iostream>
#include <map>
#include <cstring> // for memcpy

// Key type that gives the opaque MPI_Request a total ordering,
// so it can be used as a std::map key.
struct RequestConverter
{
    char data[sizeof(MPI_Request)];
    RequestConverter(MPI_Request *mpi_request)
    {
        memcpy(data, mpi_request, sizeof(MPI_Request));
    }
    RequestConverter()
    { }
    RequestConverter(const RequestConverter &req)
    {
        memcpy(data, req.data, sizeof(MPI_Request));
    }
    RequestConverter &operator=(const RequestConverter &req)
    {
        memcpy(data, req.data, sizeof(MPI_Request));
        return *this;
    }
    bool operator<(const RequestConverter &request) const
    {
        // Lexicographic byte-wise comparison of the opaque handle
        for (size_t i = 0; i < sizeof(MPI_Request); i++)
        {
            if (data[i] != request.data[i])
            {
                return data[i] < request.data[i];
            }
        }
        return false;
    }
};

// Stores every MPI_Request created by the wrapped calls
std::map<RequestConverter, std::string> request_holder;

extern "C"
{

int MPI_Isend(void *buf, int count, MPI_Datatype datatype, int dest, int tag,
              MPI_Comm comm, MPI_Request *request)
{
    int ier = PMPI_Isend(buf, count, datatype, dest, tag, comm, request);
    request_holder[RequestConverter(request)] = "sending";
    return ier;
}

int MPI_Irecv(void *buf, int count, MPI_Datatype datatype, int source, int tag,
              MPI_Comm comm, MPI_Request *request)
{
    int ier = PMPI_Irecv(buf, count, datatype, source, tag, comm, request);
    request_holder[RequestConverter(request)] = "receiving";
    return ier;
}

int MPI_Wait(MPI_Request *request, MPI_Status *status)
{
    int myid;
    MPI_Comm_rank(MPI_COMM_WORLD, &myid);
    std::cout << "waiting(" << myid << ")-> "
              << request_holder[RequestConverter(request)] << std::endl;
    request_holder.erase(RequestConverter(request));

    return PMPI_Wait(request, status);
}

} // extern "C"

RequestConverter is just a way to impose an ordering on the opaque handle so it can be used as a std::map key.

MPI_Isend stores the request in the global map, and so does MPI_Irecv; MPI_Wait then looks the request up and erases it from the std::map.

A simple test gives:

int main(int argc, char **argv)
{
    int myid, numprocs;
    MPI_Init(&argc, &argv);
    MPI_Comm_size(MPI_COMM_WORLD, &numprocs);
    MPI_Comm_rank(MPI_COMM_WORLD, &myid);

    int i = 123456789;
    MPI_Request request;
    MPI_Status status;
    if (myid == 0)
    {
        MPI_Isend(&i, 1, MPI_INT, 1, 44444, MPI_COMM_WORLD, &request);
        MPI_Wait(&request, &status);
        std::cout << myid << ' ' << i << std::endl;
    }
    else if (myid == 1)
    {
        MPI_Irecv(&i, 1, MPI_INT, 0, 44444, MPI_COMM_WORLD, &request);
        MPI_Wait(&request, &status);
        std::cout << myid << ' ' << i << std::endl;
    }
    int *sb = new int[numprocs];
    for (int j = 0; j < numprocs; j++) { sb[j] = (myid + 1) * (j + 1); }
    int *rb = new int[numprocs];
    MPI_Alltoall(sb, 1, MPI_INT, rb, 1, MPI_INT, MPI_COMM_WORLD);
    delete[] sb;
    delete[] rb;
    MPI_Finalize();
}

Output:

waiting(0)-> sending
0 123456789
waiting(1)-> receiving
1 123456789

I just added the test with MPI_Alltoall to check whether only the PMPI functions get called internally, and that is indeed the case. So no magic there.

A similar question about distinguishing send from recv in MPI_Wait can be found on Stack Overflow: https://stackoverflow.com/questions/48402349/
