
c++ - Sending and receiving arrays in MPI


I am new to MPI and I am writing a simple MPI program to compute the product of a matrix and a vector, i.e. A*b = c. However, my code does not work. The source code is below.

If I replace the declarations of A, b, c, and buffer with

double A[16], b[4], c[4], buffer[8];

and comment out the lines related to allocation and deallocation, my code works and the result is correct. So I suppose the problem is related to the pointers, but I have no idea how to fix it.

One more thing: in my code the buffer only needs 4 elements, but the buffer size must be larger than 8, otherwise it does not work.

#include<mpi.h>
#include<iostream>
#include<stdlib.h>

using namespace std;

int nx = 4, ny = 4, nxny;
int ix, iy;
double *A = nullptr, *b = nullptr, *c = nullptr, *buffer = nullptr;
double ans;

// info MPI
int myGlobalID, root = 0, numProc;
int numSent;
MPI_Status status;

// functions
void get_ixiy(int);

int main(){

    MPI_Init(NULL, NULL);
    MPI_Comm_size(MPI_COMM_WORLD, &numProc);
    MPI_Comm_rank(MPI_COMM_WORLD, &myGlobalID);

    nxny = nx * ny;

    A = new double(nxny);
    b = new double(ny);
    c = new double(nx);
    buffer = new double(ny);

    if(myGlobalID == root){
        // init A, b
        for(int k = 0; k < nxny; ++k){
            get_ixiy(k);
            b[iy] = 1;
            A[k] = k;
        }
        numSent = 0;

        // send b to each worker processor
        MPI_Bcast(&b, ny, MPI_DOUBLE, root, MPI_COMM_WORLD);

        // send a row of A to each worker processor, tag with row number
        for(ix = 0; ix < min(numProc - 1, nx); ++ix){
            for(iy = 0; iy < ny; ++iy){
                buffer[iy] = A[iy + ix * ny];
            }
            MPI_Send(&buffer, ny, MPI_DOUBLE, ix+1, ix+1, MPI_COMM_WORLD);
            numSent += 1;
        }

        for(ix = 0; ix < nx; ++ix){
            MPI_Recv(&ans, 1, MPI_DOUBLE, MPI_ANY_SOURCE, MPI_ANY_TAG, MPI_COMM_WORLD, &status);
            int sender = status.MPI_SOURCE;
            int ansType = status.MPI_TAG;
            c[ansType] = ans;

            // send another row to worker process
            if(numSent < nx){
                for(iy = 0; iy < ny; ++iy){
                    buffer[iy] = A[iy + numSent * ny];
                }
                MPI_Send(&buffer, ny, MPI_DOUBLE, sender, numSent+1,
                         MPI_COMM_WORLD);
                numSent += 1;
            }
            else
                MPI_Send(MPI_BOTTOM, 0, MPI_DOUBLE, sender, 0, MPI_COMM_WORLD);
        }

        for(ix = 0; ix < nx; ++ix){
            std::cout << c[ix] << " ";
        }
        std::cout << std::endl;

        delete [] A;
        delete [] b;
        delete [] c;
        delete [] buffer;
    }
    else{
        MPI_Bcast(&b, ny, MPI_DOUBLE, root, MPI_COMM_WORLD);
        if(myGlobalID <= nx){
            while(1){
                MPI_Recv(&buffer, ny, MPI_DOUBLE, root, MPI_ANY_TAG, MPI_COMM_WORLD, &status);
                if(status.MPI_TAG == 0) break;
                int row = status.MPI_TAG - 1;
                ans = 0.0;

                for(iy = 0; iy < ny; ++iy) ans += buffer[iy] * b[iy];

                MPI_Send(&ans, 1, MPI_DOUBLE, root, row, MPI_COMM_WORLD);
            }
        }
    }

    MPI_Finalize();
    return 0;
} // main

void get_ixiy(int k){
    ix = k / ny;
    iy = k % ny;
}

The error message is as follows.

=   BAD TERMINATION OF ONE OF YOUR APPLICATION PROCESSES
= PID 7455 RUNNING AT ***
= EXIT CODE: 11
= CLEANING UP REMAINING PROCESSES
= YOU CAN IGNORE THE BELOW CLEANUP MESSAGES

YOUR APPLICATION TERMINATED WITH THE EXIT STRING: Segmentation fault:
11 (signal 11)
This typically refers to a problem with your application.
Please see the FAQ page for debugging suggestions

Best Answer

There are several problems in your code that you have to fix first.

First, you are accessing elements of b[] that do not exist, in this for loop:

for(int k = 0; k < nxny; ++k){
    get_ixiy(k);
    b[k] = 1; // WARNING: this is an error
    A[k] = k;
}

Second, you are deleting memory that was allocated by every process only in the root process. This causes a memory leak:

if(myGlobalID == root){
    // ...
    delete [] A;
    delete [] b;
    delete [] c;
    delete [] buffer;
}

You have to delete the memory allocated by all processes.

Third, you have a useless function void get_ixiy(int); that changes the global variables ix, iy. It is useless because after calling this function you never use ix, iy before changing them manually again. See here:

for(ix = 0; ix < min(numProc - 1, nx); ++ix){
    for(iy = 0; iy < ny; ++iy){
        // ...
    }
}

Fourth, you are using MPI_Send() and MPI_Recv() in a completely wrong way. You are lucky that you did not run into more errors.

Regarding c++ - Sending and receiving arrays in MPI, a similar question was found on Stack Overflow: https://stackoverflow.com/questions/50641298/
