
c++ - MPI_Scatter segmentation fault

Reposted · Author: 行者123 · Updated: 2023-11-27 23:30:58

I'm writing a parallel sorting program to learn MPI, and I keep having problems with MPI_Scatter. Every time I try to run it, I get the following:

reading input
Scattering input
_pmii_daemon(SIGCHLD): [NID 00012] PE 0 exit signal Segmentation fault
[NID 00012] 2011-03-28 10:12:56 Apid 23655: initiated application termination

Reading through other questions didn't really answer why I'm having trouble: the arrays are contiguous, so I shouldn't have problems with non-contiguous memory access, and I'm passing the correct pointers in the correct order. Does anyone have any ideas?

The source code is below -- it's hard-coded for specific numbers because I don't want to deal with variable input and rank sizes just yet.

#include <mpi.h>

#include <iostream>
using std::cout;
using std::endl;

#include <fstream>
using std::ifstream;
using std::ofstream;

#include <algorithm>
using std::sort;

#define SIZEOF_INPUT 10000000
#define NUMTHREADS 100
#define SIZEOF_SUBARRAY SIZEOF_INPUT/NUMTHREADS

int main(int argc, char** argv){
    MPI_Init(&argc, &argv);

    int input[SIZEOF_INPUT];
    int tempbuf[SIZEOF_SUBARRAY];

    int myRank;
    MPI_Comm_rank(MPI_COMM_WORLD, &myRank);

    /*
       Read input from file
    */
    if(myRank == 0){
        cout << "reading input" << endl;
        ifstream in(argv[1]);
        for(int i = 0; i < SIZEOF_INPUT; ++i)
            in >> input[i];
        cout << "Scattering input" << endl;
    }

    // Scatter, Sort, and Gather again
    MPI_Scatter(input,SIZEOF_INPUT,MPI_INT,tempbuf,SIZEOF_SUBARRAY,MPI_INT,0,MPI_COMM_WORLD);
    cout << "Rank " << myRank << "Sorting" << endl;
    sort(tempbuf,tempbuf+SIZEOF_SUBARRAY);
    MPI_Gather(tempbuf,SIZEOF_SUBARRAY,MPI_INT,input,SIZEOF_INPUT,MPI_INT,0,MPI_COMM_WORLD);

    if(myRank == 0){
        cout << "Sorting final output" << endl;
        // I'm doing a multi-queue merge here using tricky pointer games

        // list of iterators representing things in the queue
        int* iterators[NUMTHREADS];
        // the ends of those iterators
        int* ends[NUMTHREADS];

        // set up iterators and ends
        for(int i = 0; i < NUMTHREADS; ++i){
            iterators[i] = input + (i*SIZEOF_SUBARRAY);
            ends[i] = iterators[i] + SIZEOF_SUBARRAY;
        }

        ofstream out(argv[2]);
        int ULTRA_MAX = SIZEOF_INPUT + 1;
        int* ULTRA_MAX_POINTER = &ULTRA_MAX;
        while(true){
            int** curr_min = &ULTRA_MAX_POINTER;
            for(int i = 0; i < NUMTHREADS; ++i)
                if(iterators[i] < ends[i] && *iterators[i] < **curr_min)
                    curr_min = &iterators[i];

            if(curr_min == &ULTRA_MAX_POINTER) break;

            out << **curr_min << endl;
            ++(*curr_min);
        }
    }

    MPI_Finalize();
}

Any help would be greatly appreciated. Regards, Zach

Best Answer

Ha! It took me a while to see this one.

The trick is that in MPI_Scatter, sendcount is the amount sent to each process, not the total. Same with the gather: it's the amount received from each process. That is, it's like MPI_Scatterv with counts; the counts are per-process, but in this case they're assumed to all be the same.

So this

MPI_Scatter(input,SIZEOF_SUBARRAY,MPI_INT,tempbuf,SIZEOF_SUBARRAY,MPI_INT,0,MPI_COMM_WORLD);
cout << "Rank " << myRank << "Sorting" << endl;
MPI_Gather(tempbuf,SIZEOF_SUBARRAY,MPI_INT,input,SIZEOF_SUBARRAY,MPI_INT,0,MPI_COMM_WORLD);

works for me.

Also, be careful about allocating large arrays on the stack; I know this is just an example problem, but for me that caused an immediate crash. Doing it dynamically,

int *input = new int[SIZEOF_INPUT];
int *tempbuf = new int[SIZEOF_SUBARRAY];
//....
delete [] input;
delete [] tempbuf;

fixed the problem.

Regarding "c++ - MPI_Scatter segmentation fault", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/5462046/
