
c - MPI segmentation fault in MPI_Isend()


I'm new to MPI programming! I tried to measure the point-to-point communication bandwidth between two processors as an exercise. But now I get a segmentation fault! I don't understand why this happens. I also tried valgrind on Ubuntu, but it didn't get me anywhere. So maybe someone can help me :D

Thanks for the fast response, but this doesn't change the problem :( I just updated the error!

Here is the source code:

#include "mpi.h"
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char *argv[]){

    int myrank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &myrank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    int *arraySend = (int *)malloc(25000*sizeof(int));
    int *arrayRecv = (int *)malloc(25000*sizeof(int));
    double startTime = 0.0, endTime = 0.0;
    MPI_Status status, statusSend, statusRecv;
    MPI_Request requestSend, requestRecv;

    if(size != 2){
        if(myrank == 0){
            printf("only two processors!\n");
            MPI_Finalize();
            return 0;
        }
    }

    if(myrank == 0){
        startTime = MPI_Wtime();
        MPI_Send(&arraySend, 25000, MPI_INT, 1, 0, MPI_COMM_WORLD);
    }else{
        MPI_Recv(&arrayRecv, 25000, MPI_INT, 0, 0, MPI_COMM_WORLD, &status);
    }

    if(myrank == 0){
        endTime = MPI_Wtime();
        printf("100k Bytes blocking: %f Mb/s\n", 0.1/(endTime-startTime));
        startTime = MPI_Wtime();
        MPI_Isend(&arraySend, 25000, MPI_INT, 1, 0, MPI_COMM_WORLD, &requestSend);
        MPI_Wait(&requestSend, &statusSend);
    }else{
        MPI_Irecv(&arrayRecv, 25000, MPI_INT, 0, 0, MPI_COMM_WORLD, &requestRecv);
        MPI_Wait(&requestRecv, &statusRecv);
    }

    if(myrank == 0){
        endTime = MPI_Wtime();
        printf("100k Bytes non-blocking: %f Mb/s\n", 0.1/(endTime-startTime));
    }
    free(arraySend);
    free(arrayRecv);
    MPI_Finalize();
    return 0;
}

Here is the updated error:

$ mpirun -np 2 nr2
[P90:05046] *** Process received signal ***
[P90:05046] Signal: Segmentation fault (11)
[P90:05046] Signal code: Address not mapped (1)
[P90:05046] Failing at address: 0x7fff54fd8000
[P90:05046] [ 0] /lib/x86_64-linux-gnu/libpthread.so.0(+0x10060) [0x7f8474777060]
[P90:05046] [ 1] /lib/x86_64-linux-gnu/libc.so.6(+0x131b99) [0x7f84744f7b99]
[P90:05046] [ 2] /usr/lib/libmpi.so.0(ompi_convertor_pack+0x14d) [0x7f84749c75dd]
[P90:05046] [ 3] /usr/lib/openmpi/lib/openmpi/mca_btl_sm.so(+0x1de8) [0x7f846fe14de8]
[P90:05046] [ 4] /usr/lib/openmpi/lib/openmpi/mca_pml_ob1.so(+0xd97e) [0x7f8470c6c97e]
[P90:05046] [ 5] /usr/lib/openmpi/lib/openmpi/mca_pml_ob1.so(+0x8900) [0x7f8470c67900]
[P90:05046] [ 6] /usr/lib/openmpi/lib/openmpi/mca_btl_sm.so(+0x4188) [0x7f846fe17188]
[P90:05046] [ 7] /usr/lib/libopen-pal.so.0(opal_progress+0x5b) [0x7f8473f330db]
[P90:05046] [ 8] /usr/lib/openmpi/lib/openmpi/mca_pml_ob1.so(+0x6fd5) [0x7f8470c65fd5]
[P90:05046] [ 9] /usr/lib/libmpi.so.0(PMPI_Send+0x195) [0x7f84749e1805]
[P90:05046] [10] nr2(main+0xe1) [0x400c55]
[P90:05046] [11] /lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0xed) [0x7f84743e730d]
[P90:05046] [12] nr2() [0x400ab9]
[P90:05046] *** End of error message ***
--------------------------------------------------------------------------
mpirun noticed that process rank 0 with PID 5046 on node P90 exited on signal 11
(Segmentation fault).

Best answer

The size of the array you are passing is wrong.

sizeof(arraySend) should be simply 25000, because MPI derives the byte size automatically from the datatype you specify (here MPI_INT); the count argument is a number of elements, not bytes. You normally only need sizeof() in your code when you are sending raw bytes.
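As a minimal illustration of the count semantics (buffer here is a placeholder for any int buffer of at least 25000 elements):

/* the count 25000 is a number of elements; MPI multiplies it by the
   size of MPI_INT internally, so no sizeof() is needed */
MPI_Send(buffer, 25000, MPI_INT, 1, 0, MPI_COMM_WORLD);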

Try allocating the memory on the stack instead of the heap, e.g. instead of:

 int *arraySend = (int *)malloc(25000*sizeof(int));

use:

int arraySend[25000];

Then use arraySend instead of &arraySend in your MPI calls. With the original heap allocation, &arraySend is the address of the pointer variable itself rather than of the buffer, so MPI reads 100,000 bytes starting from the wrong location, which is exactly what triggers the segfault.
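Alternatively, the original heap allocation also works once the buffer pointer is passed correctly. A minimal sketch of the corrected calls, with all other arguments unchanged from the question:

/* arraySend is already an int*, so pass the pointer directly
   instead of &arraySend (the address of the pointer variable) */
MPI_Send(arraySend, 25000, MPI_INT, 1, 0, MPI_COMM_WORLD);
MPI_Recv(arrayRecv, 25000, MPI_INT, 0, 0, MPI_COMM_WORLD, &status);
MPI_Isend(arraySend, 25000, MPI_INT, 1, 0, MPI_COMM_WORLD, &requestSend);
MPI_Irecv(arrayRecv, 25000, MPI_INT, 0, 0, MPI_COMM_WORLD, &requestRecv);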

If you can use C++, you could also use the nice Boost.MPI headers, where the size is computed automatically from the data you pass.

Regarding "c - MPI segmentation fault in MPI_Isend()", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/11090426/
