
c - Unexpected deadlock in MPI


I hope someone can help me. My code:

void process(int myid, int numprocs)
{
    int i, j, anzahl, rest;
    MPI_Status stat;

    meta = (int *)malloc(3 * sizeof(int));
    if (myid == 0)
    {
        meta[0] = ASpalten;
        meta[1] = AZeilen;
        meta[2] = BSpalten;

        for (i = 0; i < numprocs; i++) //masternode distributes matrix A to every single core
        {
            MPI_Send(&meta[0], 3, MPI_INT, i, TAG, MPI_COMM_WORLD);
            printf("%d: debug04\n", myid);
            MPI_Send(&MA[0], ASpalten*AZeilen, MPI_DOUBLE, i, TAG, MPI_COMM_WORLD);
            printf("%d: debug05\n", myid);
            MPI_Send(&MB[0], ASpalten*BSpalten, MPI_DOUBLE, i, TAG, MPI_COMM_WORLD);
            printf("%d: debug06\n", myid);
        }
    }
    else
    {
        MPI_Recv(meta, 3, MPI_INT, 0, TAG, MPI_COMM_WORLD, &stat);
        printf("%d: debug01\n", myid);
        ASpalten = meta[0];
        AZeilen = meta[1];
        BSpalten = meta[2];
        printf("%d: debug02\n", myid);
        MA = (double *)malloc(ASpalten*AZeilen*sizeof(double));
        MB = (double *)malloc(ASpalten*BSpalten*sizeof(double));
        MR = (double *)malloc(AZeilen*BSpalten*sizeof(double));
        MPI_Recv(MA, ASpalten*AZeilen, MPI_DOUBLE, 0, TAG, MPI_COMM_WORLD, &stat);
        MPI_Recv(MB, ASpalten*BSpalten, MPI_DOUBLE, 0, TAG, MPI_COMM_WORLD, &stat);
        printf("%d: debug03\n", myid);
        // printf("%d: %f\n", myid, *(MA + _index(1, 1, ASpalten))); // works
    }

The data types:

int ASpalten;
int AZeilen;
int BSpalten;
int *meta; //used to transfer meta data in 1 send
double *MA; //Matrix A
double *MB; //Matrix B

The program is supposed to multiply two matrices using MPI. My test matrices show that the code basically works, and I can run it with matrices up to roughly 130 × 90 (maybe a bit more, maybe less), but beyond that size I apparently run into a deadlock: the console prints "debug04" and nothing else. I would be very grateful if anyone can tell me what is wrong with my program. I have already tried MPI_INTEGER instead of MPI_INT, which made no difference. Any help would be appreciated. This is the console output with very small matrices (PS: I have already tried running my test cases in different orders and modifying the existing ones):

Testcase1 MPI:
0: debug04
0: debug05
0: debug06
0: debug04
1: debug01
1: debug02
0: debug05
1: debug03
1: debugx1
0: debug06
0: debug04......

Best answer

It looks like process 0 also sends messages to process 0, but process 0 never posts a matching receive. Such a send can only return if the MPI library buffers the message internally (the eager protocol), which is why small matrices appear to work; once the matrices exceed the eager limit, MPI_Send blocks waiting for a receive that never comes, and the program deadlocks.

Changing the loop to

  for (i = 1; i < numprocs; i++)

so that rank 0 no longer sends to itself removes the deadlock.
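
To illustrate the point, here is a minimal sketch (my own addition, not part of the original answer) of a blocking self-send: it typically returns for small messages, which the library can buffer eagerly, and hangs for large ones, mirroring the behaviour of the question's code.

#include <stdio.h>
#include <stdlib.h>
#include "mpi.h"

int main(int argc, char *argv[])
{
    int rank, n;
    double *buf;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    n = (argc > 1) ? atoi(argv[1]) : 200000; /* message size in doubles */
    buf = (double *)malloc(n * sizeof(double));

    if (rank == 0)
    {
        /* No matching MPI_Recv is ever posted on rank 0, so this call only
           returns if the library buffers the message internally (eager
           protocol); for a large n it blocks forever, exactly like the
           sends of meta, MA and MB in the question. */
        MPI_Send(buf, n, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD);
        printf("send returned: the message was small enough to be buffered\n");
    }

    free(buf);
    MPI_Finalize();
    return 0;
}

The full modified program from the answer follows: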

#include <stdio.h>
#include <time.h>
#include <stdlib.h>
#include <string.h>
#include "mpi.h"

int ASpalten;
int AZeilen;
int BSpalten;
int *meta;  //used to transfer meta data in 1 send
double *MA; //Matrix A
double *MB; //Matrix B
double *MR; //Matrix R (result)

void process(int myid, int numprocs)
{
    int i, j, anzahl, rest;
    int TAG = 0;
    MPI_Status stat;

    meta = (int *)malloc(3 * sizeof(int));
    if (myid == 0)
    {
        meta[0] = ASpalten;
        meta[1] = AZeilen;
        meta[2] = BSpalten;
        for (i = 1; i < numprocs; i++) //masternode distributes matrix A to every single core
        {
            MPI_Send(&meta[0], 3, MPI_INT, i, TAG, MPI_COMM_WORLD);
            printf("%d: debug04\n", myid);
            MPI_Send(&MA[0], ASpalten*AZeilen, MPI_DOUBLE, i, TAG, MPI_COMM_WORLD);
            printf("%d: debug05\n", myid);
            MPI_Send(&MB[0], ASpalten*BSpalten, MPI_DOUBLE, i, TAG, MPI_COMM_WORLD);
            printf("%d: debug06\n", myid);
        }
    }
    else
    {
        MPI_Recv(meta, 3, MPI_INT, 0, TAG, MPI_COMM_WORLD, &stat);
        printf("%d: debug01\n", myid);
        ASpalten = meta[0];
        AZeilen = meta[1];
        BSpalten = meta[2];
        printf("%d: debug02\n", myid);
        MA = (double *)malloc(ASpalten*AZeilen*sizeof(double));
        MB = (double *)malloc(ASpalten*BSpalten*sizeof(double));
        MR = (double *)malloc(AZeilen*BSpalten*sizeof(double));
        MPI_Recv(MA, ASpalten*AZeilen, MPI_DOUBLE, 0, TAG, MPI_COMM_WORLD, &stat);
        MPI_Recv(MB, ASpalten*BSpalten, MPI_DOUBLE, 0, TAG, MPI_COMM_WORLD, &stat);
        printf("%d: debug03\n", myid);
        // printf("%d: %f\n", myid, *(MA + _index(1, 1, ASpalten))); // works
    }
}

int main(int argc, char *argv[])
{
    int rank, size;

    MPI_Init(&argc, &argv);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    ASpalten = 130;
    AZeilen = 90;
    BSpalten = 200;
    if (rank == 0) {

    }

    MA = (double *)malloc(ASpalten*AZeilen*sizeof(double));
    MB = (double *)malloc(ASpalten*BSpalten*sizeof(double));
    MR = (double *)malloc(AZeilen*BSpalten*sizeof(double));
    process(rank, size);
    MPI_Finalize();
    return 0;
}

Bye,

Francis

Regarding "c - Unexpected deadlock in MPI", a similar question can be found on Stack Overflow: https://stackoverflow.com/questions/20640045/
