
c - Problems implementing MPI_Scatter using MPI_Send and MPI_Recv


I am trying to implement the MPI function MPI_Scatter using MPI_Send and MPI_Recv.

I want to use the function's official declaration, which takes pointers to the vectors/arrays:

MPI_Scatter(
    void* send_data,
    int send_count,
    MPI_Datatype send_datatype,
    void* recv_data,
    int recv_count,
    MPI_Datatype recv_datatype,
    int root,
    MPI_Comm communicator)

I created an example that works well with MPI's own MPI_Scatter, and it shows the correct results.

I have to implement this with two functions: one using pointers and one using static integer arrays. The second one works fine, but the first one only shows the first three elements of the created array. I think it is a problem with the memory allocated for the matrix, but I cannot see how to fix it.

Here is the code (MMPI_Scatter is the one giving me trouble):

#include <stdio.h>
#include <mpi.h>
#include <stdlib.h>
#include <math.h>
#include <unistd.h>


#define ROOT 0
#define N 3

int main(int argc, char **argv) {

    // for storing this process' rank, and the number of processes
    int rank, np;
    int *matrix;

    //MPI_Scatter
    int send_count, recv_count;
    int *recv_data;

    MPI_Status status, info;

    MPI_Init(&argc, &argv);
    MPI_Comm_size(MPI_COMM_WORLD, &np);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);


    if (rank == ROOT) {
        matrix = createMatrix(np, np);
        printArray(matrix, np * np);
    }

    send_count = np;
    recv_count = np;
    recv_data = malloc(recv_count * sizeof(int));


    //The original function provided by MPI works great!!
    MPI_Scatter(matrix, send_count, MPI_INT, recv_data, recv_count, MPI_INT, ROOT, MPI_COMM_WORLD);

    //This function only shows the first three elements of the matrix
    //MMPI_Scatter(matrix, send_count, MPI_INT, recv_data, recv_count, MPI_INT, ROOT, MPI_COMM_WORLD);

    //This function works great, but it does not use the official declaration of MPI_Scatter
    //MMPI_Scatter2(matrix, send_count, MPI_INT, recv_data, recv_count, MPI_INT, ROOT, MPI_COMM_WORLD);
    printArray(recv_data, recv_count);


    MPI_Finalize();
    return 0;
}


//http://mpitutorial.com/tutorials/mpi-scatter-gather-and-allgather/
void MMPI_Scatter(void* send_data, int send_count, MPI_Datatype send_datatype,
                  void* recv_data, int recv_count, MPI_Datatype recv_datatype,
                  int root, MPI_Comm communicator) {

    int np, rank;
    int i;
    MPI_Status status;
    MPI_Comm_size(communicator, &np);
    MPI_Comm_rank(communicator, &rank);

    printArray(send_data, np * np);

    if (rank == ROOT) {
        for (i = 0; i < np; i++) {
            MPI_Send(send_data + (i * send_count), send_count, send_datatype, i, 0, communicator);
        }
    }
    MPI_Recv(recv_data, recv_count, recv_datatype, root, 0, communicator, &status);
    printArray(send_data, np * np);

}

//Works great, but without pointer
void MMPI_Scatter2(int send_data[], int send_count, MPI_Datatype send_datatype,
                   int recv_data[], int recv_count, MPI_Datatype recv_datatype,
                   int root, MPI_Comm communicator) {

    int np, rank;
    int i;
    MPI_Status status;
    MPI_Comm_size(communicator, &np);
    MPI_Comm_rank(communicator, &rank);

    if (rank == ROOT) {
        for (i = 0; i < np; i++) {
            MPI_Send(send_data + (i * send_count), send_count, send_datatype, i, 0, communicator);
        }
    }
    MPI_Recv(recv_data, recv_count, recv_datatype, root, 0, communicator, &status);
    printArray(recv_data, np);
}


int *createMatrix(int nRows, int nCols) {

    int *matrix;

    int h, i, j;

    if ((matrix = malloc(nRows * nCols * sizeof(int))) == NULL) {
        printf("Malloc error:");
        exit(1);
    }

    //Test values
    for (h = 0; h < nRows * nCols; h++) {
        matrix[h] = h + 1;
    }

    return matrix;
}
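
The printArray helper called above is not shown in the question. For readers who want to run the listing, a minimal, hypothetical version consistent with how it is called (an int pointer plus an element count) might look like this; the original author's exact formatting is unknown:

// Hypothetical helper, not part of the original question:
// prints n integers on a single line.
void printArray(int *data, int n) {
    int i;
    for (i = 0; i < n; i++) {
        printf("%d ", data[i]);
    }
    printf("\n");
}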

Update 1:

I think it is related to the information on this page: https://www.mpi-forum.org/docs/mpi-1.1/mpi-11-html/node71.html#Node71

It contains the line:

MPI_Send(sendbuf + i*sendcount*extent(sendtype), sendcount, sendtype, i.....)

But I don't know what to do with extent(sendtype).
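
For reference, the extent (or, for contiguous built-in types, the size) of a datatype can be queried at run time. The following sketch only illustrates that API inside MMPI_Scatter, reusing the parameter names from the question; it is not part of the original post:

    int size;
    MPI_Aint lb, extent;
    MPI_Type_size(send_datatype, &size);              /* bytes of data in one element   */
    MPI_Type_get_extent(send_datatype, &lb, &extent); /* lower bound and extent, bytes  */

    /* Byte offset of rank i's chunk; cast to char* because pointer
       arithmetic on void* is a compiler extension, not standard C. */
    char *chunk = (char *)send_data + (MPI_Aint)i * send_count * extent;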

Update 2:

Now it works, but for the moment only because I know the datatype myself:

void MMPI_Scatter(void* send_data, int send_count, MPI_Datatype send_datatype,
                  void* recv_data, int recv_count, MPI_Datatype recv_datatype,
                  int root, MPI_Comm communicator) {

    int np, rank;
    int i;
    int size;

    MPI_Datatype type;
    type = MPI_INT;
    MPI_Type_size(type, &size);

    MPI_Status status;
    MPI_Comm_size(communicator, &np);
    MPI_Comm_rank(communicator, &rank);

    if (rank == ROOT) {
        for (i = 0; i < np; i++) {
            MPI_Send(send_data + ((i * send_count) * size), send_count, send_datatype, i, 0, communicator);
        }
    }

    MPI_Recv(recv_data, recv_count, recv_datatype, root, 0, communicator, &status);

}

Update 3 (solved):

void MMPI_Scatter(void* send_data, int send_count, MPI_Datatype send_datatype,
                  void* recv_data, int recv_count, MPI_Datatype recv_datatype,
                  int root, MPI_Comm communicator) {

    int np, rank;
    int i;
    int size;

    MPI_Datatype type;
    type = send_datatype;
    MPI_Type_size(type, &size);

    MPI_Status status;
    MPI_Comm_size(communicator, &np);
    MPI_Comm_rank(communicator, &rank);

    if (rank == ROOT) {
        for (i = 0; i < np; i++) {
            MPI_Send(send_data + ((i * send_count) * size), send_count, send_datatype, i, 0, communicator);
        }
    }
    MPI_Recv(recv_data, recv_count, recv_datatype, root, 0, communicator, &status);

}

Update 4:

This function works because ROOT is used, but as soon as it is called as a collective, ROOT has to be replaced by the root parameter, like this:

if (rank == root) {
}
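
As a quick check of that generalization, the hand-rolled scatter could be called with a root other than 0. This is a hypothetical test call, not code from the original post; the matrix would then have to be allocated and filled on rank 1 instead of rank 0:

    // Hypothetical call with root = 1 instead of the ROOT macro (0).
    MMPI_Scatter(matrix, send_count, MPI_INT, recv_data, recv_count, MPI_INT, 1, MPI_COMM_WORLD);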

Best Answer

Change send_data + (i * send_count) to:

send_data + (i * send_count) * size

where size is the number of bytes per element, obtained with MPI_Type_size(send_datatype, &size).
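
To see why the multiplication matters, consider the question's setup with 3 processes (send_count = 3) and MPI_INT, which is typically 4 bytes: the chunks should start at byte offsets 0, 12 and 24, but send_data + (i * send_count) on a void* (a GCC extension that counts in bytes) yields offsets 0, 3 and 6, so every rank ends up reading from the first few elements of the matrix. A small illustration of the arithmetic, using these assumed example values rather than anything from the original post:

    // Hypothetical illustration for np = 3, send_count = 3, 4-byte MPI_INT.
    int size = 4;   /* what MPI_Type_size(MPI_INT, &size) typically reports */
    int i;
    for (i = 0; i < 3; i++) {
        printf("rank %d: wrong offset = %d bytes, correct offset = %d bytes\n",
               i, i * 3, i * 3 * size);   /* 0/3/6 vs 0/12/24 */
    }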

Regarding "c - Problems implementing MPI_Scatter using MPI_Send and MPI_Recv", a similar question was found on Stack Overflow: https://stackoverflow.com/questions/52797233/
