
c - MPI collective operations from one communicator to another


I have an application that is parallelized with MPI and split into a number of different tasks. Each processor is assigned only one task, and the group of processors assigned the same task gets its own communicator. The tasks need to synchronize periodically. Currently the synchronization is done via MPI_COMM_WORLD, but that has the drawback that no collective operations can be used, since it is not guaranteed that the other tasks will ever reach that block of code.

To give a more concrete example:

task1: equation1_solver, N nodes, communicator: mpi_comm_solver1
task2: equation2_solver, M nodes, communicator: mpi_comm_solver2
task3: file IO, 1 node, communicator: mpi_comm_io

I want to do an MPI_SUM over an array on task 1 and have the result end up on task 3. Is there an efficient way to do this? (Apologies if this is a stupid question; I don't have much experience with creating and using custom MPI communicators.)

Best Answer

Charles is exactly right: inter-communicators allow you to talk between communicators (or, to distinguish "normal" communicators in this context, "intra-communicators", which doesn't strike me as much of an improvement in naming).

I've always found the use of these inter-communicators a little confusing for those new to them. Not the basic idea, which makes sense, but the mechanics of using (say) MPI_Reduce with one of them. The group of tasks doing the reduction specifies the root rank in the remote communicator, so far so good; but within the remote communicator, everyone who is not the root specifies MPI_PROC_NULL as the root, while the actual root specifies MPI_ROOT. The things one does for backwards compatibility, hey?
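
In outline, the call pattern on each side of the inter-communicator looks like the sketch below (the helper function and its arguments are hypothetical, just to isolate the pattern; the full example afterwards builds everything from scratch):

#include <mpi.h>

/* Hypothetical helper: sum "value" across the contributing (solver) group
 * and deliver the result to the single task of the IO group, over an
 * already-created inter-communicator. */
void sum_to_io(int value, int *result, int am_io_task, MPI_Comm intercomm)
{
    if (!am_io_task) {
        /* Contributing group: "root" is the receiver's rank within the
         * REMOTE (IO) group -- 0 here, since that group has only one task.
         * The receive buffer is not significant on this side. */
        int ignored = 0;
        MPI_Reduce(&value, &ignored, 1, MPI_INT, MPI_SUM, 0, intercomm);
    } else {
        /* Receiving group: the actual receiver passes MPI_ROOT as the root;
         * any other task in this group would pass MPI_PROC_NULL instead. */
        MPI_Reduce(&value, result, 1, MPI_INT, MPI_SUM, MPI_ROOT, intercomm);
    }
}

The full example below puts this together from scratch: splitting MPI_COMM_WORLD into the three groups, creating the inter-communicator, and then doing both an ordinary intra-communicator reduction and the inter-communicator one.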

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int commnum = 0;          /* which of the 3 comms I belong to */
    MPI_Comm mycomm;          /* communicator I belong to */
    MPI_Comm intercomm;       /* inter-communicator */
    int cw_rank, cw_size;     /* rank, size in MPI_COMM_WORLD */
    int rank;                 /* rank in local communicator */

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &cw_rank);
    MPI_Comm_size(MPI_COMM_WORLD, &cw_size);

    if (cw_rank == cw_size-1)        /* last task is IO task */
        commnum = 2;
    else {
        if (cw_rank < (cw_size-1)/2)
            commnum = 0;
        else
            commnum = 1;
    }

    printf("Rank %d in comm %d\n", cw_rank, commnum);

    /* create the local communicator, mycomm, and find my rank in it */
    MPI_Comm_split(MPI_COMM_WORLD, commnum, cw_rank, &mycomm);
    MPI_Comm_rank(mycomm, &rank);

    const int lldr_tag = 1;
    const int intercomm_tag = 2;
    if (commnum == 0) {
        /* comm 0 needs to communicate with comm 2;
         * create an inter-communicator: */

        /* rank 0 in our new communicator will be the "local leader"
         * of this communicator for the purpose of the inter-communicator */
        int local_leader = 0;

        /* Now, since we're not part of the other communicator (and vice
         * versa) we have to refer to the "remote leader" in terms of its
         * rank in COMM_WORLD.  For us, that's easy; the remote leader
         * in the IO comm is defined to be cw_size-1, because that's the
         * only task in that comm.  But for them, it's harder, so the
         * local leader sends that task its rank in COMM_WORLD. */
        if (rank == 0)
            MPI_Send(&cw_rank, 1, MPI_INT, cw_size-1, lldr_tag, MPI_COMM_WORLD);

        /* now create the inter-communicator */
        MPI_Intercomm_create(mycomm, local_leader,
                             MPI_COMM_WORLD, cw_size-1,
                             intercomm_tag, &intercomm);
    }
    else if (commnum == 2) {
        /* there's only one task in this comm */
        int local_leader = 0;
        int rmt_ldr;
        MPI_Status s;

        /* learn the COMM_WORLD rank of comm 0's local leader */
        MPI_Recv(&rmt_ldr, 1, MPI_INT, MPI_ANY_SOURCE, lldr_tag, MPI_COMM_WORLD, &s);
        MPI_Intercomm_create(mycomm, local_leader,
                             MPI_COMM_WORLD, rmt_ldr,
                             intercomm_tag, &intercomm);
    }

    /* now let's play with our communicators and make sure they work */

    if (commnum == 0) {
        int max_of_ranks = 0;

        /* try it internally: */
        MPI_Reduce(&rank, &max_of_ranks, 1, MPI_INT, MPI_MAX, 0, mycomm);
        if (rank == 0) {
            printf("Within comm 0: maximum of ranks is %d\n", max_of_ranks);
            printf("Within comm 0: sum of ranks should be %d\n",
                   max_of_ranks*(max_of_ranks+1)/2);
        }

        /* now try summing it to the other comm;
         * the "root" parameter here is the root's rank in the remote group,
         * and the receive buffer is not significant on this side */
        MPI_Reduce(&rank, &max_of_ranks, 1, MPI_INT, MPI_SUM, 0, intercomm);
    }

    if (commnum == 2) {
        int sum_of_ranks = -999;
        int rootproc;

        /* get the reduction data from the other comm */

        if (rank == 0)     /* am I the root of this reduce? */
            rootproc = MPI_ROOT;
        else
            rootproc = MPI_PROC_NULL;

        MPI_Reduce(&rank, &sum_of_ranks, 1, MPI_INT, MPI_SUM, rootproc, intercomm);

        if (rank == 0)
            printf("From comm 2: sum of ranks is %d\n", sum_of_ranks);
    }

    if (commnum == 0 || commnum == 2)
        MPI_Comm_free(&intercomm);

    MPI_Finalize();
    return 0;
}
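
To try it out, something like the following should work (assuming a standard MPI toolchain that provides the mpicc compiler wrapper and the mpirun launcher; the source file name intercomm_reduce.c is arbitrary):

mpicc intercomm_reduce.c -o intercomm_reduce
mpirun -np 6 ./intercomm_reduce

With 6 ranks, ranks 0-1 end up in comm 0, ranks 2-4 in comm 1, and rank 5 becomes the lone IO task, so the value reported "From comm 2" should match the "sum of ranks should be" line printed within comm 0.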

On the topic of "c - MPI collective operations from one communicator to another", there is a similar question on Stack Overflow: https://stackoverflow.com/questions/10144479/
