multithreading - Parallel execution is slower than sequential, even though the code is "heavy"

Reposted · Author: 行者123 · Updated: 2023-12-01 00:09:42

Many questions ask "why is my parallel loop slower than the sequential one", and the answer is "the work done inside the loop is too small; try more iterations", for example this one .

I have a loop where each iteration takes about 0.5 minutes. I thought this loop was "heavy" enough that any thread-related overhead would be negligible. It turns out the parallel execution is slower. My code is in C, and I use OpenMP for parallelization. My code is structured as follows:

int main() {
    malloc and calculate group G1 of arrays;
    #pragma omp parallel
    {
        printf("available threads: %i\n", omp_get_num_threads());
        malloc group G2 of arrays;
        #pragma omp for
        for (int i = 0; i < N; i++) {
            calculate G2 arrays' elements through numerical integration;
            do some linear algebra with G1 and G2 to assemble the system Ax=b;
            solve Ax = b;
        }
    }
    return 0;
}

Some clarifications:
  • The G1 arrays are not modified inside the loop; they are used as auxiliary variables
  • The order in which the iterations execute does not matter
  • Group G1 needs about 0.6 GiB of memory
  • Group G2 needs about 0.8 GiB of memory per thread
  • All linear algebra is done with Intel MKL (with the sequential threading layer; see its link advisor)

  • With N=6, serial execution takes 3 minutes while parallel execution (3 threads) takes 4 minutes. With N=30, serial takes 15 minutes and parallel (3 threads) takes 17 minutes.

    I have tried to come up with a hypothesis for why this happens. Perhaps it is related to the CPU cache and the size of my arrays?

    Here is some information about the computer I am using:
    Linux 4.15.0-74-generic #84-Ubuntu
    Architecture: x86_64
    CPU op-mode(s): 32-bit, 64-bit
    Byte Order: Little Endian
    CPU(s): 4
    On-line CPU(s) list: 0-3
    Thread(s) per core: 1
    Core(s) per socket: 4
    Socket(s): 1
    NUMA node(s): 1
    Vendor ID: GenuineIntel
    CPU family: 6
    Model: 58
    Model name: Intel(R) Core(TM) i7-3770 CPU @ 3.40GHz
    Stepping: 9
    CPU MHz: 3392.363
    CPU max MHz: 3400,0000
    CPU min MHz: 1600,0000
    L1d cache: 32K
    L1i cache: 32K
    L2 cache: 256K
    L3 cache: 8192K
    NUMA node0 CPU(s): 0-3

    A practical example

    The code below demonstrates what I am doing. I tried to keep it as simple and small as possible. It depends on Netlib's LAPACK and OpenBLAS (built with USE_OPENMP=1). My actual program has thousands of lines.

    Compile with:
      gcc -O2 -m64 -fopenmp test.c -lopenblas -llapack -lpthread -lm

    Run with:
      export OMP_NUM_THREADS=3
      ./a.out 30
    // test.c
    #include <complex.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <math.h>
    #include <time.h>
    #include <omp.h>

    // LAPACK routines
    extern void
    zgesv_ (int *n, int *nrhs, _Complex double *a, int *lda, int *ipiv,
            _Complex double *b, int *ldb, int *info);

    extern void
    zgemv_ (char *trans, int *m, int *n, _Complex double *alpha, _Complex double *a,
            int *lda, _Complex double *x, int *incx, _Complex double *beta,
            _Complex double *y, int *incy);

    int
    main (int argc, char **argv)
    {
        srand(300);
        if (argc != 2) {
            printf("must have 1 argument: number of iterations!\n");
            exit(1);
        }
        const int nf = atoi(argv[1]);
        printf("loop will have %i iterations\n", nf);
        clock_t begin, end;
        double time_spent;
        begin = clock();
        int ne = 2296;
        int ne2 = ne * ne;
        _Complex double* restrict pot = malloc(ne2 * sizeof(_Complex double));
        for (int i = 0; i < ne; i++) {
            for (int k = i; k < ne; k++) {
                pot[i * ne + k] = (double) rand() / RAND_MAX;
                pot[i * ne + k] *= I;
                pot[i * ne + k] += (double) rand() / RAND_MAX;
                pot[k * ne + i] = pot[i * ne + k];
            }
        }
        char trans = 'N';
        _Complex double one = 1.0;
        _Complex double zero = 0.0;
        #pragma omp parallel
        {
            int n = ne;
            int ipiv[n]; // pivot indices
            int info;
            int nrhs = 1;
            #pragma omp single
            {
                printf("available threads: %i\n", omp_get_num_threads());
            }
            _Complex double* restrict zl = malloc(ne2 * sizeof(_Complex double));
            _Complex double* restrict ie = malloc(ne2 * sizeof(_Complex double));
            _Complex double gpr;
            #pragma omp for
            for (int i = 0; i < nf; i++) {
                printf("i = %i from thread %d\n", i, omp_get_thread_num());
                for (int m = 0; m < ne; m++) {
                    for (int k = m; k < ne; k++) {
                        gpr = cexp(k - m);
                        zl[m * ne + k] = gpr * pot[m * ne + k];
                        zl[k * ne + m] = zl[m * ne + k];
                    }
                }
                ie[0] = 1.0;
                for (int m = 1; m < ne; m++) {
                    ie[m] = 0.0;
                }
                zgesv_(&n, &nrhs, zl, &n, ipiv, ie, &n, &info);
                // Check for the exact singularity
                if (info > 0) {
                    printf("The diagonal element of the triangular factor of ZL,\n");
                    printf("U(%i,%i) is zero, so that ZL is singular;\n", info, info);
                    printf("the solution could not be computed.\n");
                }
                zgemv_(&trans, &ne, &ne, &one, zl, &ne, ie, &nrhs, &zero, ie, &nrhs);
                for (int p = 0; p < ne2; p++) {
                    gpr = 0.0;
                    for (int m = 0; m < ne; m++) {
                        gpr += ie[m] * cexp(-m * 5.4) / 4.1;
                    }
                }
            }
            free(zl);
            free(ie);
        }
        free(pot);
        end = clock();
        time_spent = (double) (end - begin) / CLOCKS_PER_SEC;
        printf("Success. Elapsed time: %.2f minutes\n", time_spent / 60.0);
        return 0;
    }

    Best answer

    I had a professor who used to say:

    If the experimental results did not agree with the theoretical ones, a question you must ask is How are you measuring?



    As @Gilles pointed out in a comment on the question: use omp_get_wtime() instead of clock(). That is because clock() returns the accumulated CPU time of all threads, whereas omp_get_wtime() returns wall-clock time (see OpenMP Forums).
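    The difference between the two clocks can be seen without OpenMP at all. The minimal sketch below (not from the original post) times a one-second sleep with both clock() and clock_gettime(CLOCK_MONOTONIC): wall-clock time advances by about a second while clock() barely moves, because almost no CPU time is consumed. In the question's parallel run the mismatch goes the other way: clock() sums the CPU time of all three threads, so dividing it by CLOCKS_PER_SEC reports roughly three times the wall-clock duration.

    ```c
    #include <stdio.h>
    #include <time.h>

    int main(void) {
        struct timespec t0, t1;

        clock_t c0 = clock();                    // process CPU time
        clock_gettime(CLOCK_MONOTONIC, &t0);     // wall-clock time

        // Sleep one second: wall time passes, CPU time does not
        struct timespec req = { 1, 0 };
        nanosleep(&req, NULL);

        clock_t c1 = clock();
        clock_gettime(CLOCK_MONOTONIC, &t1);

        double cpu_s  = (double)(c1 - c0) / CLOCKS_PER_SEC;
        double wall_s = (t1.tv_sec - t0.tv_sec)
                      + (t1.tv_nsec - t0.tv_nsec) / 1e9;

        printf("cpu  = %.3f s\n", cpu_s);   // ~0.000 s
        printf("wall = %.3f s\n", wall_s);  // ~1.000 s
        return 0;
    }
    ```

    In the question's code, replacing the begin/end clock() pair with omp_get_wtime() calls (which return a double in seconds) gives the actual elapsed time for both the serial and the parallel run.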

    Regarding multithreading - parallel execution slower than sequential even though the code is "heavy", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/59866864/
