
CUDA 9 shfl vs shfl_sync

Reposted · Author: 行者123 · Updated: 2023-12-04 17:07:12

Starting with CUDA 9, the shfl instruction is deprecated and should be replaced with shfl_sync.

However, their behavior differs. How should I replace one with the other?

Code example:

#include <stdio.h>

__global__
static void shflTest(){
    int tid = threadIdx.x;
    float value = tid + 0.1f;
    int* ivalue = reinterpret_cast<int*>(&value);

    // use the integer shfl
    int ix = __shfl(ivalue[0], 5, 32);
    int iy = __shfl_sync(ivalue[0], 5, 32);

    float x = reinterpret_cast<float*>(&ix)[0];
    float y = reinterpret_cast<float*>(&iy)[0];

    if (tid == 0) {
        printf("shfl tmp %d %d\n", ix, iy);
        printf("shfl final %f %f\n", x, y);
    }
}

int main()
{
    shflTest<<<1,32>>>();
    cudaDeviceSynchronize();
    return 0;
}

Output:
shfl tmp 1084437299 5
shfl final 5.100000 0.000000

Accepted answer

If you read the CUDA 9 RC programming guide (section B.15) that is installed with your copy of CUDA 9 RC, you will see that the new __shfl_sync() function has an additional mask parameter that you are not accounting for:

CUDA 8:

int __shfl(int var, int srcLane, int width=warpSize);

CUDA 9:
T __shfl_sync(unsigned mask, T var, int srcLane, int width=warpSize);
              ^^^^^^^^^^^^^

The expected value of this mask parameter is also documented:

The new *_sync shfl intrinsics take in a mask indicating the threads participating in the call. A bit, representing the thread's lane id, must be set for each participating thread to ensure they are properly converged before the intrinsic is executed by the hardware. All non-exited threads named in mask must execute the same intrinsic with the same mask, or the result is undefined.
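For illustration only (a hypothetical kernel, not part of the question), a mask naming only the first 16 lanes of a warp would look like this; every lane whose bit is set must reach the call:

```cuda
#include <stdio.h>

__global__ void halfWarpShfl(){
    int tid = threadIdx.x;
    if (tid < 16) {
        // Only lanes 0-15 participate, so only bits 0-15 are set in the mask.
        // All 16 named lanes execute the same intrinsic with the same mask.
        int v = __shfl_sync(0x0000FFFF, tid, 5, 16);
        if (tid == 0)
            printf("lane 0 received %d\n", v);  // value broadcast from lane 5
    }
}

int main()
{
    halfWarpShfl<<<1,32>>>();
    cudaDeviceSynchronize();
    return 0;
}
```

When the set of participating lanes is not known statically, __activemask() can be queried at the point of the call, though the programming guide cautions that this reflects convergence rather than enforcing it.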



So if we modify your code to conform to this, we get the expected result:
$ cat t419.cu
#include <stdio.h>

__global__
static void shflTest(int lid){
    int tid = threadIdx.x;
    float value = tid + 0.1f;
    int* ivalue = reinterpret_cast<int*>(&value);

    // use the integer shfl
    int ix = __shfl(ivalue[0], 5, 32);
    int iy = __shfl_sync(0xFFFFFFFF, ivalue[0], 5, 32);

    float x = reinterpret_cast<float*>(&ix)[0];
    float y = reinterpret_cast<float*>(&iy)[0];

    if (tid == lid) {
        printf("shfl tmp %d %d\n", ix, iy);
        printf("shfl final %f %f\n", x, y);
    }
}

int main()
{
    shflTest<<<1,32>>>(0);
    cudaDeviceSynchronize();
    return 0;
}
$ nvcc -arch=sm_61 -o t419 t419.cu
t419.cu(10): warning: function "__shfl(int, int, int)"
/usr/local/cuda/bin/..//include/sm_30_intrinsics.hpp(152): here was declared deprecated ("__shfl() is deprecated in favor of __shfl_sync() and may be removed in a future release (Use -Wno-deprecated-declarations to suppress this warning).")

$ cuda-memcheck ./t419
========= CUDA-MEMCHECK
shfl tmp 1084437299 1084437299
shfl final 5.100000 5.100000
========= ERROR SUMMARY: 0 errors
$
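As a side note (not part of the original answer): the sync shuffle intrinsics are templated on the value type, so in CUDA 9 a float can be shuffled directly and the reinterpret_cast round trip through an int becomes unnecessary. A minimal sketch:

```cuda
#include <stdio.h>

__global__ void shflFloat(){
    int tid = threadIdx.x;
    float value = tid + 0.1f;
    // __shfl_sync is templated (T __shfl_sync(unsigned mask, T var, ...)),
    // so the float value can be shuffled without any bit-casting
    float y = __shfl_sync(0xFFFFFFFF, value, 5, 32);
    if (tid == 0)
        printf("shfl float %f\n", y);  // lane 5's value: 5.100000
}

int main()
{
    shflFloat<<<1,32>>>();
    cudaDeviceSynchronize();
    return 0;
}
```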

Regarding CUDA 9 shfl vs shfl_sync, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/46345811/
