
Is my explanation of CUDA kernel launch parameters correct?


Here I try to explain the CUDA launch parameter model (or execution configuration model) to myself with some pseudo code, but I don't know whether it contains mistakes, so I hope someone can help review it and give me some advice. Thanks in advance.

Here it is:

/*
normally, we write a kernel function like this.
note, __global__ means this function will be called from host code
and executed on the device, and a __global__ function can only return void.
any parameters passed into a __global__ function are stored in device memory
(shared memory on compute capability 1.x devices, constant memory on newer ones).
so, a kernel function is quite different from a *normal* C/C++ function.
if I were the CUDA author, I would make the kernel function look even more
different from a normal C function.
*/

__global__ void
kernel(float *arr_on_device, int n) {
    int idx = blockIdx.x * blockDim.x + threadIdx.x;
    if (idx < n) {
        arr_on_device[idx] = arr_on_device[idx] * arr_on_device[idx];
    }
}
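
For reference, a minimal host-side launch of this kernel might look as follows (the size n, the block size of 256, and the omitted data initialization and error checking are all assumptions of this sketch):

int n = 1024;
float *d_arr;
cudaMalloc(&d_arr, n * sizeof(float));
kernel<<<(n + 255) / 256, 256>>>(d_arr, n);   /* 4 blocks of 256 threads */
cudaDeviceSynchronize();
cudaFree(d_arr);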

/*
after this definition, we can call this kernel function from our normal C/C++ code,
as shown above !!
do you feel something weird ? inconsistent ?
normally, when I write C code, I think a lot about how execution proceeds down to
the metal, and this one... it's like some fragile code that breaks the sequential
thinking process in my mind.
in order to make things feel normal again, I found a way to explain it: I expand the
*__global__* function into some pseudo code:
*/

#define __foreach(var, start, end) for (var = start; var < end; ++var)

__device__ int
__indexing() {
    const int blockId = blockIdx.x +
                        blockIdx.y * gridDim.x +
                        blockIdx.z * gridDim.x * gridDim.y;

    return
        blockId * (blockDim.x * blockDim.y * blockDim.z) +
        threadIdx.z * (blockDim.x * blockDim.y) +
        threadIdx.y * blockDim.x +
        threadIdx.x;
}

global_config =:
{
    /*
    global configuration.
    note the default values are all 1, so in the kernel code
    we can simply ignore those dimensions.
    */
    gridDim.x = gridDim.y = gridDim.z = 1;
    blockDim.x = blockDim.y = blockDim.z = 1;
};

kernel =:
{
    /*
    I think CUDA does some evil detail-covering here.
    it's said that CUDA C is an extension of C, but in my mind
    CUDA C is more like C++, and the *<<<>>>* part is too tricky.
    for example:
    kernel<<<10, 32>>>(); means kernel will execute in 10 blocks, each having 32 threads.

    dim3 dimG(10, 1, 1);
    dim3 dimB(32, 1, 1);
    kernel<<<dimG, dimB>>>(); this is exactly the same thing as above.

    it's neither C style nor C++ style, is it ? at first I thought this could be done
    by C++'s constructor stuff, and indeed *dim3* has a constructor with default
    arguments (see its declaration below), so the scalar forms convert implicitly;
    still, the *<<<>>>* syntax itself breaks the semantics of both C and C++.
    I think forcing the user to write *kernel<<<dim3, dim3>>>* would be better,
    so I'd like to keep that rule in my future code.
    */

    gridDim = dimG;
    blockDim = dimB;

    __foreach(blockIdx.z, 0, gridDim.z)
    __foreach(blockIdx.y, 0, gridDim.y)
    __foreach(blockIdx.x, 0, gridDim.x)
    __foreach(threadIdx.z, 0, blockDim.z)
    __foreach(threadIdx.y, 0, blockDim.y)
    __foreach(threadIdx.x, 0, blockDim.x)
    {
        const int idx = __indexing();
        if (idx < n) {
            arr_on_device[idx] = arr_on_device[idx] * arr_on_device[idx];
        }
    }
};

/*
so, for me, gridDim & blockDim are like boundaries.
e.g. gridDim.x is the upper bound of blockIdx.x; this was not that obvious to people like me.
*/
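
To make those bounds concrete, here is a small illustration with hypothetical launch values:

/* kernel<<<dim3(2, 3, 1), dim3(32, 1, 1)>>>(...) gives:
   blockIdx.x  in [0, 2), blockIdx.y in [0, 3), blockIdx.z in [0, 1)
   threadIdx.x in [0, 32), threadIdx.y = threadIdx.z = 0
   => 2 * 3 * 32 = 192 threads in total */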

/* the declaration of dim3 from vector_types.h of CUDA/include */
struct __device_builtin__ dim3
{
    unsigned int x, y, z;
#if defined(__cplusplus)
    __host__ __device__ dim3(unsigned int vx = 1, unsigned int vy = 1, unsigned int vz = 1) : x(vx), y(vy), z(vz) {}
    __host__ __device__ dim3(uint3 v) : x(v.x), y(v.y), z(v.z) {}
    __host__ __device__ operator uint3(void) { uint3 t; t.x = x; t.y = y; t.z = z; return t; }
#endif /* __cplusplus */
};

typedef __device_builtin__ struct dim3 dim3;
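
Note that this declaration also explains the scalar launch form: because the constructor defaults vy and vz to 1, an int converts implicitly to dim3. A small illustration (the arguments are hypothetical):

// an int converts implicitly to dim3 via the defaulted constructor,
// so these two launches are equivalent:
kernel<<<10, 32>>>(d_arr, n);                          // dim3(10,1,1), dim3(32,1,1)
kernel<<<dim3(10, 1, 1), dim3(32, 1, 1)>>>(d_arr, n);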

Best Answer

CUDA Driver API

CUDA Driver API v4.0 and above use the following functions to control a kernel launch:

cuFuncSetCacheConfig
cuFuncSetSharedMemConfig
cuLaunchKernel

The following CUDA Driver API functions were used prior to the introduction of cuLaunchKernel in v4.0:

cuFuncSetBlockShape()
cuFuncSetSharedSize()
cuParamSet{Size,i,fv}()
cuLaunch()
cuLaunchGrid()

More information on these functions can be found in cuda.h.
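
As a rough illustration only (not part of the original answer), a pre-v4.0 launch of the question's kernel would look something like this, assuming fn, d_arr and n are set up as in the cuLaunchKernel sketch further below:

cuFuncSetBlockShape(fn, 256, 1, 1);                    /* block dimensions */
cuParamSetv(fn, 0, &d_arr, sizeof(CUdeviceptr));       /* first kernel argument */
cuParamSeti(fn, sizeof(CUdeviceptr), n);               /* second kernel argument */
cuParamSetSize(fn, sizeof(CUdeviceptr) + sizeof(int)); /* total argument size */
cuLaunchGrid(fn, 4, 1);                                /* grid dimensions */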

CUresult CUDAAPI cuLaunchKernel(CUfunction f,
                                unsigned int gridDimX,
                                unsigned int gridDimY,
                                unsigned int gridDimZ,
                                unsigned int blockDimX,
                                unsigned int blockDimY,
                                unsigned int blockDimZ,
                                unsigned int sharedMemBytes,
                                CUstream hStream,
                                void **kernelParams,
                                void **extra);

cuLaunchKernel takes the entire launch configuration as parameters.

See the NVIDIA Driver API [Execution Control] documentation for more details.
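
For illustration, here is a minimal sketch of launching the question's kernel through the Driver API; the module file name "kernel.cubin" is an assumption, and error checking is omitted:

CUdevice dev;
CUcontext ctx;
CUmodule mod;
CUfunction fn;
CUdeviceptr d_arr;
int n = 1024;

cuInit(0);
cuDeviceGet(&dev, 0);
cuCtxCreate(&ctx, 0, dev);
cuModuleLoad(&mod, "kernel.cubin");            /* assumed module name */
cuModuleGetFunction(&fn, mod, "_Z6kernelPfi"); /* mangled name of kernel(float*, int) */
cuMemAlloc(&d_arr, n * sizeof(float));

void *args[] = { &d_arr, &n };                 /* addresses of the kernel arguments */
cuLaunchKernel(fn,
               4, 1, 1,                        /* gridDimX/Y/Z  */
               256, 1, 1,                      /* blockDimX/Y/Z */
               0,                              /* sharedMemBytes */
               NULL,                           /* hStream: default stream */
               args, NULL);
cuCtxSynchronize();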

CUDA Kernel Launch

cuLaunchKernel will:

1. verify the launch parameters
2. change the shared memory configuration
3. change the local memory allocation
4. push a stream synchronization token into the command buffer to make sure two commands in the stream do not overlap
5. push the launch parameters into the command buffer
6. push the launch command into the command buffer
7. submit the command buffer to the device (on WDDM drivers this step may be deferred)
8. on WDDM, the kernel driver will page in all memory required in device memory

The GPU will:

1. verify the command
2. send the command to the compute work distributor
3. dispatch the launch configuration and thread blocks to the SMs

When all thread blocks have completed, the work distributor will flush the caches to honor the CUDA memory model, and it will mark the kernel as completed so the next item in the stream can make forward progress.

The order in which thread blocks are dispatched differs between architectures.

Compute capability 1.x devices store the kernel parameters in shared memory. Compute capability 2.0-3.5 devices store the kernel parameters in constant memory.

CUDA Runtime API

The CUDA Runtime is a C++ software library and build-tool chain built on top of the CUDA Driver API. The CUDA Runtime uses the following functions to control a kernel launch:

cudaConfigureCall
cudaFuncSetCacheConfig
cudaFuncSetSharedMemConfig
cudaLaunch
cudaSetupArgument

See the NVIDIA Runtime API [Execution Control] documentation for more details.

The <<<>>> CUDA language extension is the most common method used to launch a kernel.

During compilation, nvcc will create a new CPU stub function for each kernel function called using <<<>>>, and it will replace each <<<>>> call with a call to the corresponding stub function.

For example,

__global__ void kernel(float* buf, int j)
{
// ...
}

kernel<<<blocks,threads,0,myStream>>>(d_buf,j);

produces

void __device_stub__Z6kernelPfi(float *__par0, int __par1)
{
    __cudaSetupArgSimple(__par0, 0U);
    __cudaSetupArgSimple(__par1, 4U);
    __cudaLaunch(((char *)((void (*)(float *, int))kernel)));
}

You can inspect the generated files by adding --keep to your nvcc command line.
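
For example (the source file name is an assumption):

nvcc --keep kernel.cu

Among the kept intermediate files, the generated stub appears in one of the *.stub.c files.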

cudaLaunch calls cuLaunchKernel.

CUDA Dynamic Parallelism

CUDA Dynamic Parallelism (CDP) works similarly to the CUDA Runtime API described above, except that the launch is issued from device code.
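
A minimal sketch of a device-side launch (an illustration, not from the original answer); it requires compute capability 3.5+ and relocatable device code:

/* compile with, e.g.: nvcc -arch=sm_35 -rdc=true cdp.cu -lcudadevrt */
__global__ void child(float *buf, int n)
{
    int idx = blockIdx.x * blockDim.x + threadIdx.x;
    if (idx < n) buf[idx] *= buf[idx];
}

__global__ void parent(float *buf, int n)
{
    /* the same <<<>>> syntax, but executed on the device;
       guarded so only one thread issues the launch */
    if (threadIdx.x == 0 && blockIdx.x == 0)
        child<<<(n + 255) / 256, 256>>>(buf, n);
}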

Regarding "Is my explanation of CUDA kernel launch parameters correct?", there is a similar question on Stack Overflow: https://stackoverflow.com/questions/19240658/
