
cuda - When parameters are passed by value to a kernel function, where are they copied to?

Reposted · Author: 行者123 · Updated: 2023-12-04 03:07:50

I am a beginner in CUDA programming and have a question.

When I pass parameters by value, like this:

__global__ void add(int a, int b, int *c) {
    // some operations, e.g. *c = a + b;
}

Since the variables a and b are passed to the kernel function add as values copied onto the function call stack, I assume some memory space is needed to hold the copies.

If I am right, is the extra memory space used to copy those parameters located on the GPU, or in the host's main memory?

The reason I want to know is that I need to pass a large struct to a kernel function.

I also considered passing a pointer to the struct, but that approach seems to require calling cudaMalloc for the struct and for each member variable.
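The two options being weighed can be sketched as follows (the struct, field names, and kernel names are illustrative, not from the original question):

```cuda
#include <cuda_runtime.h>

// Hypothetical struct; the names are illustrative.
struct Params {
    int   width;
    int   height;
    float scale;
};

// Option 1: pass the struct by value. The entire struct is copied into the
// kernel-parameter buffer by the launch itself; no cudaMalloc is needed,
// as long as the total parameter size stays within the per-launch limit.
__global__ void kernelByValue(Params p, float *out) {
    // ... use p.width, p.height, p.scale ...
}

// Option 2: pass a device pointer. Only the pointer travels through the
// parameter buffer; the struct must first be copied to device memory.
__global__ void kernelByPointer(const Params *p, float *out) {
    // ... use p->width, p->height, p->scale ...
}

void launchBoth(float *d_out) {
    Params h = {1024, 768, 0.5f};

    kernelByValue<<<1, 1>>>(h, d_out);   // struct copied automatically

    Params *d_p;
    cudaMalloc(&d_p, sizeof(Params));    // one allocation for the whole struct
    cudaMemcpy(d_p, &h, sizeof(Params), cudaMemcpyHostToDevice);
    kernelByPointer<<<1, 1>>>(d_p, d_out);
    cudaFree(d_p);
}
```

Note that a struct containing only plain values needs a single cudaMalloc, not one per member; per-member allocations only become necessary when the struct itself contains pointer members that must reference device memory.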

Best Answer

The very short answer is that all arguments to CUDA kernels are passed by value, and those arguments are copied by the host via the API into a dedicated memory parameter buffer on the GPU. At present, this buffer is stored in constant memory, and there is a 4 KB limit on the parameters per kernel launch -- see here.


In more detail, the PTX standard (technically, since the appearance of compute capability 2.0 hardware and the CUDA ABI) defines a dedicated logical state space called .param which holds kernel and device function arguments. See here. Quoting that document:

Each kernel function definition includes an optional list of parameters. These parameters are addressable, read-only variables declared in the .param state space. Values passed from the host to the kernel are accessed through these parameter variables using ld.param instructions. The kernel parameter variables are shared across all CTAs within a grid.

It goes on to note:

Note: The location of parameter space is implementation specific. For example, in some implementations kernel parameters reside in global memory. No access protection is provided between parameter and global space in this case. Similarly, function parameters are mapped to parameter passing registers and/or stack locations based on the function calling conventions of the Application Binary Interface (ABI).

So the precise location of the parameter state space is implementation specific. In the first iteration of CUDA hardware, it actually mapped to shared memory for kernel arguments and to registers for device function arguments. However, since compute capability 2.0 hardware and the PTX 2.2 standard, it maps to constant memory for kernels in most cases. The documentation says the following on the matter:

The constant (.const) state space is a read-only memory initialized by the host. Constant memory is accessed with a ld.const instruction. Constant memory is restricted in size, currently limited to 64 KB which can be used to hold statically-sized constant variables. There is an additional 640 KB of constant memory, organized as ten independent 64 KB regions. The driver may allocate and initialize constant buffers in these regions and pass pointers to the buffers as kernel function parameters. Since the ten regions are not contiguous, the driver must ensure that constant buffers are allocated so that each buffer fits entirely within a 64 KB region and does not span a region boundary.

Statically-sized constant variables have an optional variable initializer; constant variables with no explicit initializer are initialized to zero by default. Constant buffers allocated by the driver are initialized by the host, and pointers to such buffers are passed to the kernel as parameters.

[emphasis mine]
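The size of the user-visible constant memory described above can be queried at runtime through the standard CUDA runtime API (a minimal host-side sketch; the driver-managed parameter banks are not reported separately):

```cuda
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    cudaDeviceProp prop;
    cudaGetDeviceProperties(&prop, 0);   // properties of device 0
    // totalConstMem reports the user-visible constant memory
    // (64 KB on most devices).
    printf("Constant memory: %zu bytes\n", prop.totalConstMem);
    return 0;
}
```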

So while kernel arguments are stored in constant memory, this is not the same constant memory which maps to the .const state space and which is accessible by defining a variable as __constant__ in CUDA C, or the equivalent in Fortran or Python. Rather, it is an internal pool of device memory managed by the driver, which is not directly accessible to the programmer.
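The distinction can be illustrated with a short sketch (variable and kernel names are hypothetical): a __constant__ variable lives in the .const state space and is initialized explicitly by the host, whereas kernel arguments are placed in the driver-managed parameter buffer automatically at launch.

```cuda
#include <cuda_runtime.h>

// User-declared constant memory in the .const state space.
__constant__ float coeffs[16];

// The parameters (data, n) go through the driver-managed parameter
// buffer; coeffs is read from user-visible constant memory.
__global__ void scale(float *data, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        data[i] *= coeffs[i % 16];
}

void setup(float *d_data, int n) {
    float h_coeffs[16];
    for (int i = 0; i < 16; ++i) h_coeffs[i] = 1.0f;
    // The host initializes __constant__ memory explicitly:
    cudaMemcpyToSymbol(coeffs, h_coeffs, sizeof(h_coeffs));
    scale<<<(n + 255) / 256, 256>>>(d_data, n);
}
```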

Regarding cuda - When parameters are passed by value to a kernel function, where are they copied to?, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/47485401/

Copyright 2021 - 2024 cfsdn All Rights Reserved 蜀ICP备2022000587号