
c++ - Using CUDA thread indices as numbers


I am new to CUDA and GPGPU. I am trying to check a property of a large quantity of numbers (larger than 32 bits), and I would like to try doing this on a Windows 7 64-bit machine equipped with an nVidia GTX 1080:

Detected 1 CUDA Capable device(s)

Device 0: "GeForce GTX 1080"
CUDA Driver Version / Runtime Version 8.0 / 8.0
CUDA Capability Major/Minor version number: 6.1
Total amount of global memory: 8192 MBytes (8589934592 bytes)
(20) Multiprocessors, (128) CUDA Cores/MP: 2560 CUDA Cores
GPU Max Clock rate: 1734 MHz (1.73 GHz)
Memory Clock rate: 5005 Mhz
Memory Bus Width: 256-bit
L2 Cache Size: 2097152 bytes
Maximum Texture Dimension Size (x,y,z) 1D=(131072), 2D=(131072, 65536), 3D=(16384, 16384, 16384)
Maximum Layered 1D Texture Size, (num) layers 1D=(32768), 2048 layers
Maximum Layered 2D Texture Size, (num) layers 2D=(32768, 32768), 2048 layers
Total amount of constant memory: 65536 bytes
Total amount of shared memory per block: 49152 bytes
Total number of registers available per block: 65536
Warp size: 32
Maximum number of threads per multiprocessor: 2048
Maximum number of threads per block: 1024
Max dimension size of a thread block (x,y,z): (1024, 1024, 64)
Max dimension size of a grid size (x,y,z): (2147483647, 65535, 65535)
Maximum memory pitch: 2147483647 bytes
Texture alignment: 512 bytes
Concurrent copy and kernel execution: Yes with 2 copy engine(s)
Run time limit on kernels: Yes
Integrated GPU sharing Host Memory: No
Support host page-locked memory mapping: Yes
Alignment requirement for Surfaces: Yes
Device has ECC support: Disabled
CUDA Device Driver Mode (TCC or WDDM): WDDM (Windows Display Driver Model)
Device supports Unified Addressing (UVA): Yes
Device PCI Domain ID / Bus ID / location ID: 0 / 1 / 0
Compute Mode:
< Default (multiple host threads can use ::cudaSetDevice() with device simultaneously) >

When I run the following code, the value of "sum" is nonsense (28, 20, and so on), even though I can see threadId running from 0 to 4095:

#include <cuda.h>
#include <cuda_runtime.h>
#include "device_launch_parameters.h"
#include "stdio.h"

__global__ void Simple(unsigned long long int *sum)
{
    unsigned long long int blockId = blockIdx.x + blockIdx.y * gridDim.x
                                   + gridDim.x * gridDim.y * blockIdx.z;

    unsigned long long int threadId = blockId * (blockDim.x * blockDim.y * blockDim.z)
                                    + (threadIdx.z * (blockDim.x * blockDim.y))
                                    + (threadIdx.y * blockDim.x)
                                    + threadIdx.x;

    printf("threadId = %llu.\n", threadId);
    // Check threadId for property. Possibly introduce a grid stride for loop to give each thread a range to check.
    sum[0]++;
}

int main(int argc, char **argv)
{
    unsigned long long int sum[] = { 0 };

    unsigned long long int *dev_sum;

    cudaMalloc((void**)&dev_sum, sizeof(unsigned long long int));
    cudaMemcpy(dev_sum, sum, sizeof(unsigned long long int), cudaMemcpyHostToDevice);

    dim3 grid(2, 1, 1);
    dim3 block(1024, 1, 1);

    printf("--------- Start kernel ---------\n\n");
    Simple <<< grid, block >>> (dev_sum);
    cudaDeviceSynchronize();

    cudaMemcpy(sum, dev_sum, sizeof(unsigned long long int), cudaMemcpyDeviceToHost);

    printf("sum = %llu.\n", sum[0]);

    cudaFree(dev_sum);

    getchar();

    return 0;
}

How would I modify this kernel invocation so that, by adding a grid-stride loop, the maximum number of threads (for my setup) runs over the range of numbers from 0 to 10^12?

dim3 grid(2, 1, 1);
dim3 block(1024, 1, 1);

Simple <<< grid, block >>> (dev_sum);

Best Answer

All of the threads increment the same location in memory, which creates a race condition. That is why the result is incorrect. You should use an atomic add to make it correct (CUDA provides a function for this).
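As a minimal sketch of how both issues might be handled together, here is the kernel rewritten with an atomic add (the fix suggested above) and the grid-stride loop the question asks about. The predicate isCandidate, the 10^12 upper bound, and the 40 x 1024 launch configuration (20 multiprocessors x 2048 resident threads on the GTX 1080) are illustrative assumptions, not part of the original code:

#include <cuda_runtime.h>
#include <stdio.h>

// Hypothetical predicate -- replace with the real property being tested.
__device__ bool isCandidate(unsigned long long int n)
{
    return (n % 1000000ULL) == 0ULL;   // placeholder property
}

__global__ void CountCandidates(unsigned long long int *count, unsigned long long int rangeEnd)
{
    // Global 1-D thread index and the total number of launched threads.
    unsigned long long int tid    = blockIdx.x * (unsigned long long int)blockDim.x + threadIdx.x;
    unsigned long long int stride = (unsigned long long int)gridDim.x * blockDim.x;

    unsigned long long int localCount = 0;

    // Grid-stride loop: this thread tests tid, tid + stride, tid + 2*stride, ...
    for (unsigned long long int n = tid; n < rangeEnd; n += stride)
    {
        if (isCandidate(n))
            ++localCount;
    }

    // A single atomic add per thread avoids the race condition on *count.
    atomicAdd(count, localCount);
}

int main()
{
    unsigned long long int h_count = 0, *d_count;

    cudaMalloc((void**)&d_count, sizeof(unsigned long long int));
    cudaMemcpy(d_count, &h_count, sizeof(unsigned long long int), cudaMemcpyHostToDevice);

    // 20 multiprocessors x 2048 resident threads = 40960 threads keeps the GPU occupied;
    // the grid-stride loop covers the rest of the range.
    dim3 block(1024, 1, 1);
    dim3 grid(40, 1, 1);

    CountCandidates <<< grid, block >>> (d_count, 1000000000000ULL);   // 0 to 10^12
    cudaDeviceSynchronize();

    cudaMemcpy(&h_count, d_count, sizeof(unsigned long long int), cudaMemcpyDeviceToHost);
    printf("count = %llu.\n", h_count);

    cudaFree(d_count);
    return 0;
}

Note that the deviceQuery output above shows "Run time limit on kernels: Yes" (WDDM), so a single launch covering all 10^12 numbers may be aborted by the Windows display watchdog; splitting the range across several shorter launches avoids that.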

Regarding c++ - Using CUDA thread indices as numbers, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/42411898/
