
cuda - Different results for CUDA additions on the host and on the GPU


I have a function that takes a color picture and returns a grayscale version of it.

If I run the sequential code on the host, everything works perfectly. If I run it on the device, the result is slightly different (about one pixel in a thousand is off by +1 or -1 compared to the correct value).

I think it has something to do with the conversions, but I am not sure. This is the code I use:

    __global__ void rgb2gray_d (unsigned char *deviceImage, unsigned char *deviceResult, const int height, const int width){
        /* calculate the global thread id */
        int threadsPerBlock = blockDim.x * blockDim.y;
        int threadNumInBlock = threadIdx.x + blockDim.x * threadIdx.y;
        int blockNumInGrid = blockIdx.x + gridDim.x * blockIdx.y;

        int globalThreadNum = blockNumInGrid * threadsPerBlock + threadNumInBlock;
        int i = globalThreadNum;

        /* the input is stored as three consecutive planes: R, then G, then B */
        float grayPix = 0.0f;
        float r = static_cast< float >(deviceImage[i]);
        float g = static_cast< float >(deviceImage[(width * height) + i]);
        float b = static_cast< float >(deviceImage[(2 * width * height) + i]);
        grayPix = (0.3f * r) + (0.59f * g) + (0.11f * b);

        deviceResult[i] = static_cast< unsigned char > (grayPix);
    }

    void rgb2gray(unsigned char *inputImage, unsigned char *grayImage, const int width, const int height, NSTimer &timer) {

        unsigned char *deviceImage;
        unsigned char *deviceResult;

        int initialBytes = width * height * 3;                  // three color planes
        int endBytes = width * height * sizeof(unsigned char);  // one gray plane

        // note: a variable-length array is a compiler extension, not standard C++
        unsigned char grayImageSeq[endBytes];

        cudaMalloc((void**) &deviceImage, initialBytes);
        cudaMalloc((void**) &deviceResult, endBytes);
        cudaMemset(deviceResult, 0, endBytes);
        cudaMemset(deviceImage, 0, initialBytes);

        cudaError_t err = cudaMemcpy(deviceImage, inputImage, initialBytes, cudaMemcpyHostToDevice);

        // Convert the input image to grayscale
        // (this launch assumes width * height is a multiple of 256)
        rgb2gray_d<<<width * height / 256, 256>>>(deviceImage, deviceResult, height, width);
        cudaDeviceSynchronize();

        cudaMemcpy(grayImage, deviceResult, endBytes, cudaMemcpyDeviceToHost);

        ////// Sequential
        for ( int y = 0; y < height; y++ ) {
            for ( int x = 0; x < width; x++ ) {
                float grayPix = 0.0f;
                float r = static_cast< float >(inputImage[(y * width) + x]);
                float g = static_cast< float >(inputImage[(width * height) + (y * width) + x]);
                float b = static_cast< float >(inputImage[(2 * width * height) + (y * width) + x]);

                grayPix = (0.3f * r) + (0.59f * g) + (0.11f * b);
                grayImageSeq[(y * width) + x] = static_cast< unsigned char > (grayPix);
            }
        }

        // compare sequential and CUDA results and print the pixels that differ
        for (int i = 0; i < endBytes; i++)
        {
            if (grayImage[i] != grayImageSeq[i])
                cout << i << "-" << static_cast< unsigned int >(grayImage[i]) <<
                    " should be " << static_cast< unsigned int >(grayImageSeq[i]) << endl;
        }

        cudaFree(deviceImage);
        cudaFree(deviceResult);
    }
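
As an aside, the return code of the cudaMemcpy above is stored in err but never inspected; a minimal sketch of a check that could follow that call (using the same variable names as the question):

    if (err != cudaSuccess) {
        cout << "cudaMemcpy failed: " << cudaGetErrorString(err) << endl;
        return;
    }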

I should mention that I allocate width * height * 3 bytes for the initial image because the initial image is a CImg.
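
For readers unfamiliar with CImg, here is a minimal sketch of that layout (the file name "input.png" and the pixel (0, 0) are just placeholders): CImg keeps its pixel buffer planar, all red values first, then all green, then all blue, which is exactly the layout both versions above index into.

    #include <cstdio>
    #include "CImg.h"
    using namespace cimg_library;

    int main() {
        CImg<unsigned char> img("input.png");   // placeholder file name
        const int width  = img.width();
        const int height = img.height();
        const int x = 0, y = 0;                 // an example pixel

        // img.data() points at the red plane; the green and blue planes
        // follow at offsets width*height and 2*width*height respectively
        unsigned char *inputImage = img.data();
        unsigned char r = inputImage[(y * width) + x];
        unsigned char g = inputImage[(width * height) + (y * width) + x];
        unsigned char b = inputImage[(2 * width * height) + (y * width) + x];
        std::printf("pixel (0,0): r=%d g=%d b=%d\n", r, g, b);
        return 0;
    }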

I am working on a GeForce GTX 480.

Best answer

Finally I found the answer. CUDA automatically fuses single-precision and double-precision multiplies and adds into FMA (fused multiply-add) instructions. Using the document linked below, section 4.4, I managed to fix it. Instead of doing

    grayPix = (0.3f * r) + (0.59f * g) + (0.11f * b);

I am now doing

    grayPix = __fadd_rn(__fadd_rn(__fmul_rn(0.3f, r), __fmul_rn(0.59f, g)), __fmul_rn(0.11f, b));

This disables the merging of the multiplies and adds into fused multiply-add instructions, so the device performs the same individually rounded operations as the host.
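
The reason the fused and unfused results can differ: an FMA computes a*b + c with a single rounding step, while a separate multiply and add round twice. The following host-side sketch uses the standard fmaf function to emulate what the GPU's fused instruction does; the sample pixel values are arbitrary, and whether the last bit actually differs depends on the inputs:

    #include <cmath>
    #include <cstdio>

    int main() {
        float r = 101.0f, g = 173.0f, b = 42.0f;   // arbitrary sample values

        // separate operations: every multiply and add rounds its result
        float separate = (0.3f * r) + (0.59f * g) + (0.11f * b);

        // std::fmaf rounds a*b + c only once, like the GPU's fused instruction,
        // which is how the compiler is allowed to contract the line above
        float fused = std::fmaf(0.11f, b, std::fmaf(0.59f, g, 0.3f * r));

        // a difference in the last bit becomes the +/-1 discrepancy from the
        // question once the value is truncated to unsigned char
        std::printf("separate: %.9g  fused: %.9g\n", separate, fused);
        return 0;
    }

As a side note that is not part of the original answer: nvcc also accepts the --fmad=false flag, which disables this contraction for a whole compilation unit instead of rewriting each expression with intrinsics.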

Floating Point and IEEE 754 Compliance for NVIDIA GPUs

Regarding cuda - Different results for CUDA additions on the host and on the GPU, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/14406364/
