
c++ - Why doesn't my data fit into a CUDA texture object?


I am trying to fill a CUDA texture object with some data, but the call to cudaCreateTextureObject fails with the following error (Edit: on both a GTX 1080 Ti and an RTX 2080 Ti):

GPU ERROR! 'invalid argument' (err code 11)

If I put less data into my texture, it works, so my guess is that my calculation of how much data I can fit into a texture is wrong.

My thought process is as follows (executable code below):

My data comes in the form of (76, 76) images where each pixel is a float. What I would like to do is store a column of images in a texture object; as I understand it, cudaMallocPitch is the way to do this.

When computing the number of images I can store in one texture, I use the following formula to determine how much space a single image needs:

GTX_1080TI_MEM_PITCH * img_dim_y * sizeof(float)

The first factor should be the memory pitch on a GTX 1080 Ti card (512 bytes). The number of bytes I can store in a 1D texture is 2^27, as stated here. When I divide the latter by the former I get 862.3, which I assume is the number of images I can store in one texture object. However, the program crashes with the above error whenever I try to store more than 855 images in my buffer.

Here's the code:

In the main function below, I (a) set up all relevant parameters, (b) allocate the memory using cudaMallocPitch, and (c) configure and create a CUDA texture object:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#include <cassert>

#define GTX_1080TI_MEM_PITCH 512
#define GTX_1080TI_1DTEX_WIDTH 134217728 // 2^27

//=====================================================================[ util ]

// CUDA error checking for library functions
#define CUDA_ERR_CHK(func){ cuda_assert( (func), __FILE__, __LINE__ ); }
inline void cuda_assert( const cudaError_t cu_err, const char* file, int line ){
    if( cu_err != cudaSuccess ){
        fprintf( stderr, "\nGPU ERROR! \'%s\' (err code %d) in file %s, line %d.\n\n", cudaGetErrorString(cu_err), cu_err, file, line );
        exit( EXIT_FAILURE );
    }
}

// CUDA generic error checking (used after kernel calls)
#define GPU_ERR_CHK(){ gpu_assert(__FILE__, __LINE__); }
inline void gpu_assert( const char* file, const int line ){
    cudaError cu_err = cudaGetLastError();
    if( cu_err != cudaSuccess ){
        fprintf( stderr, "\nGPU KERNEL ERROR! \'%s\' (err code %d) in file %s, line %d.\n\n", cudaGetErrorString(cu_err), cu_err, file, line );
        exit(EXIT_FAILURE);
    }
}

//=====================================================================[ main ]

int main(){

    // setup
    unsigned int img_dim_x = 76;
    unsigned int img_dim_y = 76;
    unsigned int img_num = 856; // <-- NOTE: set this to 855 and it should work - but we should be able to put 862 here?

    unsigned int pitched_img_size = GTX_1080TI_MEM_PITCH * img_dim_y * sizeof(float);
    unsigned int img_num_per_tex = GTX_1080TI_1DTEX_WIDTH / pitched_img_size;

    fprintf( stderr, "We should be able to stuff %d images into one texture.\n", img_num_per_tex );
    fprintf( stderr, "We use %d (more than 855 leads to a crash).\n", img_num );

    // allocate pitched memory
    size_t img_tex_pitch;
    float* d_img_tex_data;

    CUDA_ERR_CHK( cudaMallocPitch( &d_img_tex_data, &img_tex_pitch, img_dim_x*sizeof(float), img_dim_y*img_num ) );

    assert( img_tex_pitch == GTX_1080TI_MEM_PITCH );
    fprintf( stderr, "Asking for %zd bytes allocates %zd bytes using pitch %zd. Available: %zd/%d\n",
             img_num*img_dim_x*img_dim_y*sizeof(float),
             img_num*img_tex_pitch*img_dim_y*sizeof(float),
             img_tex_pitch,
             GTX_1080TI_1DTEX_WIDTH - img_num*img_tex_pitch*img_dim_y*sizeof(float),
             GTX_1080TI_1DTEX_WIDTH );

    // generic resource descriptor
    cudaResourceDesc res_desc;
    memset(&res_desc, 0, sizeof(res_desc));
    res_desc.resType = cudaResourceTypePitch2D;
    res_desc.res.pitch2D.desc = cudaCreateChannelDesc<float>();
    res_desc.res.pitch2D.devPtr = d_img_tex_data;
    res_desc.res.pitch2D.width = img_dim_x;
    res_desc.res.pitch2D.height = img_dim_y*img_num;
    res_desc.res.pitch2D.pitchInBytes = img_tex_pitch;

    // texture descriptor
    cudaTextureDesc tex_desc;
    memset(&tex_desc, 0, sizeof(tex_desc));
    tex_desc.addressMode[0] = cudaAddressModeClamp;
    tex_desc.addressMode[1] = cudaAddressModeClamp;
    tex_desc.filterMode = cudaFilterModeLinear; // for linear interpolation (NOTE: this breaks normal integer indexing!)
    tex_desc.readMode = cudaReadModeElementType;
    tex_desc.normalizedCoords = false; // we want to index using [0;img_dim] rather than [0;1]

    // make sure there are no lingering errors
    GPU_ERR_CHK();
    fprintf(stderr, "No CUDA error until now..\n");

    // create texture object
    cudaTextureObject_t img_tex_obj;
    CUDA_ERR_CHK( cudaCreateTextureObject(&img_tex_obj, &res_desc, &tex_desc, NULL) );

    fprintf(stderr, "bluppi\n");
}

This crashes when cudaCreateTextureObject is called. However, if the img_num parameter (at the top of main) is changed from 856 to 855, the code executes successfully. (Edit: the expected behavior would be for the code to run with a value of 862 but fail with 863, since that would actually require more bytes than the documented buffer size provides.)

Any help would be greatly appreciated!

Best Answer

Since you are using a 2D texture here, the number of bytes you can store in a 1D texture (the "width") is irrelevant.

2D textures may have different characteristics depending on the type of memory that backs the texture. Two examples are linear memory and CUDA arrays. You have chosen a linear-memory backing (provided by a cudaMalloc* operation other than cudaMallocArray).

The main problem you are running into is the maximum texture height. To find out what it is, we can refer to Table 14 in the programming guide, which lists:

Maximum width and height for a 2D texture reference bound to linear memory: 65000 x 65000

With an image height of 76 rows, you exceed this 65000 limit when going from 855 to 856 images: 856*76 = 65056, while 855*76 = 64980.

"But wait," you say, "that Table 14 entry says texture reference, and I'm using a texture object."

You are correct: Table 14 doesn't explicitly list the corresponding limit for texture objects. In that case, we have to refer to the device properties readable from the device at runtime with cudaGetDeviceProperties(). If we review the data available there, we see this readable item:

maxTexture2DLinear[3] contains the maximum 2D texture dimensions for 2D textures bound to pitch linear memory.

(I suspect the 3 is a typo, but no matter; we only need the first 2 values.)

This is the value we want to check. If we modify your code to respect that limit, there are no problems:

$ cat t382.cu
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#include <cassert>

#define GTX_1080TI_MEM_PITCH 512
#define GTX_1080TI_1DTEX_WIDTH 134217728 // 2^27

//=====================================================================[ util ]

// CUDA error checking for library functions
#define CUDA_ERR_CHK(func){ cuda_assert( (func), __FILE__, __LINE__ ); }
inline void cuda_assert( const cudaError_t cu_err, const char* file, int line ){
    if( cu_err != cudaSuccess ){
        fprintf( stderr, "\nGPU ERROR! \'%s\' (err code %d) in file %s, line %d.\n\n", cudaGetErrorString(cu_err), cu_err, file, line );
        exit( EXIT_FAILURE );
    }
}

// CUDA generic error checking (used after kernel calls)
#define GPU_ERR_CHK(){ gpu_assert(__FILE__, __LINE__); }
inline void gpu_assert( const char* file, const int line ){
    cudaError cu_err = cudaGetLastError();
    if( cu_err != cudaSuccess ){
        fprintf( stderr, "\nGPU KERNEL ERROR! \'%s\' (err code %d) in file %s, line %d.\n\n", cudaGetErrorString(cu_err), cu_err, file, line );
        exit(EXIT_FAILURE);
    }
}

//=====================================================================[ main ]

int main(){

    cudaDeviceProp prop;
    cudaGetDeviceProperties(&prop, 0);
    size_t max2Dtexturelinearwidth = prop.maxTexture2DLinear[0];  // texture x dimension
    size_t max2Dtexturelinearheight = prop.maxTexture2DLinear[1]; // texture y dimension
    fprintf( stderr, "maximum 2D linear texture dimensions (width,height): %lu,%lu\n", max2Dtexturelinearwidth, max2Dtexturelinearheight);

    // setup
    unsigned int img_dim_x = 76;
    unsigned int img_dim_y = 76;
    //unsigned int img_num = 856; // <-- NOTE: set this to 855 and it should work - but we should be able to put 862 here?
    unsigned int img_num = max2Dtexturelinearheight/img_dim_y;
    fprintf( stderr, "maximum number of images per texture: %u\n", img_num);

    unsigned int pitched_img_size = GTX_1080TI_MEM_PITCH * img_dim_y * sizeof(float);
    unsigned int img_num_per_tex = GTX_1080TI_1DTEX_WIDTH / pitched_img_size;

    fprintf( stderr, "We should be able to stuff %d images into one texture.\n", img_num_per_tex );
    fprintf( stderr, "We use %d (more than 855 leads to a crash).\n", img_num );

    // allocate pitched memory
    size_t img_tex_pitch;
    float* d_img_tex_data;

    CUDA_ERR_CHK( cudaMallocPitch( &d_img_tex_data, &img_tex_pitch, img_dim_x*sizeof(float), img_dim_y*img_num ) );

    assert( img_tex_pitch == GTX_1080TI_MEM_PITCH );
    fprintf( stderr, "Asking for %zd bytes allocates %zd bytes using pitch %zd. Available: %zd/%d\n",
             img_num*img_dim_x*img_dim_y*sizeof(float),
             img_num*img_tex_pitch*img_dim_y*sizeof(float),
             img_tex_pitch,
             GTX_1080TI_1DTEX_WIDTH - img_num*img_tex_pitch*img_dim_y*sizeof(float),
             GTX_1080TI_1DTEX_WIDTH );

    // generic resource descriptor
    cudaResourceDesc res_desc;
    memset(&res_desc, 0, sizeof(res_desc));
    res_desc.resType = cudaResourceTypePitch2D;
    res_desc.res.pitch2D.desc = cudaCreateChannelDesc<float>();
    res_desc.res.pitch2D.devPtr = d_img_tex_data;
    res_desc.res.pitch2D.width = img_dim_x;
    res_desc.res.pitch2D.height = img_dim_y*img_num;
    res_desc.res.pitch2D.pitchInBytes = img_tex_pitch;

    // texture descriptor
    cudaTextureDesc tex_desc;
    memset(&tex_desc, 0, sizeof(tex_desc));
    tex_desc.addressMode[0] = cudaAddressModeClamp;
    tex_desc.addressMode[1] = cudaAddressModeClamp;
    tex_desc.filterMode = cudaFilterModeLinear; // for linear interpolation (NOTE: this breaks normal integer indexing!)
    tex_desc.readMode = cudaReadModeElementType;
    tex_desc.normalizedCoords = false; // we want to index using [0;img_dim] rather than [0;1]

    // make sure there are no lingering errors
    GPU_ERR_CHK();
    fprintf(stderr, "No CUDA error until now..\n");

    // create texture object
    cudaTextureObject_t img_tex_obj;
    CUDA_ERR_CHK( cudaCreateTextureObject(&img_tex_obj, &res_desc, &tex_desc, NULL) );

    fprintf(stderr, "bluppi\n");
}
$ nvcc -o t382 t382.cu
$ cuda-memcheck ./t382
========= CUDA-MEMCHECK
maximum 2D linear texture dimensions (width,height): 131072,65000
maximum number of images per texture: 855
We should be able to stuff 862 images into one texture.
We use 855 (more than 855 leads to a crash).
Asking for 19753920 bytes allocates 133079040 bytes using pitch 512. Available: 1138688/134217728
No CUDA error until now..
bluppi
========= ERROR SUMMARY: 0 errors
$

Regarding "c++ - Why doesn't my data fit into a CUDA texture object?", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/54310320/
