
c++ - Strange results when calculating memory bandwidth from nvprof output


How do I calculate GPU memory bandwidth given:

  1. the data sample size (in GB), and
  2. the kernel execution time (from the nvprof output)?

GPU: GTX 1050 Ti
CUDA: 8.0
OS: Windows 10
IDE: Visual Studio 2015

Normally I would use this formula: bandwidth [GB/s] = data_size [GB] / average_time [s]

But when I apply the equation above to the get_mem_kernel() kernel, I get a result that cannot be right: 441.93 [GB/s]
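
That figure appears to follow directly from plugging the nvprof average for get_mem_kernel (listed further below) into the formula:

bandwidth = 100,000,000 bytes / 226.28 us ≈ 441.9 [GB/s]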

I think this result must be wrong, because the technical specification of the GTX 1050 Ti lists a global memory bandwidth of 112 [GB/s].

Where am I going wrong, or what am I not understanding?

Example code:

// cpp libs:
#include <iostream>
#include <sstream>
#include <fstream>
#include <iomanip>
#include <stdexcept>

// cuda libs:
#include <cuda_runtime.h>
#include <device_launch_parameters.h>

#define ERROR_CHECK(CHECK_) if (CHECK_ != cudaError_t::cudaSuccess) { std::cout << "cuda error" << std::endl; throw std::runtime_error("cuda error"); }

using data_type = double;

// ceiling division: rounds the grid size up so every element gets a thread
template <typename T> constexpr __forceinline__
T div_s(T dividend, T divisor)
{
    using P = double;
    return static_cast <T> (static_cast <P> (dividend + divisor - 1) / static_cast <P> (divisor));
}

// writes one value per element
__global__
void set_mem_kernel(const unsigned int size, data_type * const in_data)
{
    int idx = blockIdx.x * blockDim.x + threadIdx.x;
    if (idx < size)
    {
        in_data[idx] = static_cast <data_type> (idx);
    }
}

// reads one value per element
__global__
void get_mem_kernel(const unsigned int size, data_type * const in_data)
{
    int idx = blockIdx.x * blockDim.x + threadIdx.x;
    data_type val = 0;
    if (idx < size)
    {
        val = in_data[idx];
    }
}

// global object whose destructor resets the device even if main() throws
struct quit_program
{
public:
    ~quit_program()
    {
        try
        {
            ERROR_CHECK(cudaDeviceReset());
        }
        catch (...) {}
    }
} quit;

int main()
{
    unsigned int size = 12500000; // 100 MB of doubles
    size_t byte = size * sizeof(data_type);

    dim3 threads (256, 1, 1);
    dim3 blocks (div_s(size, threads.x), 1, 1);

    std::cout << size << std::endl;
    std::cout << byte << std::endl;
    std::cout << std::endl;

    std::cout << threads.x << std::endl;
    std::cout << blocks.x << std::endl;
    std::cout << std::endl;

    // data:
    data_type * d_data = nullptr;
    ERROR_CHECK(cudaMalloc(&d_data, byte));

    for (int i = 0; i < 20000; i++)
    {
        set_mem_kernel <<<blocks, threads>>> (size, d_data);
        ERROR_CHECK(cudaDeviceSynchronize());
        ERROR_CHECK(cudaGetLastError());

        get_mem_kernel <<<blocks, threads>>> (size, d_data);
        ERROR_CHECK(cudaDeviceSynchronize());
        ERROR_CHECK(cudaGetLastError());
    }

    // Exit:
    ERROR_CHECK(cudaFree(d_data));
    ERROR_CHECK(cudaDeviceReset());
    return EXIT_SUCCESS;
}
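
(Side note, not part of the original post: the kernel time, and from it an effective bandwidth, can also be measured inside the program with CUDA events instead of nvprof. A minimal sketch follows, reusing data_type and ERROR_CHECK from above; measure_set_bandwidth is a hypothetical helper, not something from the question.)

// Sketch only (assumption, not from the original code): time one launch of
// set_mem_kernel with CUDA events and derive an effective write bandwidth.
float measure_set_bandwidth(unsigned int size, data_type * d_data,
                            dim3 blocks, dim3 threads)
{
    cudaEvent_t start, stop;
    ERROR_CHECK(cudaEventCreate(&start));
    ERROR_CHECK(cudaEventCreate(&stop));

    ERROR_CHECK(cudaEventRecord(start));
    set_mem_kernel <<<blocks, threads>>> (size, d_data);
    ERROR_CHECK(cudaEventRecord(stop));
    ERROR_CHECK(cudaEventSynchronize(stop));

    float ms = 0.0f;
    ERROR_CHECK(cudaEventElapsedTime(&ms, start, stop)); // elapsed time in milliseconds

    ERROR_CHECK(cudaEventDestroy(start));
    ERROR_CHECK(cudaEventDestroy(stop));

    // bytes actually written (one double per element) divided by seconds -> GB/s
    double bytes = static_cast <double> (size) * sizeof(data_type);
    return static_cast <float> (bytes / (ms * 1e-3) / 1e9);
}

Called with the same blocks and threads as in main(), it should report roughly the ~103 GB/s that the nvprof average for set_mem_kernel implies.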

nvprof results:

D:\Dev\visual_studio\nevada_test_site\x64\Release>nvprof ./cuda_test.exe
12500000
100000000

256
48829

==10508== NVPROF is profiling process 10508, command: ./cuda_test.exe
==10508== Warning: Unified Memory Profiling is not supported on the current configuration because a pair of devices without peer-to-peer support is detected on this multi-GPU setup. When peer mappings are not available, system falls back to using zero-copy memory. It can cause kernels, which access unified memory, to run slower. More details can be found at: http://docs.nvidia.com/cuda/cuda-c-programming-guide/index.html#um-managed-memory
==10508== Profiling application: ./cuda_test.exe
==10508== Profiling result:
Time(%)      Time     Calls       Avg       Min       Max  Name
 81.12%  19.4508s     20000  972.54us  971.22us  978.32us  set_mem_kernel(unsigned int, double*)
 18.88%  4.52568s     20000  226.28us  224.45us  271.14us  get_mem_kernel(unsigned int, double*)

==10508== API calls:
Time(%)      Time     Calls       Avg       Min       Max  Name
 97.53%  26.8907s     40000  672.27us  247.98us  1.7566ms  cudaDeviceSynchronize
  1.61%  443.32ms     40000  11.082us  5.8340us  183.43us  cudaLaunch
  0.51%  141.10ms         1  141.10ms  141.10ms  141.10ms  cudaMalloc
  0.16%  43.648ms         1  43.648ms  43.648ms  43.648ms  cudaDeviceReset
  0.08%  22.182ms     80000     277ns       0ns  121.07us  cudaSetupArgument
  0.06%  15.437ms     40000     385ns       0ns  24.433us  cudaGetLastError
  0.05%  12.929ms     40000     323ns       0ns  57.253us  cudaConfigureCall
  0.00%  1.1932ms        91  13.112us       0ns  734.09us  cuDeviceGetAttribute
  0.00%  762.17us         1  762.17us  762.17us  762.17us  cudaFree
  0.00%  359.93us         1  359.93us  359.93us  359.93us  cuDeviceGetName
  0.00%  8.3880us         1  8.3880us  8.3880us  8.3880us  cuDeviceTotalMem
  0.00%  2.5520us         3     850ns     364ns  1.8230us  cuDeviceGetCount
  0.00%  1.8240us         3     608ns     365ns  1.0940us  cuDeviceGet

CUDA Samples\v8.0\1_Utilities\bandwidthTest results:

[CUDA Bandwidth Test] - Starting...
Running on...

Device 0: GeForce GTX 1050 Ti
Quick Mode

Host to Device Bandwidth, 1 Device(s)
 PINNED Memory Transfers
   Transfer Size (Bytes)   Bandwidth(MB/s)
   33554432                11038.4

Device to Host Bandwidth, 1 Device(s)
 PINNED Memory Transfers
   Transfer Size (Bytes)   Bandwidth(MB/s)
   33554432                11469.6

Device to Device Bandwidth, 1 Device(s)
 PINNED Memory Transfers
   Transfer Size (Bytes)   Bandwidth(MB/s)
   33554432                95214.0

Result = PASS

NOTE: The CUDA Samples are not meant for performance measurements. Results may vary when GPU Boost is enabled.

Best answer

The compiler was optimizing out the memory reads, as Robert Crovella pointed out. Thanks for the help - I would never have guessed it.

In detail:
Because val is never used, the compiler removed the variable and, by extension, the memory read itself. That is why get_mem_kernel appears to finish in ~226 us (an impossible ~442 GB/s), while set_mem_kernel, which really does write the 100 MB, needs ~972 us (about 103 GB/s, consistent with the 112 GB/s specification).
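
A common way to keep the read from being optimized away (a sketch under that assumption, not the code from the answer; out_sink is a hypothetical extra device buffer) is to make the loaded value observable, for example with a conditional store the compiler cannot prove dead:

// Sketch only: get_mem_kernel rewritten so the load cannot be eliminated.
// The store is never taken for this data (all stored values are >= 0),
// but since in_data is unknown at compile time the load must stay.
__global__
void get_mem_kernel_checked(const unsigned int size, const data_type * const in_data,
                            data_type * const out_sink)
{
    int idx = blockIdx.x * blockDim.x + threadIdx.x;
    if (idx < size)
    {
        data_type val = in_data[idx];
        if (val < static_cast <data_type> (0))
        {
            out_sink[idx] = val; // keeps the read live without adding real traffic
        }
    }
}

Because the branch is never taken at run time, it adds no extra memory writes, yet the loaded value now feeds a visible result, so the timing reflects an actual 100 MB read.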

Regarding "c++ - Strange results when calculating memory bandwidth from nvprof output", a similar question can be found on Stack Overflow: https://stackoverflow.com/questions/46064030/
