
c++ - OpenCV 3.4 C++ CUDA acceleration takes more time than the CPU

Reposted. Author: 行者123. Updated: 2023-11-28 04:43:56

I am testing OpenCV GPU acceleration with CUDA, but the GPU is slower than the CPU. Is this specific to the median filter, or am I doing something wrong in my code? Why is even the pure processing time (without transfers) longer on the GPU than on the CPU?

Output:

Device 0:  "GeForce GT 330M"  1023Mb, sm_12 (not Fermi), 
48 cores, Driver/Runtime ver.6.50/6.50
Size of the Image: 512 x 512
GPU Time Includes up&download Times: 8531/100 = 85ms
GPU Time Includes only 'apply': 8307/100 = 83ms
CPU Time: 1855/100 = 18ms

Code:

#include <opencv2/opencv.hpp>
#include <opencv2/cudafilters.hpp>
#include <QElapsedTimer>
#include <iostream>

using namespace cv;
using namespace std;

void CPUvsGPU()
{
    QElapsedTimer timer;
    Mat cSrc;
    Mat cGray;
    cuda::GpuMat gGray;

    cuda::printShortCudaDeviceInfo(cuda::getDevice());
    cSrc = imread("baboon.jpg");
    cout << "Size of the Image: " << cSrc.size << endl;

    cvtColor(cSrc, cGray, COLOR_BGR2GRAY);
    gGray.upload(cGray);

    Mat cOut(cGray.size(), CV_8U);
    cuda::GpuMat gOut(gGray.size(), CV_8U);

    Ptr<cuda::Filter> mf = cuda::createMedianFilter(CV_8UC1, 9);

    mf->apply(gGray, gOut); // warm-up: don't measure the first operation on the GPU

    timer.start();
    for (int i = 0; i < 100; i++)
    {
        gGray.upload(cGray);
        mf->apply(gGray, gOut);
        gOut.download(cOut);
    }
    qint64 t = timer.elapsed(); // read the timer once so total and average agree
    cout << "GPU Time Includes up&download Times: " << t << "/100 = " << t / 100 << "ms" << endl;

    timer.start();
    for (int i = 0; i < 100; i++)
        mf->apply(gGray, gOut);
    t = timer.elapsed();
    cout << "GPU Time Includes only 'apply': " << t << "/100 = " << t / 100 << "ms" << endl;

    timer.start();
    for (int i = 0; i < 100; i++)
        medianBlur(cGray, cOut, 9);
    t = timer.elapsed();
    cout << "CPU Time: " << t << "/100 = " << t / 100 << "ms" << endl;
}

Best Answer

Take a look at this link; your GPU is listed among the legacy GPUs.

Also see GPU versions of OpenCV algorithms slower than CPU versions on my machine? and Why Opencv GPU code is slower than CPU? for other issues to consider when comparing. The speedup you get is not the same for every function: some gain only a modest improvement, others a very significant one.

Regarding "c++ - OpenCV 3.4 C++ CUDA acceleration takes more time than the CPU", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/49671853/
