
python - How to utilize 100% of GPU memory with TensorFlow?


I have a 32 GB graphics card, and at the start of my script I see:

2019-07-11 01:26:19.985367: E tensorflow/stream_executor/cuda/cuda_driver.cc:936] failed to allocate 95.16G (102174818304 bytes) from device: CUDA_ERROR_OUT_OF_MEMORY
2019-07-11 01:26:19.988090: E tensorflow/stream_executor/cuda/cuda_driver.cc:936] failed to allocate 85.64G (91957338112 bytes) from device: CUDA_ERROR_OUT_OF_MEMORY
2019-07-11 01:26:19.990806: E tensorflow/stream_executor/cuda/cuda_driver.cc:936] failed to allocate 77.08G (82761605120 bytes) from device: CUDA_ERROR_OUT_OF_MEMORY
2019-07-11 01:26:19.993527: E tensorflow/stream_executor/cuda/cuda_driver.cc:936] failed to allocate 69.37G (74485440512 bytes) from device: CUDA_ERROR_OUT_OF_MEMORY
2019-07-11 01:26:19.996219: E tensorflow/stream_executor/cuda/cuda_driver.cc:936] failed to allocate 62.43G (67036893184 bytes) from device: CUDA_ERROR_OUT_OF_MEMORY
2019-07-11 01:26:19.998911: E tensorflow/stream_executor/cuda/cuda_driver.cc:936] failed to allocate 56.19G (60333203456 bytes) from device: CUDA_ERROR_OUT_OF_MEMORY
2019-07-11 01:26:20.001601: E tensorflow/stream_executor/cuda/cuda_driver.cc:936] failed to allocate 50.57G (54299881472 bytes) from device: CUDA_ERROR_OUT_OF_MEMORY
2019-07-11 01:26:20.004296: E tensorflow/stream_executor/cuda/cuda_driver.cc:936] failed to allocate 45.51G (48869892096 bytes) from device: CUDA_ERROR_OUT_OF_MEMORY
2019-07-11 01:26:20.006981: E tensorflow/stream_executor/cuda/cuda_driver.cc:936] failed to allocate 40.96G (43982901248 bytes) from device: CUDA_ERROR_OUT_OF_MEMORY
2019-07-11 01:26:20.009660: E tensorflow/stream_executor/cuda/cuda_driver.cc:936] failed to allocate 36.87G (39584608256 bytes) from device: CUDA_ERROR_OUT_OF_MEMORY
2019-07-11 01:26:20.012341: E tensorflow/stream_executor/cuda/cuda_driver.cc:936] failed to allocate 33.18G (35626147840 bytes) from device: CUDA_ERROR_OUT_OF_MEMORY

After that, TF uses 96% of my memory. Later, when it runs out of memory, it tries to allocate 65 GB:

tensorflow/stream_executor/cuda/cuda_driver.cc:936] failed to allocate 65.30G (70111285248 bytes) from device: CUDA_ERROR_OUT_OF_MEMORY

My question is: what about the remaining ~1300 MB (0.04 * 32480)? I would not mind using them before hitting OOM.

How can I make TF use 99.9% of the memory instead of 96%?
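For reference, here is a minimal sketch (not taken from the question itself) of the options TensorFlow exposes for controlling GPU memory allocation: per_process_gpu_memory_fraction and allow_growth on the session config. The 0.999 value is an illustrative assumption; whether it actually reclaims the last few percent depends on allocator and driver overhead.

import tensorflow as tf

# Minimal sketch, TF 1.x-style session config (also reachable via
# tf.compat.v1 on TF 2.x). per_process_gpu_memory_fraction caps how much
# of the card TensorFlow's allocator grabs up front; 0.999 is an
# illustrative value, not a guaranteed way to reach 99.9% real usage.
config = tf.compat.v1.ConfigProto()
config.gpu_options.per_process_gpu_memory_fraction = 0.999

# Alternative: start small and grow allocations on demand instead of
# reserving a fixed fraction at startup.
# config.gpu_options.allow_growth = True

sess = tf.compat.v1.Session(config=config)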

Update: nvidia-smi output

+-----------------------------------------------------------------------------+
| NVIDIA-SMI 418.40.04    Driver Version: 418.40.04    CUDA Version: 10.1     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  Tesla V100-SXM2...  On   | 00000000:00:16.0 Off |                    0 |
| N/A   66C    P0   293W / 300W |  31274MiB / 32480MiB |    100%      Default |

What I am asking about is the roughly 1206 MiB left unused (32480 MiB - 31274 MiB). Maybe they are reserved for a reason, or maybe they get used right before the OOM.

Best answer

Monitoring a GPU is not as simple as monitoring a CPU. Many parallel processes running at once can create a bottleneck for your GPU.

Various issues can come up, for example:
1. Data read/write speed
2. The CPU or the disk acting as a bottleneck

That said, I think 96% usage is quite normal. Not to mention that nvidia-smi only shows a single snapshot in time.

You can install gpustat and use it to monitor the GPU in real time (you should see it hit 100% during the OOM):

pip install gpustat

gpustat -i
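If you would rather poll from inside Python than use the gpustat CLI, here is a small sketch using the pynvml bindings (an optional extra dependency, installable as nvidia-ml-py3 or pynvml; not something the original answer mentions):

import time
import pynvml  # optional extra: pip install nvidia-ml-py3 (or pynvml)

pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)  # GPU 0, as in the nvidia-smi output above

# Print used/free memory once per second, roughly what `gpustat -i` shows.
for _ in range(10):
    mem = pynvml.nvmlDeviceGetMemoryInfo(handle)
    print("used: %.0f MiB / %.0f MiB (free: %.0f MiB)"
          % (mem.used / 2**20, mem.total / 2**20, mem.free / 2**20))
    time.sleep(1)

pynvml.nvmlShutdown()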

What can you do?
1. Process the data in parallel with a data_iterator so the GPU is fed faster (see the sketch after this list).
2. Increase the batch size. (I don't think this will help in your case, since you are already hitting OOM.)
3. You could overclock the GPU (not recommended).
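For point 1, here is a minimal tf.data sketch (assuming TF 2.x eager execution; the dummy tensors, preprocessing function and batch size are placeholders, not details from the question) showing how the input pipeline can be parallelized and prefetched so the GPU is not starved by the CPU or disk:

import tensorflow as tf

def preprocess(x):
    # Placeholder preprocessing; replace with your real decoding/augmentation.
    return tf.cast(x, tf.float32) / 255.0

# Dummy in-memory images standing in for the real input source.
images = tf.zeros([1024, 224, 224, 3], dtype=tf.uint8)

dataset = (tf.data.Dataset.from_tensor_slices(images)
           .map(preprocess, num_parallel_calls=tf.data.experimental.AUTOTUNE)
           .batch(32)                                  # illustrative batch size
           .prefetch(tf.data.experimental.AUTOTUNE))   # overlap CPU prep with GPU compute

for batch in dataset.take(1):
    print(batch.shape)  # (32, 224, 224, 3)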

Here is a good article on hardware acceleration.

Regarding "python - How to utilize 100% of GPU memory with TensorFlow?", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/56994738/
