
tensorflow - ResourceExhaustedError when declaring an Embedding layer (Keras)

Reposted · Author: 行者123 · Updated: 2023-11-30 09:17:12

I am building a neural network for NLP, starting with an embedding layer (using pre-trained embeddings). But when I declare the Embedding layer in Keras (TensorFlow backend), I get a ResourceExhaustedError:

ResourceExhaustedError: OOM when allocating tensor with shape[137043,300] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc
[[{{node embedding_4/random_uniform/RandomUniform}} = RandomUniform[T=DT_INT32, dtype=DT_FLOAT, seed=87654321, seed2=9524682, _device="/job:localhost/replica:0/task:0/device:GPU:0"](embedding_4/random_uniform/shape)]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.

I have searched Google: most ResourceExhaustedErrors happen during training, because the GPU's RAM is too small, and they can be fixed by reducing the batch size.

But in my case, I haven't even started training! This line is the problem:

q1 = Embedding(nb_words + 1,
               param['embed_dim'].value,
               weights=[word_embedding_matrix],
               input_length=param['sentence_max_len'].value)(question1)

Here, word_embedding_matrix is a matrix of shape (137043, 300), i.e. the pre-trained embeddings.

As far as I can tell, this shouldn't take up much memory (unlike here):

137043 * 300 * 4 bytes ≈ 157 MiB
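The arithmetic above can be checked directly (4 bytes per float32 element):

```python
# Estimate the memory footprint of a float32 embedding matrix.
vocab_size, embed_dim = 137043, 300
bytes_total = vocab_size * embed_dim * 4  # 4 bytes per float32
print(f"{bytes_total / 1024**2:.0f} MiB")  # -> 157 MiB
```

Even at ~157 MiB, the matrix itself is small compared to an 11 GB GPU.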

Here is the GPU setup in use:

+-----------------------------------------------------------------------------+
| NVIDIA-SMI 396.26                 Driver Version: 396.26                    |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  GeForce GTX 108...  Off  | 00000000:02:00.0 Off |                  N/A |
| 23%   32C    P8    16W / 250W |   6956MiB / 11178MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+
|   1  GeForce GTX 108...  Off  | 00000000:03:00.0 Off |                  N/A |
| 23%   30C    P8    16W / 250W |    530MiB / 11178MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+
|   2  GeForce GTX 108...  Off  | 00000000:82:00.0 Off |                  N/A |
| 23%   34C    P8    16W / 250W |    333MiB / 11178MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+
|   3  GeForce GTX 108...  Off  | 00000000:83:00.0 Off |                  N/A |
| 24%   46C    P2    58W / 250W |   4090MiB / 11178MiB |     23%      Default |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                       GPU Memory |
|  GPU       PID   Type   Process name                             Usage      |
|=============================================================================|
|    0      1087      C   uwsgi                                       1331MiB |
|    0      1088      C   uwsgi                                       1331MiB |
|    0      1089      C   uwsgi                                       1331MiB |
|    0      1090      C   uwsgi                                       1331MiB |
|    0      1091      C   uwsgi                                       1331MiB |
|    0      4176      C   /usr/bin/python3                             289MiB |
|    1      2631      C   ...e92/venvs/wordintent_venv/bin/python3.6   207MiB |
|    1      4176      C   /usr/bin/python3                             313MiB |
|    2      4176      C   /usr/bin/python3                             323MiB |
|    3      4176      C   /usr/bin/python3                             347MiB |
|    3     10113      C   python                                      1695MiB |
|    3     13614      C   python3                                     1347MiB |
|    3     14116      C   python                                       689MiB |
+-----------------------------------------------------------------------------+

Does anyone know why I am getting this exception?

Best Answer

Based on this link, configuring TensorFlow not to allocate all of the GPU's memory up front seems to solve the problem.

Running this before declaring the model layers fixed the issue:

import tensorflow as tf
from keras import backend as K

# Let GPU memory grow on demand instead of grabbing it all at startup,
# and cap this process at 30% of the GPU's memory.
config = tf.ConfigProto()
config.gpu_options.allow_growth = True
config.gpu_options.per_process_gpu_memory_fraction = 0.3
session = tf.Session(config=config)
K.set_session(session)
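Note that the nvidia-smi output above shows GPU 0 already carrying ~7 GB of uwsgi workers while GPUs 1 and 2 are mostly idle. A complementary workaround (a sketch, not part of the accepted fix) is to pin the process to a less-loaded GPU via the standard CUDA_VISIBLE_DEVICES environment variable, which must be set before TensorFlow is imported:

```python
import os

# Make only GPU 1 (mostly idle in the nvidia-smi listing above) visible
# to this process; TensorFlow will then see it as device GPU:0.
# This must happen before `import tensorflow`.
os.environ["CUDA_VISIBLE_DEVICES"] = "1"
```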

I will take some time to look at the other answers before accepting my own.

Regarding "tensorflow - ResourceExhaustedError when declaring an Embedding layer (Keras)", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/52547568/
