
python - Tensorflow 2.0 unexpected OOM


I am trying to train my model with tensorflow.keras, but it fails with an OOM error after a few epochs. TensorFlow 2.0 has marked a lot of things as deprecated, and I don't know how I should go about diagnosing the problem.

The network consists of a series of Conv1D layers plus some self-attention layers, converting one sequence into another. The sequences are of variable length, but there is no correlation between sequence length and when the failure occurs; i.e. it can handle a 6-minute sequence fine and then fail on a 4-minute one.

with tensorflow.device('/device:gpu:0'):
    m2t = BuildGenerator()  # builds and returns the model
    m2t.compile(optimizer='adam', loss='mse')
    for epoch in range(1):
        for inout in InputGenerator(params):
            m2t.train_on_batch(inout[0], inout[1])

Things I have tried:

  1. Removed the self-attention layers. It still fails.
  2. Removed all but a handful of layers. It still fails.
  3. Padded all sequences to a constant length. It still fails.
  4. Used m2t.predict(inout[0]) instead of train_on_batch. It still fails, it just takes longer.
  5. Used tensorflow.summary.trace_export. It logged something, but the trace would not load in Chrome as the page HERE suggests.
  6. Looked at THIS answer, but with the TF 2.0 changes I don't know how to do it correctly (see the sketch after this list).
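
If the linked answer was about limiting GPU memory through a session config, the rough TF 2.0 equivalents live under tf.config.experimental. A minimal sketch, assuming a single visible GPU and that either memory growth or a hard memory cap is what you want to try (run before any op touches the GPU):

import tensorflow as tf

gpus = tf.config.experimental.list_physical_devices('GPU')
if gpus:
    # Option A: grow allocations on demand instead of grabbing all GPU memory up front.
    tf.config.experimental.set_memory_growth(gpus[0], True)

    # Option B (alternative to A, cannot be combined with it on the same GPU):
    # hard-cap how much GPU memory TensorFlow may use; the 4096 MB value is purely illustrative.
    # tf.config.experimental.set_virtual_device_configuration(
    #     gpus[0],
    #     [tf.config.experimental.VirtualDeviceConfiguration(memory_limit=4096)])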

There are no other calls into tensorflow or keras.

Edit: As requested, here is a sample error log. The exact error is slightly different each time.

Several of these appear, with some successful runs in between:

W tensorflow/core/common_runtime/bfc_allocator.cc:239] Allocator (GPU_0_bfc) ran out of memory trying to allocate 1.06GiB with freed_by_count=0. The caller indicates that this is not a failure, but may mean that there could be performance gains if more memory were available.

Then it starts with this, followed by a huge list of "# chunks of size ..." and "InUse ..." entries:

W tensorflow/core/common_runtime/bfc_allocator.cc:419] Allocator (GPU_0_bfc) ran out of memory trying to allocate 43.26MiB (rounded to 45360128).  Current allocation summary follows.
I tensorflow/core/common_runtime/bfc_allocator.cc:869] Bin (256): Total Chunks: 79, Chunks in use: 79. 19.8KiB allocated for chunks. 19.8KiB in use in bin. 2.2KiB client-requested in use in bin.
...
I tensorflow/core/common_runtime/bfc_allocator.cc:921] Sum Total of in-use chunks: 8.40GiB
I tensorflow/core/common_runtime/bfc_allocator.cc:923] total_region_allocated_bytes_: 9109728768 memory_limit_: 9109728789 available bytes: 21 curr_region_allocation_bytes_: 17179869184
I tensorflow/core/common_runtime/bfc_allocator.cc:929] Stats:
Limit: 9109728789
InUse: 9024084224
MaxInUse: 9024084224
NumAllocs: 38387
MaxAllocSize: 1452673536

W tensorflow/core/common_runtime/bfc_allocator.cc:424]


W tensorflow/core/framework/op_kernel.cc:1622] OP_REQUIRES failed at cwise_ops_common.cc:82 : Resource exhausted: OOM when allocating tensor with shape[1,45000,12,21] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc
Traceback (most recent call last):
  File ".\TrainGNet.py", line 380, in <module>
    m2t.train_on_batch(inout[0], inout[1])
  File "C:\Program Files (x86)\Microsoft Visual Studio\Shared\Python36_64\lib\site-packages\tensorflow_core\python\keras\engine\training.py", line 973, in train_on_batch
    class_weight=class_weight, reset_metrics=reset_metrics)
  File "C:\Program Files (x86)\Microsoft Visual Studio\Shared\Python36_64\lib\site-packages\tensorflow_core\python\keras\engine\training_v2_utils.py", line 264, in train_on_batch
    output_loss_metrics=model._output_loss_metrics)
  File "C:\Program Files (x86)\Microsoft Visual Studio\Shared\Python36_64\lib\site-packages\tensorflow_core\python\keras\engine\training_eager.py", line 311, in train_on_batch
    output_loss_metrics=output_loss_metrics))
  File "C:\Program Files (x86)\Microsoft Visual Studio\Shared\Python36_64\lib\site-packages\tensorflow_core\python\keras\engine\training_eager.py", line 268, in _process_single_batch
    grads = tape.gradient(scaled_total_loss, trainable_weights)
  File "C:\Program Files (x86)\Microsoft Visual Studio\Shared\Python36_64\lib\site-packages\tensorflow_core\python\eager\backprop.py", line 1014, in gradient
    unconnected_gradients=unconnected_gradients)
  File "C:\Program Files (x86)\Microsoft Visual Studio\Shared\Python36_64\lib\site-packages\tensorflow_core\python\eager\imperative_grad.py", line 76, in imperative_grad
    compat.as_str(unconnected_gradients.value))
  File "C:\Program Files (x86)\Microsoft Visual Studio\Shared\Python36_64\lib\site-packages\tensorflow_core\python\eager\backprop.py", line 138, in _gradient_function
    return grad_fn(mock_op, *out_grads)
  File "C:\Program Files (x86)\Microsoft Visual Studio\Shared\Python36_64\lib\site-packages\tensorflow_core\python\ops\math_grad.py", line 251, in _MeanGrad
    return math_ops.truediv(sum_grad, math_ops.cast(factor, sum_grad.dtype)), None
  File "C:\Program Files (x86)\Microsoft Visual Studio\Shared\Python36_64\lib\site-packages\tensorflow_core\python\util\dispatch.py", line 180, in wrapper
    return target(*args, **kwargs)
  File "C:\Program Files (x86)\Microsoft Visual Studio\Shared\Python36_64\lib\site-packages\tensorflow_core\python\ops\math_ops.py", line 1066, in truediv
    return _truediv_python3(x, y, name)
  File "C:\Program Files (x86)\Microsoft Visual Studio\Shared\Python36_64\lib\site-packages\tensorflow_core\python\ops\math_ops.py", line 1005, in _truediv_python3
    return gen_math_ops.real_div(x, y, name=name)
  File "C:\Program Files (x86)\Microsoft Visual Studio\Shared\Python36_64\lib\site-packages\tensorflow_core\python\ops\gen_math_ops.py", line 7950, in real_div
    _six.raise_from(_core._status_to_exception(e.code, message), None)
  File "<string>", line 3, in raise_from
tensorflow.python.framework.errors_impl.ResourceExhaustedError: OOM when allocating tensor with shape[1,45000,12,21] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc [Op:RealDiv] name: truediv/
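
For reference, the tensor shape in that error lines up with the allocation request in the allocator log above; a quick sanity check, assuming float32 (4 bytes per element):

# One [1, 45000, 12, 21] float32 tensor, as in the failing allocation above.
elements = 1 * 45000 * 12 * 21      # 11,340,000 elements
size_bytes = elements * 4           # 45,360,000 bytes
print(size_bytes / 2**20)           # ~43.26 MiB, matching the bfc_allocator request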

Edit 2 and 3: Here is a minimal example. For me it fails after printing "11". Edit 3 significantly reduced the sizes.

from tensorflow.keras.models import Model
from tensorflow.keras.layers import *
import tensorflow.keras.backend as K
import numpy as np
import tensorflow

def BuildGenerator():
    i = Input(shape=(None, 2,))

    # Inner model: softmax over 12*21 values, reshaped to (12, 21)
    n_input = 12*21
    to_n = Input(shape=(n_input,))
    s_n = Dense(12*21, activation='softmax')(to_n)
    s_n = Reshape((12, 21))(s_n)
    n_base = Model(inputs=[to_n], outputs=[s_n])

    b = Conv1D(n_input, 11, dilation_rate=1, padding='same', activation='relu', data_format='channels_last')(i)
    n = TimeDistributed(n_base)(b)

    return Model(inputs=[i], outputs=[n])

def InputGenerator():
    # Yields one (1, 600000, 2) input and one (1, 600000, 12, 21) target per step
    for iter in range(1000):
        print(iter)
        i = np.zeros((1, 10*60*1000, 2))
        n = np.zeros((1, 10*60*1000, 12, 21))
        yield ([i], [n])

with tensorflow.device('/device:gpu:0'):

    m2t = BuildGenerator()

    m2t.compile(optimizer='adam', loss='mse')

    for epoch in range(1):
        for inout in InputGenerator():
            m2t.train_on_batch(inout[0], inout[1])
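
Even at the reduced sizes from Edit 3, the target tensor passed to train_on_batch on each step is sizeable; a rough estimate, assuming it ends up as float32 on the GPU:

# Target tensor per step in the minimal example: shape (1, 10*60*1000, 12, 21).
timesteps = 10 * 60 * 1000                 # 600,000 timesteps
elements = 1 * timesteps * 12 * 21         # 151,200,000 elements
print(elements * 4 / 2**20)                # ~576.8 MiB per step as float32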

Best Answer

My simple suggestion:

  • Reduce the batch size to the minimum, starting from 1, and then increase it.

In most cases this helps (see the sketch below).
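
As an illustration only (the toy model and data here are placeholders, not from the question): with Keras the batch size is set either via the batch_size argument to fit, or implicitly by how many samples you pass to train_on_batch.

import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense

# Toy model and data, purely to show where batch_size is controlled.
model = Sequential([Dense(1, input_shape=(4,))])
model.compile(optimizer='adam', loss='mse')

x = np.zeros((8, 4))
y = np.zeros((8, 1))

# Start with the smallest possible batch size...
model.fit(x, y, batch_size=1, epochs=1, verbose=0)
# ...and only increase it while memory allows.
model.fit(x, y, batch_size=4, epochs=1, verbose=0)

# With train_on_batch, the batch size is simply the leading dimension of what you pass in:
model.train_on_batch(x[:1], y[:1])   # batch of 1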

Regarding python - Tensorflow 2.0 unexpected OOM, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/58369283/
