
python - Transfer learning: trying to retrain EfficientNet-B7 on an RTX 2070 that runs out of memory


This is the training code I am trying to run on a machine with 64 GB of RAM and an RTX 2070:

import tensorflow as tf
import efficientnet.keras as efn
from keras.layers import Dense
from keras.models import Model
from keras.preprocessing.image import ImageDataGenerator

# (train_data_input_folder, validation_data_input_folder, input_dim and
# model_output_path are defined elsewhere in the original script)

# cap TensorFlow at 70% of the GPU's memory
config = tf.ConfigProto()
config.gpu_options.per_process_gpu_memory_fraction = 0.7
tf.keras.backend.set_session(tf.Session(config=config))

model = efn.EfficientNetB7()
model.summary()

# create new output layer
output_layer = Dense(5, activation='sigmoid', name="retrain_output")(model.get_layer('top_dropout').output)
new_model = Model(model.input, outputs=output_layer)
new_model.summary()

# lock previous weights
for i, l in enumerate(new_model.layers):
    if i < 228:
        l.trainable = False

new_model.compile(loss='mean_squared_error', optimizer='adam')

batch_size = 5
samples_per_epoch = 30
epochs = 20

# generate train data
train_datagen = ImageDataGenerator(
    shear_range=0.2,
    zoom_range=0.2,
    horizontal_flip=True,
    validation_split=0)

train_generator = train_datagen.flow_from_directory(
    train_data_input_folder,
    target_size=(input_dim, input_dim),
    batch_size=batch_size,
    class_mode='categorical',
    seed=2019,
    subset='training')

validation_generator = train_datagen.flow_from_directory(
    validation_data_input_folder,
    target_size=(input_dim, input_dim),
    batch_size=batch_size,
    class_mode='categorical',
    seed=2019,
    subset='validation')

# samples_per_epoch and nb_worker are legacy Keras 1 names,
# accepted here through Keras 2's compatibility layer
new_model.fit_generator(
    train_generator,
    samples_per_epoch=samples_per_epoch,
    epochs=epochs,
    validation_steps=20,
    validation_data=validation_generator,
    nb_worker=24)

new_model.save(model_output_path)



Exception:

2019-11-17 08:52:52.903583: I tensorflow/stream_executor/dso_loader.cc:152] successfully opened CUDA library libcublas.so.10.0 locally
...
2019-11-17 08:53:24.713020: I tensorflow/core/common_runtime/bfc_allocator.cc:641] 110 Chunks of size 27724800 totalling 2.84GiB
2019-11-17 08:53:24.713024: I tensorflow/core/common_runtime/bfc_allocator.cc:641] 6 Chunks of size 38814720 totalling 222.10MiB
2019-11-17 08:53:24.713027: I tensorflow/core/common_runtime/bfc_allocator.cc:641] 23 Chunks of size 54000128 totalling 1.16GiB
2019-11-17 08:53:24.713031: I tensorflow/core/common_runtime/bfc_allocator.cc:641] 1 Chunks of size 73760000 totalling 70.34MiB
2019-11-17 08:53:24.713034: I tensorflow/core/common_runtime/bfc_allocator.cc:645] Sum Total of in-use chunks: 5.45GiB
2019-11-17 08:53:24.713040: I tensorflow/core/common_runtime/bfc_allocator.cc:647] Stats:
Limit:        5856749158
InUse:        5848048896
MaxInUse:     5848061440
NumAllocs:          6140
MaxAllocSize: 3259170816

2019-11-17 08:53:24.713214: W tensorflow/core/common_runtime/bfc_allocator.cc:271] ****************************************************************************************************
2019-11-17 08:53:24.713232: W tensorflow/core/framework/op_kernel.cc:1401] OP_REQUIRES failed at cwise_ops_common.cc:70 : Resource exhausted: OOM when allocating tensor with shape[5,1344,38,38] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc
Traceback (most recent call last):
  File "/home/naort/Desktop/deep-learning-data-preparation-tools/EfficientNet-Transfer-Learning-Boiler-Plate/model_retrain.py", line 76, in <module>
    nb_worker=24)
  File "/usr/local/lib/python3.6/dist-packages/keras/legacy/interfaces.py", line 91, in wrapper
    return func(*args, **kwargs)
  File "/usr/local/lib/python3.6/dist-packages/keras/engine/training.py", line 1732, in fit_generator
    initial_epoch=initial_epoch)
  File "/usr/local/lib/python3.6/dist-packages/keras/engine/training_generator.py", line 220, in fit_generator
    reset_metrics=False)
  File "/usr/local/lib/python3.6/dist-packages/keras/engine/training.py", line 1514, in train_on_batch
    outputs = self.train_function(ins)
  File "/home/naort/.local/lib/python3.6/site-packages/tensorflow/python/keras/backend.py", line 3076, in __call__
    run_metadata=self.run_metadata)
  File "/home/naort/.local/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1439, in __call__
    run_metadata_ptr)
  File "/home/naort/.local/lib/python3.6/site-packages/tensorflow/python/framework/errors_impl.py", line 528, in __exit__
    c_api.TF_GetCode(self.status.status))
tensorflow.python.framework.errors_impl.ResourceExhaustedError: OOM when allocating tensor with shape[5,1344,38,38] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc
    [[{{node training/Adam/gradients/AddN_387-0-TransposeNHWCToNCHW-LayoutOptimizer}}]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.

    [[{{node Mean}}]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.

Best Answer

Although EfficientNet models have lower parameter counts than comparable ResNe(X)t models, they still consume a lot of GPU memory. What you are seeing is an out-of-memory error on the GPU (the RTX 2070 has 8 GB), not in system memory (64 GB).

A B7 model, especially at full resolution, is beyond what you can realistically train on a single RTX 2070, even with many of the layers frozen.
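If you still want to try squeezing B7 onto this card, note that your per_process_gpu_memory_fraction = 0.7 setting is itself part of the ceiling: the allocator dump shows "Limit: 5856749158" (about 5.45 GiB), so TensorFlow never sees the card's full 8 GB. A rough sketch of the usual memory knobs follows; the allow_growth flag and the concrete values are illustrative, not a guaranteed fix:

import tensorflow as tf

# allocate GPU memory on demand rather than a fixed 70% slice
config = tf.ConfigProto()
config.gpu_options.allow_growth = True
tf.keras.backend.set_session(tf.Session(config=config))

# the OOM tensor shape [5, 1344, 38, 38] scales with batch size and input
# resolution, so both are effective levers (B7's native resolution is 600)
batch_size = 2
input_dim = 300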

Running the model in FP16 may help, and it will also make use of the TensorCores on your RTX card. Following https://medium.com/@noel_kennedy/how-to-use-half-precision-float16-when-training-on-rtx-cards-with-tensorflow-keras-d4033d59f9e4, try this:

import keras.backend as K

dtype='float16'
K.set_floatx(dtype)

# default is 1e-7 which is too small for float16. Without adjusting the epsilon, we will get NaN predictions because of divide by zero problems
K.set_epsilon(1e-4)
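
Note that K.set_floatx has to run before the model is built, because layer weights pick up the backend float type at the moment they are created. A minimal ordering sketch, reusing the efn import from your script:

import keras.backend as K
import efficientnet.keras as efn

K.set_floatx('float16')
K.set_epsilon(1e-4)  # keep the larger epsilon to avoid NaNs in float16

# only now build the model, so its weights are created as float16
model = efn.EfficientNetB7()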

Regarding python - Transfer learning: trying to retrain EfficientNet-B7 on an RTX 2070 that runs out of memory, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/58898253/
