
tensorflow - Keras model gets worse when fine-tuning

Reposted. Author: 行者123. Updated: 2023-12-05 04:52:31

I am trying to follow the fine-tuning steps described at https://www.tensorflow.org/tutorials/images/transfer_learning#create_the_base_model_from_the_pre-trained_convnets to obtain a trained binary segmentation model.

I created an encoder-decoder in which the encoder's weights are those of MobileNetV2 and are frozen with encoder.trainable = False. I then defined my decoder as described in the tutorial and trained the network for 300 epochs with a learning rate of 0.005. In the last few epochs I get the following loss values and Jaccard indices:
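The frozen-encoder setup described above can be sketched as follows. This is a minimal sketch with several assumptions: a 224x224 RGB input, weights=None instead of the pretrained ImageNet weights so it runs offline, and a one-layer placeholder decoder rather than the question's actual decoder:

```python
import tensorflow as tf

# Encoder: MobileNetV2 backbone, fully frozen (weights=None here only so
# the sketch runs without downloading the ImageNet weights).
encoder = tf.keras.applications.MobileNetV2(
    input_shape=(224, 224, 3), include_top=False, weights=None)
encoder.trainable = False

inputs = tf.keras.Input(shape=(224, 224, 3))
# training=False keeps the encoder's BatchNorm layers in inference mode,
# as the transfer-learning tutorial recommends.
x = encoder(inputs, training=False)
# Placeholder decoder: one transposed conv upsampling 7x7 back to 224x224,
# emitting 2-channel logits (to match from_logits=True in the loss).
outputs = tf.keras.layers.Conv2DTranspose(
    2, 3, strides=32, padding="same")(x)
model = tf.keras.Model(inputs, outputs)

model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=0.005),
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    metrics=["accuracy"])
```

With the encoder frozen, only the decoder's parameters show up as trainable in model.summary().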

Epoch 297/300
55/55 [==============================] - 85s 2s/step - loss: 0.2443 - jaccard_sparse3D: 0.5556 - accuracy: 0.9923 - val_loss: 0.0440 - val_jaccard_sparse3D: 0.3172 - val_accuracy: 0.9768
Epoch 298/300
55/55 [==============================] - 75s 1s/step - loss: 0.2437 - jaccard_sparse3D: 0.5190 - accuracy: 0.9932 - val_loss: 0.0422 - val_jaccard_sparse3D: 0.3281 - val_accuracy: 0.9776
Epoch 299/300
55/55 [==============================] - 78s 1s/step - loss: 0.2465 - jaccard_sparse3D: 0.4557 - accuracy: 0.9936 - val_loss: 0.0431 - val_jaccard_sparse3D: 0.3327 - val_accuracy: 0.9769
Epoch 300/300
55/55 [==============================] - 85s 2s/step - loss: 0.2467 - jaccard_sparse3D: 0.5030 - accuracy: 0.9923 - val_loss: 0.0463 - val_jaccard_sparse3D: 0.3315 - val_accuracy: 0.9740

I save all the weights of that model and then run the fine-tuning with the following steps:

from tensorflow.keras.optimizers import Adam
from tensorflow.keras.losses import SparseCategoricalCrossentropy

model.load_weights('my_pretrained_weights.h5')
model.trainable = True
model.compile(optimizer=Adam(learning_rate=0.00001, name='adam'),
              loss=SparseCategoricalCrossentropy(from_logits=True),
              metrics=[jaccard, "accuracy"])
model.fit(training_generator, validation_data=(val_x, val_y), epochs=5,
          validation_batch_size=2, callbacks=callbacks)

Suddenly, my model performs much worse than it did while training the decoder:

Epoch 1/5
55/55 [==============================] - 89s 2s/step - loss: 0.2417 - jaccard_sparse3D: 0.0843 - accuracy: 0.9946 - val_loss: 0.0079 - val_jaccard_sparse3D: 0.0312 - val_accuracy: 0.9992
Epoch 2/5
55/55 [==============================] - 90s 2s/step - loss: 0.1920 - jaccard_sparse3D: 0.1179 - accuracy: 0.9927 - val_loss: 0.0138 - val_jaccard_sparse3D: 7.1138e-05 - val_accuracy: 0.9998
Epoch 3/5
55/55 [==============================] - 95s 2s/step - loss: 0.2173 - jaccard_sparse3D: 0.1227 - accuracy: 0.9932 - val_loss: 0.0171 - val_jaccard_sparse3D: 0.0000e+00 - val_accuracy: 0.9999
Epoch 4/5
55/55 [==============================] - 94s 2s/step - loss: 0.2428 - jaccard_sparse3D: 0.1319 - accuracy: 0.9927 - val_loss: 0.0190 - val_jaccard_sparse3D: 0.0000e+00 - val_accuracy: 1.0000
Epoch 5/5
55/55 [==============================] - 97s 2s/step - loss: 0.1920 - jaccard_sparse3D: 0.1107 - accuracy: 0.9926 - val_loss: 0.0215 - val_jaccard_sparse3D: 0.0000e+00 - val_accuracy: 1.0000

Is there any known reason for this? Is it normal? Thanks in advance!

Best Answer

OK, I found what I do differently, which removes the need to recompile. I do not set encoder.trainable = False. What I do instead, shown in the code below, is equivalent:

for layer in encoder.layers:
    layer.trainable = False

Then train your model. Afterwards, you can unfreeze the encoder weights:

for layer in encoder.layers:
    layer.trainable = True
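One way to confirm that these loops actually change what gets trained is to compare the trainable parameter count before and after. A minimal sketch on a tiny stand-in model (not MobileNetV2, but the counting works the same way for any Keras model):

```python
import tensorflow as tf

# Small stand-in "encoder": two Dense layers, 16 + 10 = 26 parameters.
encoder = tf.keras.Sequential([
    tf.keras.Input(shape=(3,)),
    tf.keras.layers.Dense(4),
    tf.keras.layers.Dense(2),
])

def trainable_param_count(m):
    # Sum the element counts of all trainable weight tensors.
    return sum(int(tf.size(w)) for w in m.trainable_weights)

# Freeze every layer, as in the answer's first loop.
for layer in encoder.layers:
    layer.trainable = False
frozen = trainable_param_count(encoder)

# Unfreeze every layer, as in the answer's second loop.
for layer in encoder.layers:
    layer.trainable = True
unfrozen = trainable_param_count(encoder)

print(frozen, unfrozen)  # frozen should be 0, unfrozen the full count
```

model.summary() reports the same numbers in its "Trainable params" line, which is the check the answer suggests.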

You do not need to recompile the model. I tested this and it works as expected. You can verify it by printing the model summary before and after and looking at the number of trainable parameters.

As for changing the learning rate, I find it best to use the Keras callback ReduceLROnPlateau to adjust the learning rate automatically based on validation loss. I also recommend the EarlyStopping callback, which monitors validation loss and stops training when it fails to decrease for a consecutive number of "patience" epochs. Setting restore_best_weights=True loads the weights from the epoch with the lowest validation loss, so you do not have to save and then reload weights yourself. Set epochs to a large number to make sure this callback can trigger. The code I use is shown below:

es = tf.keras.callbacks.EarlyStopping(monitor="val_loss", patience=3,
                                      verbose=1, restore_best_weights=True)
rlronp = tf.keras.callbacks.ReduceLROnPlateau(monitor="val_loss", factor=0.5,
                                              patience=1, verbose=1)
callbacks = [es, rlronp]

In model.fit, set callbacks=callbacks.
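To see how these two callbacks cooperate over a run of epochs, here is a toy pure-Python simulation of their bookkeeping on a made-up val_loss sequence. This is not Keras' actual implementation (which also has min_delta, cooldown, and other options), just the core logic:

```python
# Toy simulation of ReduceLROnPlateau(factor=0.5, patience=1) plus
# EarlyStopping(patience=3, restore_best_weights=True).
# The val_loss values below are hypothetical.
val_losses = [0.30, 0.25, 0.26, 0.27, 0.24, 0.28, 0.29, 0.30]

lr = 1e-5                      # fine-tuning learning rate from the question
best, best_epoch = float("inf"), -1
lr_wait = es_wait = 0          # epochs since last improvement
stopped_epoch = None

for epoch, loss in enumerate(val_losses):
    if loss < best:
        best, best_epoch = loss, epoch
        lr_wait = es_wait = 0  # improvement resets both counters
    else:
        lr_wait += 1
        es_wait += 1
        if lr_wait > 1:        # ReduceLROnPlateau: patience=1 exceeded
            lr *= 0.5
            lr_wait = 0
        if es_wait >= 3:       # EarlyStopping: patience=3 reached
            stopped_epoch = epoch
            break

# With restore_best_weights=True, the weights kept at the end are those
# from best_epoch, not from stopped_epoch.
print(best_epoch, lr, stopped_epoch)
```

On this sequence the learning rate is halved twice (to 2.5e-6), training stops at epoch 7, and the restored weights are those from epoch 4, where validation loss was lowest.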

Regarding "tensorflow - Keras model gets worse when fine-tuning", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/66460418/
