
Why has learning stopped? I used the EarlyStopping callback in my TensorFlow model




I have a question.
I applied the EarlyStopping callback to my model to limit the number of epochs.
My understanding was that this callback automatically stops training when the metric I monitor stops improving, but training keeps stopping at points I don't want. I would be grateful if you could tell me what the cause is.



[my code]



from keras.callbacks import EarlyStopping, LearningRateScheduler, ModelCheckpoint

def early_stopping(patience=5, monitor="val_loss"):
    # Pass the patience argument through instead of hard-coding 5
    callback = EarlyStopping(monitor=monitor, patience=patience)
    return callback

def lr_scheduler(epoch=10, ratio=0.1):
    """After `epoch` epochs, multiply the learning rate by `ratio` (1/10) each epoch."""

    def lr_scheduler_func(e, lr):
        if e < epoch:
            return lr
        else:
            return lr * ratio

    callback = LearningRateScheduler(lr_scheduler_func)
    return callback

def checkpoint(
    filepath,
    monitor="val_accuracy",
    save_best_only=True,
    mode="max",
    save_weights_only=True,
):
    callback = ModelCheckpoint(
        filepath=filepath,
        monitor=monitor,  # metric to monitor
        verbose=1,
        save_best_only=save_best_only,  # keep only the best-performing model
        mode=mode,  # "max" because a higher val_accuracy is better
        save_weights_only=save_weights_only,  # save only the weights, not the full model
    )
    return callback

# Use distinct names so the imported callback classes are not shadowed
early_stopping_cb = early_stopping()
lr_scheduler_cb = lr_scheduler(20)
checkpoint_cb = checkpoint("./epic_models/DN_TL_230909_2.h5")
history = model.fit(
    train_data,
    epochs=50,
    validation_data=valid_data,
    callbacks=[early_stopping_cb, lr_scheduler_cb, checkpoint_cb],
)



[output]



Epoch 1/50
2023-09-09 22:42:38.446232: I tensorflow/core/grappler/optimizers/custom_graph_optimizer_registry.cc:113] Plugin optimizer for device_type GPU is enabled.
416/416 [==============================] - ETA: 0s - loss: 1.1175 - accuracy: 0.66372023-09-09 22:44:14.590122: I tensorflow/core/grappler/optimizers/custom_graph_optimizer_registry.cc:113] Plugin optimizer for device_type GPU is enabled.

Epoch 1: val_accuracy improved from -inf to 0.78931, saving model to ./epic_models/DN_TL_230909_2.h5
416/416 [==============================] - 124s 289ms/step - loss: 1.1175 - accuracy: 0.6637 - val_loss: 0.5924 - val_accuracy: 0.7893 - lr: 0.0010
Epoch 2/50
416/416 [==============================] - ETA: 0s - loss: 0.4517 - accuracy: 0.8430
Epoch 2: val_accuracy improved from 0.78931 to 0.83670, saving model to ./epic_models/DN_TL_230909_2.h5
416/416 [==============================] - 143s 344ms/step - loss: 0.4517 - accuracy: 0.8430 - val_loss: 0.4349 - val_accuracy: 0.8367 - lr: 0.0010
Epoch 3/50
416/416 [==============================] - ETA: 0s - loss: 0.3435 - accuracy: 0.8760
Epoch 3: val_accuracy improved from 0.83670 to 0.83972, saving model to ./epic_models/DN_TL_230909_2.h5
416/416 [==============================] - 164s 394ms/step - loss: 0.3435 - accuracy: 0.8760 - val_loss: 0.3872 - val_accuracy: 0.8397 - lr: 0.0010
Epoch 4/50
416/416 [==============================] - ETA: 0s - loss: 0.2851 - accuracy: 0.8946
Epoch 4: val_accuracy improved from 0.83972 to 0.86115, saving model to ./epic_models/DN_TL_230909_2.h5
416/416 [==============================] - 178s 428ms/step - loss: 0.2851 - accuracy: 0.8946 - val_loss: 0.3451 - val_accuracy: 0.8612 - lr: 0.0010
Epoch 5/50
416/416 [==============================] - ETA: 0s - loss: 0.2453 - accuracy: 0.9057
Epoch 5: val_accuracy improved from 0.86115 to 0.87534, saving model to ./epic_models/DN_TL_230909_2.h5
416/416 [==============================] - 188s 452ms/step - loss: 0.2453 - accuracy: 0.9057 - val_loss: 0.3179 - val_accuracy: 0.8753 - lr: 0.0010
Epoch 6/50
416/416 [==============================] - ETA: 0s - loss: 0.2240 - accuracy: 0.9113
Epoch 6: val_accuracy improved from 0.87534 to 0.88711, saving model to ./epic_models/DN_TL_230909_2.h5
416/416 [==============================] - 184s 444ms/step - loss: 0.2240 - accuracy: 0.9113 - val_loss: 0.2909 - val_accuracy: 0.8871 - lr: 0.0010
Epoch 7/50
416/416 [==============================] - ETA: 0s - loss: 0.2000 - accuracy: 0.9212
Epoch 7: val_accuracy did not improve from 0.88711
416/416 [==============================] - 191s 459ms/step - loss: 0.2000 - accuracy: 0.9212 - val_loss: 0.3114 - val_accuracy: 0.8775 - lr: 0.0010
Epoch 8/50
416/416 [==============================] - ETA: 0s - loss: 0.1830 - accuracy: 0.9280
Epoch 8: val_accuracy did not improve from 0.88711
416/416 [==============================] - 193s 463ms/step - loss: 0.1830 - accuracy: 0.9280 - val_loss: 0.3300 - val_accuracy: 0.8723 - lr: 0.0010
Epoch 9/50
416/416 [==============================] - ETA: 0s - loss: 0.1666 - accuracy: 0.9324
Epoch 9: val_accuracy did not improve from 0.88711
416/416 [==============================] - 198s 476ms/step - loss: 0.1666 - accuracy: 0.9324 - val_loss: 0.3219 - val_accuracy: 0.8787 - lr: 0.0010
Epoch 10/50
416/416 [==============================] - ETA: 0s - loss: 0.1579 - accuracy: 0.9335
Epoch 10: val_accuracy did not improve from 0.88711
416/416 [==============================] - 201s 483ms/step - loss: 0.1579 - accuracy: 0.9335 - val_loss: 0.3707 - val_accuracy: 0.8596 - lr: 0.0010
Epoch 11/50
416/416 [==============================] - ETA: 0s - loss: 0.1477 - accuracy: 0.9401
Epoch 11: val_accuracy did not improve from 0.88711
416/416 [==============================] - 202s 486ms/step - loss: 0.1477 - accuracy: 0.9401 - val_loss: 0.3081 - val_accuracy: 0.8832 - lr: 0.0010

Best answer

I'm sorry for asking this; I found the cause myself.
EarlyStopping measures patience against the BEST score seen so far, not against the previous epoch: if 5 consecutive epochs fail to improve on the best val_loss, training stops.
Thank you.

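This behavior can be checked with a small plain-Python simulation of the wait counter that EarlyStopping keeps (a simplified sketch that ignores min_delta and automatic mode detection; the val_loss values are copied from the training log above):

```python
# Simplified sketch of EarlyStopping's patience logic: training stops
# once `patience` consecutive epochs fail to beat the best value so far.
def stopped_epoch(val_losses, patience=5):
    best = float("inf")
    wait = 0
    for epoch, loss in enumerate(val_losses, start=1):
        if loss < best:      # new best value: reset the counter
            best = loss
            wait = 0
        else:                # compared against the BEST, not the last epoch
            wait += 1
            if wait >= patience:
                return epoch  # training would stop after this epoch
    return None              # ran all epochs without stopping

# val_loss values from the training log above
losses = [0.5924, 0.4349, 0.3872, 0.3451, 0.3179, 0.2909,
          0.3114, 0.3300, 0.3219, 0.3707, 0.3081]
print(stopped_epoch(losses))  # → 11: best is epoch 6; epochs 7-11 never beat it
```

Epoch 11's val_loss (0.3081) is better than epoch 10's, but it still does not beat the best score from epoch 6 (0.2909), so the counter reaches the patience of 5 and training stops.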

