
python - How to implement early stopping and reduce learning rate on plateau in Tensorflow?


I want to implement two callbacks, EarlyStopping and ReduceLearningRateOnPlateau, for a neural network model built with tensorflow. (I am not using Keras.)

The sample code below shows how I implemented early stopping in my own training script; I am not sure whether it is correct.

import numpy as np

# A list to record the loss on the validation set
val_buff = []
# If early_stop == True, terminate the training process
early_stop = False

icount = 0
while icount < maxEpoches:

    # Shuffle the training set
    # Update the model with the Adam optimizer over the entire training set

    # Evaluate the loss on the validation set
    val_loss = self.sess.run(self.loss, feed_dict=feeddict_val)
    val_buff.append(val_loss)

    if icount % ep == 0:
        # Differences between consecutive validation losses
        diff = np.array([val_buff[ind] - val_buff[ind - 1] for ind in range(1, len(val_buff))])
        # Count how many times the validation loss went up
        bad = len(diff[diff > 0])
        if bad > 0.5 * len(diff):
            early_stop = True

        if early_stop:
            self.saver.save(self.sess, 'model.ckpt')
            raise OverFlow()
        val_buff = []

    icount += 1

When I train the model and track the loss on the validation set, I find that the loss oscillates up and down, which makes it hard to tell when the model starts to overfit.

Since EarlyStopping and ReduceLearningRateOnPlateau are very similar, how can I modify the code above to implement ReduceLearningRateOnPlateau?

Best Answer

An oscillating error/loss is common. The main difficulty in implementing an early-stopping or learning-rate-reduction rule is that the validation loss reacts relatively late. To deal with this, I would suggest the following rule: stop training when the best validation loss occurred at least N epochs ago.

max_stagnation = 5  # number of epochs without improvement to tolerate
best_val_loss, best_val_epoch = None, None
early_stop = False

for epoch in range(max_epochs):
    # train an epoch ...
    val_loss = evaluate()
    if best_val_loss is None or val_loss < best_val_loss:
        best_val_loss, best_val_epoch = val_loss, epoch
    if best_val_epoch < epoch - max_stagnation:
        # nothing has improved for a while
        early_stop = True
        break
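
The same bookkeeping extends naturally to ReduceLearningRateOnPlateau: instead of stopping when the best validation loss is too far in the past, you shrink the learning rate and keep training. Below is a minimal, framework-agnostic sketch (the class name PlateauLRScheduler and its parameters are illustrative, not an existing Keras or TensorFlow API); each epoch you pass it the validation loss and feed the learning rate it returns to your optimizer, e.g. through a learning-rate placeholder in graph-mode TensorFlow.

class PlateauLRScheduler:
    """Illustrative sketch: multiply the learning rate by `factor` after
    `patience` consecutive epochs without validation improvement."""

    def __init__(self, lr, factor=0.5, patience=5, min_lr=1e-6):
        self.lr = lr                # current learning rate
        self.factor = factor        # multiplier applied on a plateau
        self.patience = patience    # epochs of stagnation to tolerate
        self.min_lr = min_lr        # never reduce below this
        self.best_loss = None
        self.bad_epochs = 0         # epochs since the last improvement

    def step(self, val_loss):
        if self.best_loss is None or val_loss < self.best_loss:
            self.best_loss, self.bad_epochs = val_loss, 0
        else:
            self.bad_epochs += 1
            if self.bad_epochs > self.patience:
                self.lr = max(self.lr * self.factor, self.min_lr)
                self.bad_epochs = 0  # restart the count at the new rate
        return self.lr

# Usage inside the training loop (evaluate() as above):
scheduler = PlateauLRScheduler(lr=1e-3)
for epoch in range(max_epochs):
    # train an epoch, using scheduler.lr as the learning rate ...
    val_loss = evaluate()
    lr = scheduler.step(val_loss)

Combining this with the loop above gives you both behaviors: reduce the rate on the first plateaus, and stop entirely once even the reduced rate stops helping.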

Regarding python - How to implement early stopping and reduce learning rate on plateau in Tensorflow?, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/56106332/
