
optimization - Stopping the gradient optimizer in TensorFlow


I'm trying to build a simple neural network in TensorFlow, and I have a question about gradient optimization.

This may be a naive question, but do I have to set a condition to stop the optimizer myself? Below is a sample printout from my network; you can see that after iteration 66 (batch gradient descent over all the data), the cost starts to increase again. So is it up to me to make sure the optimizer stops at that point? (Note: I haven't included all of the output here, but the cost starts to grow exponentially as the number of iterations increases.)

Thanks for any guidance.

iteration 64 with average cost of 654.621 and diff of 0.462708
iteration 65 with average cost of 654.364 and diff of 0.257202
iteration 66 with average cost of 654.36 and diff of 0.00384521
iteration 67 with average cost of 654.663 and diff of -0.302368
iteration 68 with average cost of 655.328 and diff of -0.665161
iteration 69 with average cost of 656.423 and diff of -1.09497
iteration 70 with average cost of 658.011 and diff of -1.58826

Best answer

That's right - TensorFlow's tf.train.Optimizer classes expose an operation that you can run to take one (gradient descent-style) step, but they do not monitor the current value of the cost or decide when to stop, so you may see the cost increasing once the network starts to overfit.
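In other words, the stopping condition lives in your own training loop. Below is a minimal sketch of such a loop with a simple patience-based early-stopping check, written against the TF1-style tf.train.Optimizer API that the answer refers to. The names loss, x, y_, train_x, and train_y are hypothetical stand-ins for your own loss tensor, placeholders, and training data, and the patience threshold is an arbitrary choice for illustration.

```python
import tensorflow as tf

# Hypothetical pieces of your graph/data (not from the original question):
#   loss     - scalar cost tensor of your network
#   x, y_    - input/label placeholders
#   train_x, train_y - full training batch
learning_rate = 0.01
patience = 5            # stop after this many iterations without improvement
max_iterations = 1000

train_op = tf.train.GradientDescentOptimizer(learning_rate).minimize(loss)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    best_cost = float("inf")
    bad_steps = 0
    for i in range(max_iterations):
        # One full-batch gradient step; the optimizer itself never checks the cost.
        _, cost = sess.run([train_op, loss], feed_dict={x: train_x, y_: train_y})
        print("iteration %d with average cost of %g" % (i, cost))
        if cost < best_cost:
            best_cost = cost
            bad_steps = 0
        else:
            bad_steps += 1
        if bad_steps >= patience:   # the stopping condition is our responsibility
            print("stopping: cost has not improved for %d iterations" % patience)
            break
```

In practice you would usually monitor a held-out validation cost rather than the training cost, since a rising validation cost is the clearer sign of overfitting.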

Regarding "optimization - Stopping the gradient optimizer in TensorFlow", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/34139673/
