python - Learning rate greater than 0.001 causes an error

I tried to put together code from the Udacity Deep Learning class (Assignment 3 - Regularization) and the TensorFlow mnist_with_summaries.py tutorial. My code appears to run fine:

https://github.com/llevar/udacity_deep_learning/blob/master/multi-layer-net.py

But something strange is going on. The assignments use a learning rate of 0.5 and introduce exponential decay at some point. However, the code I put together only runs properly when I set the learning rate to 0.001 (with or without decay). If I set the initial rate to 0.1 or higher, I get the following error:

Traceback (most recent call last):
  File "/Users/siakhnin/Documents/workspace/udacity_deep_learning/multi-layer-net.py", line 175, in <module>
    summary, my_accuracy, _ = my_session.run([merged, accuracy, train_step], feed_dict=feed_dict)
  File "/usr/local/lib/python2.7/site-packages/tensorflow/python/client/session.py", line 340, in run
    run_metadata_ptr)
  File "/usr/local/lib/python2.7/site-packages/tensorflow/python/client/session.py", line 564, in _run
    feed_dict_string, options, run_metadata)
  File "/usr/local/lib/python2.7/site-packages/tensorflow/python/client/session.py", line 637, in _do_run
    target_list, options, run_metadata)
  File "/usr/local/lib/python2.7/site-packages/tensorflow/python/client/session.py", line 659, in _do_call
    e.code)
tensorflow.python.framework.errors.InvalidArgumentError: Nan in summary histogram for: layer1/weights/summaries/HistogramSummary
     [[Node: layer1/weights/summaries/HistogramSummary = HistogramSummary[T=DT_FLOAT, _device="/job:localhost/replica:0/task:0/cpu:0"](layer1/weights/summaries/HistogramSummary/tag, layer1/weights/Variable/read)]]
Caused by op u'layer1/weights/summaries/HistogramSummary', defined at:
  File "/Users/siakhnin/Documents/workspace/udacity_deep_learning/multi-layer-net.py", line 106, in <module>
    layer1, weights_1 = nn_layer(x, num_features, 1024, 'layer1')
  File "/Users/siakhnin/Documents/workspace/udacity_deep_learning/multi-layer-net.py", line 79, in nn_layer
    variable_summaries(weights, layer_name + '/weights')
  File "/Users/siakhnin/Documents/workspace/udacity_deep_learning/multi-layer-net.py", line 65, in variable_summaries
    tf.histogram_summary(name, var)
  File "/usr/local/lib/python2.7/site-packages/tensorflow/python/ops/logging_ops.py", line 113, in histogram_summary
    tag=tag, values=values, name=scope)
  File "/usr/local/lib/python2.7/site-packages/tensorflow/python/ops/gen_logging_ops.py", line 55, in _histogram_summary
    name=name)
  File "/usr/local/lib/python2.7/site-packages/tensorflow/python/ops/op_def_library.py", line 655, in apply_op
    op_def=op_def)
  File "/usr/local/lib/python2.7/site-packages/tensorflow/python/framework/ops.py", line 2154, in create_op
    original_op=self._default_original_op, op_def=op_def)
  File "/usr/local/lib/python2.7/site-packages/tensorflow/python/framework/ops.py", line 1154, in __init__
    self._traceback = _extract_stack()

If I set the rate to 0.001, the code runs to completion with a test accuracy of 0.94.

Using TensorFlow 0.8 RC0 on Mac OS X.
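For reference, the `nn_layer` and `variable_summaries` helpers named in the traceback follow the mnist_with_summaries.py pattern of attaching a histogram summary to each layer's weights. A minimal sketch of that pattern with the TF 0.8-era summary API (illustrative only, not the exact code from the linked script):

import tensorflow as tf

def variable_summaries(var, name):
    # Attach summaries to a tensor (TF 0.8-era summary ops).
    with tf.name_scope('summaries'):
        mean = tf.reduce_mean(var)
        tf.scalar_summary('mean/' + name, mean)
        # This is the op that fails with "Nan in summary histogram"
        # as soon as the summarized weights contain NaNs.
        tf.histogram_summary(name, var)

def nn_layer(input_tensor, input_dim, output_dim, layer_name):
    # A fully connected layer whose weights get a histogram summary.
    with tf.name_scope(layer_name):
        with tf.name_scope('weights'):
            weights = tf.Variable(tf.truncated_normal([input_dim, output_dim], stddev=0.1))
            variable_summaries(weights, layer_name + '/weights')
        biases = tf.Variable(tf.constant(0.1, shape=[output_dim]))
        activations = tf.nn.relu(tf.matmul(input_tensor, weights) + biases)
        return activations, weights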

Best Answer

It looks like your training is diverging (which is what gives you the infinities or NaNs). There is no simple explanation for why training diverges under some conditions and not others, but in general a higher learning rate makes divergence more likely.
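One common way to soften this is to start from a smaller rate and decay it over time instead of using a constant 0.5 or 0.1. A minimal sketch using the TF 0.8-era `tf.train.exponential_decay` API, with a toy loss and illustrative hyperparameters (not the actual values from the assignment or the linked script):

import tensorflow as tf

# Toy model so the sketch is self-contained.
x = tf.placeholder(tf.float32, [None, 10])
y_ = tf.placeholder(tf.float32, [None, 1])
w = tf.Variable(tf.truncated_normal([10, 1], stddev=0.1))
loss = tf.reduce_mean(tf.square(tf.matmul(x, w) - y_))

# Decayed learning rate: start at 0.1 and shrink it geometrically over training.
global_step = tf.Variable(0, trainable=False)
learning_rate = tf.train.exponential_decay(
    0.1,           # initial learning rate
    global_step,   # incremented by minimize() below
    1000,          # decay period in steps
    0.96,          # decay factor
    staircase=True)

train_step = tf.train.GradientDescentOptimizer(learning_rate).minimize(
    loss, global_step=global_step)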

Edit, April 17: You are getting a NaN in the histogram summary, which most likely means there is a NaN in your weights or activations. NaNs are caused by numerically incorrect computations, e.g. taking the log of 0 and multiplying the result by 0. There is also a small chance of a bug in the histogram code itself; to rule that out, turn off the summaries and see whether you can still train to good accuracy.

To turn off the summaries, replace this line:

merged = tf.merge_all_summaries()

with this:

merged = tf.constant(1)

and comment out this line:

test_writer.add_summary(summary)
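Put together, the run loop around the line from the traceback would then look roughly like this; the toy model, `accuracy`, and `feed_dict` below are stand-ins to keep the sketch self-contained, not the actual definitions from multi-layer-net.py:

import numpy as np
import tensorflow as tf

# Stand-ins for the real model so the sketch runs on its own.
x = tf.placeholder(tf.float32, [None, 2])
w = tf.Variable(tf.zeros([2, 1]))
loss = tf.reduce_mean(tf.square(tf.matmul(x, w)))
train_step = tf.train.GradientDescentOptimizer(0.001).minimize(loss)
accuracy = tf.constant(0.0)  # placeholder metric for the sketch

# Summaries switched off: merged is a dummy constant, not tf.merge_all_summaries().
merged = tf.constant(1)

with tf.Session() as my_session:
    my_session.run(tf.initialize_all_variables())
    feed_dict = {x: np.random.rand(4, 2).astype(np.float32)}
    summary, my_accuracy, _ = my_session.run(
        [merged, accuracy, train_step], feed_dict=feed_dict)
    # test_writer.add_summary(summary)  # commented out while summaries are off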

Regarding python - Learning rate greater than 0.001 causes an error, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/36666331/
