
machine-learning - How to get the loss function history when using tf.contrib.opt.ScipyOptimizerInterface


I need to record the loss history over time so I can plot it in a graph. Here is a skeleton of my code:

optimizer = tf.contrib.opt.ScipyOptimizerInterface(
    loss, method='L-BFGS-B',
    options={'maxiter': args.max_iterations, 'disp': print_iterations})
optimizer.minimize(sess, loss_callback=append_loss_history)

where append_loss_history is defined as:

def append_loss_history(**kwargs):
    global step
    if step % 50 == 0:
        loss_history.append(loss.eval())
    step += 1

When I look at the verbose output of ScipyOptimizerInterface, the loss actually decreases over time. But when I print loss_history, it is nearly the same value at every recorded step.

Quoting the docs: "Variables subject to optimization are updated in-place at the end of optimization" https://www.tensorflow.org/api_docs/python/tf/contrib/opt/ScipyOptimizerInterface . Is that why the loss isn't changing?

Best Answer

I think you have identified the problem yourself: the variables themselves are not modified until the end of the optimization (they are instead being fed to the session.run calls), so evaluating loss through that "back channel" reads the unmodified variables. Instead, use the fetches argument of optimizer.minimize to piggyback on the session.run calls that already have the feeds specified:

import tensorflow as tf

def print_loss(loss_evaled, vector_evaled):
    print(loss_evaled, vector_evaled)

vector = tf.Variable([7., 7.], name='vector')
loss = tf.reduce_sum(tf.square(vector))

optimizer = tf.contrib.opt.ScipyOptimizerInterface(
    loss, method='L-BFGS-B',
    options={'maxiter': 100})

with tf.Session() as session:
    tf.global_variables_initializer().run()
    optimizer.minimize(session,
                       loss_callback=print_loss,
                       fetches=[loss, vector])
    print(vector.eval())

(Adapted from the example in the documentation.) This prints the fetched tensors evaluated with the updated values:

98.0 [ 7.  7.]
79.201 [ 6.29289341 6.29289341]
7.14396e-12 [ -1.88996808e-06 -1.88996808e-06]
[ -1.88996808e-06 -1.88996808e-06]
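
To tie this back to the question's goal of recording a loss history, the same fetches mechanism can be combined with a loss_callback that appends to a list. The following is a minimal sketch, not part of the original answer: it assumes TensorFlow 1.x with tf.contrib available, reuses the loss_history and step names from the question, and uses a toy vector/loss only for illustration.

import tensorflow as tf

loss_history = []
step = 0

def append_loss_history(loss_evaled):
    # loss_evaled is the fetched value of `loss`, evaluated with the candidate
    # values the optimizer feeds at this step (not the stale variable values).
    global step
    if step % 50 == 0:
        loss_history.append(loss_evaled)
    step += 1

vector = tf.Variable([7., 7.], name='vector')  # toy problem, for illustration only
loss = tf.reduce_sum(tf.square(vector))

optimizer = tf.contrib.opt.ScipyOptimizerInterface(
    loss, method='L-BFGS-B', options={'maxiter': 100})

with tf.Session() as session:
    tf.global_variables_initializer().run()
    optimizer.minimize(session,
                       loss_callback=append_loss_history,
                       fetches=[loss])
    print(loss_history)

Because fetches=[loss] has a single element, the callback receives one positional argument per call, so the recorded values track the decreasing loss rather than re-reading the unmodified variable.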

Regarding machine-learning - How to get the loss function history when using tf.contrib.opt.ScipyOptimizerInterface, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/44685228/
