
tensorflow - Use tf.train.exponential_decay with a predefined estimator?


I'm trying to use tf.train.exponential_decay with a predefined estimator, but for some reason this is proving very difficult. Am I missing something here?

Here is my old code with a constant learning rate:

classifier = tf.estimator.DNNRegressor(
    feature_columns=f_columns,
    model_dir='./TF',
    hidden_units=[2, 2],
    optimizer=tf.train.ProximalAdagradOptimizer(
        learning_rate=0.50,
        l1_regularization_strength=0.001,
    ))

Now I tried adding this:
starter_learning_rate = 0.50
global_step = tf.Variable(0, trainable=False)
learning_rate = tf.train.exponential_decay(starter_learning_rate, global_step,
                                           10000, 0.96, staircase=True)

But what now?
  • estimator.predict() does not accept global_step, so it will be stuck at 0?
  • Even if I pass learning_rate to tf.train.ProximalAdagradOptimizer(), I get an error message:

  "ValueError: Tensor("ExponentialDecay:0", shape=(), dtype=float32) must be from the same graph as Tensor("dnn/hiddenlayer_0/kernel/part_0:0", shape=(62, 2), dtype=float32_ref)."



Your help is greatly appreciated. BTW, I'm using TF 1.6.

Best answer

You should create the optimizer inside a custom model_fn, and only when mode == tf.estimator.ModeKeys.TRAIN. That way the decayed learning-rate tensor is built in the same graph as the model's variables, which is exactly what the ValueError is complaining about: your exponential_decay tensor was created in the default graph, while the estimator builds its model in a graph of its own.

Here is some sample code:

def _model_fn(features, labels, mode, config):

    # xxxxxxxxx
    # xxxxxxxxx  (network construction; `loss`, `learning_rate`,
    # `chief_hooks` and `metrics` are defined in the elided part)

    assert mode == tf.estimator.ModeKeys.TRAIN

    # Created inside model_fn, so the decay tensor shares the model's graph.
    global_step = tf.train.get_global_step()
    decay_learning_rate = tf.train.exponential_decay(
        learning_rate, global_step, 100, 0.98, staircase=True)
    optimizer = tf.train.AdagradOptimizer(decay_learning_rate)

    # Run any pending update ops before taking the training step.
    update_ops = tf.get_collection(tf.GraphKeys.UPDATE_OPS)
    with tf.control_dependencies(update_ops):
        train_op = optimizer.minimize(loss, global_step=global_step)
    return tf.estimator.EstimatorSpec(
        mode, loss=loss, train_op=train_op,
        training_chief_hooks=chief_hooks, eval_metric_ops=metrics)
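
To connect this back to the question, below is a minimal, self-contained sketch of the same idea using the question's regressor settings (two hidden layers of 2 units, ProximalAdagrad, the 0.50/10000/0.96 decay schedule). It assumes TF 1.x, a single hypothetical numeric feature 'x', and a mean-squared-error loss; adapt the feature columns and loss to your data.

import tensorflow as tf

def model_fn(features, labels, mode):
    # Hypothetical feature column; replace with your own f_columns.
    net = tf.feature_column.input_layer(
        features, [tf.feature_column.numeric_column('x')])
    for units in [2, 2]:
        net = tf.layers.dense(net, units, activation=tf.nn.relu)
    predictions = tf.layers.dense(net, 1)

    if mode == tf.estimator.ModeKeys.PREDICT:
        return tf.estimator.EstimatorSpec(mode, predictions=predictions)

    labels = tf.reshape(labels, [-1, 1])  # match predictions' shape
    loss = tf.losses.mean_squared_error(labels, predictions)

    if mode == tf.estimator.ModeKeys.TRAIN:
        # Built here, inside model_fn, so every tensor shares one graph.
        global_step = tf.train.get_global_step()
        learning_rate = tf.train.exponential_decay(
            0.50, global_step, 10000, 0.96, staircase=True)
        optimizer = tf.train.ProximalAdagradOptimizer(
            learning_rate=learning_rate,
            l1_regularization_strength=0.001)
        train_op = optimizer.minimize(loss, global_step=global_step)
        return tf.estimator.EstimatorSpec(mode, loss=loss, train_op=train_op)

    return tf.estimator.EstimatorSpec(mode, loss=loss)  # EVAL

classifier = tf.estimator.Estimator(model_fn=model_fn, model_dir='./TF')

Because the decayed learning rate is now created inside model_fn, it lives in the same graph as the hidden-layer kernels, and classifier.train(input_fn=...) increments the global step automatically, so the schedule keeps advancing across training calls.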

Regarding "tensorflow - Use tf.train.exponential_decay with a predefined estimator?", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/49224141/
