
tensorflow - How to use a decaying learning rate with an estimator in tensorflow?


I am trying to use a LinearClassifier together with a GradientDescentOptimizer that has a decaying learning rate.

My code:

def main():
    # load data
    features = np.load('data/feature_data.npz')
    tx = features['arr_0']
    y = features['arr_1']

    ## Prepare logistic regression
    n_point, n_feat = tx.shape

    # Input functions
    def get_input_fn_from_numpy(tx, y, num_epochs=None, shuffle=True):
        # Preprocess data
        return tf.estimator.inputs.numpy_input_fn(
            x={"x": tx},
            y=y,
            num_epochs=num_epochs,
            shuffle=shuffle,
            batch_size=128
        )

    cols_label = "x"
    feature_cols = [tf.contrib.layers.real_valued_column(cols_label)]

    my_input_fn_train = get_input_fn_from_numpy(tx, y)

    model_dir = 'data/tmp/' + datetime.datetime.now().strftime("%m-%d_%H:%M:%S")
    global_step = tf.Variable(0, trainable=False)
    learning_rate = tf.train.exponential_decay(0.001*np.ones((20,1), dtype=np.float32), global_step, 10000, 0.95, staircase=False)
    regressor = tf.contrib.learn.LinearClassifier(feature_columns=feature_cols,
                                                  model_dir=model_dir,
                                                  optimizer=tf.train.GradientDescentOptimizer(learning_rate=learning_rate))

    regressor.fit(input_fn=get_input_fn_from_numpy(tx_train, y_train), steps=100000)
    results = regressor.evaluate(input_fn=my_input_fn_test)

I get the error:
  File "training.py", line 71, in <module>
main()
File "training.py", line 63, in main
regressor.fit(input_fn=get_input_fn_from_numpy(tx_train, y_train), steps=100000)
File "/lib/python3.6/site-packages/tensorflow/python/util/deprecation.py", line 296, in new_func
return func(*args, **kwargs)
File "/lib/python3.6/site-packages/tensorflow/contrib/learn/python/learn/estimators/estimator.py", line 458, in fit
loss = self._train_model(input_fn=input_fn, hooks=hooks)
File "/lib/python3.6/site-packages/tensorflow/contrib/learn/python/learn/estimators/estimator.py", line 958, in _train_model
model_fn_ops = self._get_train_ops(features, labels)
File "/lib/python3.6/site-packages/tensorflow/contrib/learn/python/learn/estimators/estimator.py", line 1165, in _get_train_ops
return self._call_model_fn(features, labels, model_fn_lib.ModeKeys.TRAIN)
File "/lib/python3.6/site-packages/tensorflow/contrib/learn/python/learn/estimators/estimator.py", line 1136, in _call_model_fn
model_fn_results = self._model_fn(features, labels, **kwargs)
File "/lib/python3.6/site-packages/tensorflow/contrib/learn/python/learn/estimators/linear.py", line 186, in _linear_model_fn
logits=logits)
File "/lib/python3.6/site-packages/tensorflow/contrib/learn/python/learn/estimators/head.py", line 854, in create_model_fn_ops
enable_centered_bias=self._enable_centered_bias)
File "/lib/python3.6/site-packages/tensorflow/contrib/learn/python/learn/estimators/head.py", line 649, in _create_model_fn_ops
batch_size, loss_fn, weight_tensor)
File "/lib/python3.6/site-packages/tensorflow/contrib/learn/python/learn/estimators/head.py", line 1911, in _train_op
train_op = train_op_fn(loss)
File "/lib/python3.6/site-packages/tensorflow/contrib/learn/python/learn/estimators/linear.py", line 179, in _train_op_fn
zip(grads, my_vars), global_step=global_step))
File "/lib/python3.6/site-packages/tensorflow/python/training/optimizer.py", line 456, in apply_gradients
update_ops.append(processor.update_op(self, grad))
File "/lib/python3.6/site-packages/tensorflow/python/training/optimizer.py", line 97, in update_op
return optimizer._apply_dense(g, self._v) # pylint: disable=protected-access
File "/lib/python3.6/site-packages/tensorflow/python/training/gradient_descent.py", line 50, in _apply_dense
use_locking=self._use_locking).op
File "/lib/python3.6/site-packages/tensorflow/python/training/gen_training_ops.py", line 370, in apply_gradient_descent
name=name)
File "/lib/python3.6/site-packages/tensorflow/python/framework/op_def_library.py", line 330, in apply_op
g = ops._get_graph_from_inputs(_Flatten(keywords.values()))
File "/lib/python3.6/site-packages/tensorflow/python/framework/ops.py", line 4262, in _get_graph_from_inputs
_assert_same_graph(original_graph_element, graph_element)
File "/lib/python3.6/site-packages/tensorflow/python/framework/ops.py", line 4201, in _assert_same_graph
"%s must be from the same graph as %s." % (item, original_item))
ValueError: Tensor("ExponentialDecay:0", shape=(20, 1), dtype=float32) must be from the same graph as Tensor("linear/x/weight/part_0:0", shape=(20, 1), dtype=float32_ref).

I am using TensorFlow 1.3.
If I replace the learning rate with a constant such as 0.01, it works. I have combined a decaying learning rate with a minimize op in the past, but here I am trying to use one inside LinearClassifier.
I realize something looks inconsistent, because I never connect the global step to the steps passed to fit, but I would like to understand how this is supposed to work. I suppose I could use a placeholder as suggested here, but I don't understand why I would not then need to write the update rule myself.

Any suggestions on how to solve this?

Best Answer

Have you tried obtaining the global_step by calling tf.train.get_global_step()? That should return the global_step used by your LinearClassifier model.

Instead of

global_step = tf.Variable(0, trainable=False)

use
global_step = tf.train.get_global_step()

This worked for me with my own Estimator class, where I use tf.train.MomentumOptimizer to minimize tf.nn.sparse_softmax_cross_entropy_with_logits.
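For reference, here is a minimal sketch of that pattern with a custom model_fn. The model, layer sizes, optimizer hyperparameters and model_dir below are illustrative assumptions, not code from the question; the point is only that tf.train.get_global_step() is called inside model_fn, so the decayed learning rate is created in the same graph as the model variables and the ValueError above does not occur.

import tensorflow as tf  # TensorFlow 1.x

def model_fn(features, labels, mode):
    # Simple linear model; two output classes are just for illustration.
    logits = tf.layers.dense(features["x"], units=2)
    loss = tf.reduce_mean(
        tf.nn.sparse_softmax_cross_entropy_with_logits(labels=labels, logits=logits))

    # Ask the Estimator for its own global step instead of creating a new
    # tf.Variable, so the decay schedule lives in the Estimator's graph.
    global_step = tf.train.get_global_step()
    learning_rate = tf.train.exponential_decay(
        0.001, global_step, decay_steps=10000, decay_rate=0.95, staircase=False)

    optimizer = tf.train.MomentumOptimizer(learning_rate, momentum=0.9)
    train_op = optimizer.minimize(loss, global_step=global_step)
    return tf.estimator.EstimatorSpec(mode=mode, loss=loss, train_op=train_op)

estimator = tf.estimator.Estimator(model_fn=model_fn, model_dir='data/tmp/decay_example')
# estimator.train(input_fn=..., steps=...) would then train with the decayed rate.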

Regarding "tensorflow - How to use a decaying learning rate with an estimator in tensorflow?", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/45844320/
