
python - TensorFlow: Training does not improve accuracy


I have just started learning TensorFlow and wrote a model to practice on MNIST. I am following a book, but I have run into a problem; could you help me with it?

Here is my code, with the problem described inline. Thanks a lot!
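(The imports, the hyperparameter constants, and the mnist dataset object are not shown in the question. The setup below is an assumption added for context; the values follow common MNIST examples and may differ from the asker's actual code.)

# Assumed setup (not in the original question); the hyperparameter values
# are guesses typical of MNIST tutorials, not the asker's confirmed values.
import tensorflow as tf
from tensorflow.examples.tutorials.mnist import input_data

INPUT_NODE = 784              # 28 * 28 input pixels
OUTPUT_NODE = 10              # 10 digit classes
LAYER1_NODE = 500             # hidden-layer width
BATCH_SIZE = 100
LEARNING_RATE_BASE = 0.8
LEARNING_RATE_DECAY = 0.99
REGULARIZATION_RATE = 0.0001
TRAINING_STEPS = 30000
MOVING_AVERAGE_DECAY = 0.99

mnist = input_data.read_data_sets("/tmp/mnist_data", one_hot=True)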

# Placeholders for the input images and the one-hot labels.
x = tf.placeholder(tf.float32, [None, INPUT_NODE], name='input')
y_ = tf.placeholder(tf.float32, [None, OUTPUT_NODE], name='output')
# Weights and biases of a two-layer fully connected network.
weights1 = tf.Variable(tf.truncated_normal([INPUT_NODE, LAYER1_NODE], stddev=0.1))
biases1 = tf.Variable(tf.constant(0.1, shape=[LAYER1_NODE]))
weights2 = tf.Variable(tf.truncated_normal([LAYER1_NODE, OUTPUT_NODE], stddev=0.1))
biases2 = tf.Variable(tf.constant(0.1, shape=[OUTPUT_NODE]))

Next, y = inference(...) defines forward propagation without the moving-average model.

y = inference(x, None, weights1, biases1, weights2, biases2)
global_step = tf.Variable(0, trainable=False)
variable_averages = tf.train.ExponentialMovingAverage(MOVING_AVERAGE_DECAY, global_step)
variables_averages_op = variable_averages.apply(tf.trainable_variables())

Next, average_y = inference(...) defines forward propagation using the moving-average model.

average_y = inference(x, variable_averages, weights1, biases1, weights2, biases2)
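(The inference helper itself is not included in the question. Based on how it is called above, it presumably looks like the sketch below: when an averaging class is passed, it reads the exponential-moving-average shadow copies of the weights instead of the variables themselves. This is an assumption, not the asker's confirmed code.)

def inference(input_tensor, avg_class, weights1, biases1, weights2, biases2):
    # Plain forward pass: one ReLU hidden layer, linear output logits.
    if avg_class is None:
        layer1 = tf.nn.relu(tf.matmul(input_tensor, weights1) + biases1)
        return tf.matmul(layer1, weights2) + biases2
    # Same pass, but reading the moving-average shadow copies of the weights.
    layer1 = tf.nn.relu(
        tf.matmul(input_tensor, avg_class.average(weights1)) +
        avg_class.average(biases1))
    return (tf.matmul(layer1, avg_class.average(weights2)) +
            avg_class.average(biases2))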

cross_entropy = tf.nn.sparse_softmax_cross_entropy_with_logits(
    logits=y, labels=tf.argmax(y_, 1))
cross_entropy_mean = tf.reduce_mean(cross_entropy)
regularizer = tf.contrib.layers.l2_regularizer(REGULARIZATION_RATE)
regularization = regularizer(variable_averages.average(weights1)) + \
                 regularizer(variable_averages.average(weights2))
loss = cross_entropy_mean + regularization
learning_rate = tf.train.exponential_decay(
    LEARNING_RATE_BASE,
    global_step,
    mnist.train.num_examples / BATCH_SIZE,
    LEARNING_RATE_DECAY)
train_step = tf.train.GradientDescentOptimizer(learning_rate).minimize(
    loss, global_step=global_step)
train_op = tf.group(train_step, variables_averages_op)
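(For reference: with the default staircase=False, tf.train.exponential_decay computes learning_rate = LEARNING_RATE_BASE * LEARNING_RATE_DECAY ** (global_step / decay_steps), where decay_steps = mnist.train.num_examples / BATCH_SIZE, so the rate shrinks smoothly by a factor of LEARNING_RATE_DECAY per epoch. The tf.group call then bundles the gradient step and the shadow-variable update into one train_op, so each sess.run(train_op) both updates the weights and advances their moving averages.)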

The problem is that when I use average_y to compute the accuracy, training does not seem to help at all:

After 0 training steps, acc in validation is 0.0742

After 1000 training steps, acc in validation is 0.0924

After 2000 training steps, acc in validation is 0.0924

When I use y instead of average_y, everything works fine. This really confuses me:

After 0 training steps, acc in validation is 0.0686

After 1000 training steps, acc in validation is 0.9716

After 2000 training steps, acc in validation is 0.9768

# correct_prediction = tf.equal(tf.argmax(y, 1), tf.argmax(y_, 1))
correct_prediction = tf.equal(tf.argmax(average_y, 1), tf.argmax(y_, 1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
with tf.Session() as sess:
    tf.global_variables_initializer().run()
    validate_feed = {
        x: mnist.validation.images,
        y_: mnist.validation.labels
    }
    test_feed = {
        x: mnist.test.images,
        y_: mnist.test.labels
    }
    for i in range(TRAINING_STEPS):
        if i % 1000 == 0:
            validate_acc = sess.run(accuracy, feed_dict=validate_feed)
            print("After %d training steps, acc in validation is %g" % (i, validate_acc))
        xs, ys = mnist.train.next_batch(BATCH_SIZE)
        sess.run([train_op, global_step], feed_dict={x: xs, y_: ys})
    test_acc = sess.run(accuracy, feed_dict=test_feed)
    print("After %d training steps, acc in test is %g" % (TRAINING_STEPS, test_acc))

Best Answer

From your code snippet, you are training the classification loss with respect to the y logits rather than average_y, so the part of the inference graph that uses the exponential moving averages is never actually optimized:

cross_entropy = tf.nn.sparse_softmax_cross_entropy_with_logits(
    logits=y, labels=tf.argmax(y_, 1))
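The shadow variables kept by ExponentialMovingAverage are never touched by the optimizer itself; they only trail the live variables by the decay factor. A minimal, self-contained sketch of that mechanic (TensorFlow 1.x assumed), added here to illustrate the answer:

# Minimal sketch of ExponentialMovingAverage shadow-variable mechanics.
import tensorflow as tf

v = tf.Variable(0.0)
ema = tf.train.ExponentialMovingAverage(decay=0.99)
maintain_avg_op = ema.apply([v])  # creates the shadow variable for v

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    sess.run(tf.assign(v, 5.0))   # "train" v to a new value
    sess.run(maintain_avg_op)     # shadow = 0.99 * 0.0 + 0.01 * 5.0
    print(sess.run([v, ema.average(v)]))  # [5.0, 0.05]: the average lags far behind

Because average_y is computed from these slowly moving shadow copies, evaluating with average_y can show little improvement even while the raw y logits are learning.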

Regarding "python - TensorFlow: Training does not improve accuracy", a similar question was found on Stack Overflow: https://stackoverflow.com/questions/50829631/
