
python - Why do softmax and cross entropy computed separately give a different result from softmax_cross_entropy_with_logits?


I am getting a computer to predict handwritten digits from the MNIST dataset using the softmax function, and something strange is happening. The cost decreases over time and eventually reaches around 0.0038 (I use softmax_cross_entropy_with_logits() as the cost function), but the accuracy is as low as 33%. So I thought, "Hmm... I don't know what is going on there, but maybe doing the softmax and the cross entropy separately will produce a different result!" And boom! The accuracy went up to 89%. I have no idea why doing the softmax and the cross entropy separately gives such different results. I even looked here: difference between tensorflow tf.nn.softmax and tf.nn.softmax_cross_entropy_with_logits

So here is the code where I used softmax_cross_entropy_with_logits() as the cost function (accuracy: 33%):

import tensorflow as tf
import numpy as np
from tensorflow.examples.tutorials.mnist import input_data

mnist = input_data.read_data_sets("MNIST_data", one_hot=True)

X = tf.placeholder(shape=[None,784],dtype=tf.float32)
Y = tf.placeholder(shape=[None,10],dtype=tf.float32)

W1= tf.Variable(tf.random_normal([784,20]))
b1= tf.Variable(tf.random_normal([20]))
layer1 = tf.nn.softmax(tf.matmul(X,W1)+b1)

W2 = tf.Variable(tf.random_normal([20,10]))
b2 = tf.Variable(tf.random_normal([10]))

logits = tf.matmul(layer1,W2)+b2
hypothesis = tf.nn.softmax(logits) # just so I can figure out the accuracy

cost_i= tf.nn.softmax_cross_entropy_with_logits(logits=logits,labels=Y)
cost = tf.reduce_mean(cost_i)
optimizer = tf.train.GradientDescentOptimizer(learning_rate=0.01).minimize(cost)


batch_size = 100
train_epoch = 25
display_step = 1
with tf.Session() as sess:
    sess.run(tf.initialize_all_variables())

    for epoch in range(train_epoch):
        av_cost = 0
        total_batch = int(mnist.train.num_examples / batch_size)
        for batch in range(total_batch):
            batch_xs, batch_ys = mnist.train.next_batch(batch_size)
            sess.run(optimizer,feed_dict={X:batch_xs,Y:batch_ys})
            av_cost += sess.run(cost,feed_dict={X:batch_xs,Y:batch_ys})/total_batch
        if epoch % display_step == 0: # Softmax
            print ("Epoch:", '%04d' % (epoch + 1), "cost=", "{:.9f}".format(av_cost))
    print ("Optimization Finished!")

    correct_prediction = tf.equal(tf.argmax(hypothesis,1),tf.argmax(Y,1))
    accuray = tf.reduce_mean(tf.cast(correct_prediction,'float32'))
    print("Accuracy:",sess.run(accuray,feed_dict={X:mnist.test.images,Y:mnist.test.labels}))

And here is the one where I do the softmax and the cross entropy separately (accuracy: 89%):

import tensorflow as tf  #89 % accuracy one 
import numpy as np
from tensorflow.examples.tutorials.mnist import input_data

mnist = input_data.read_data_sets("MNIST_data", one_hot=True)

X = tf.placeholder(shape=[None,784],dtype=tf.float32)
Y = tf.placeholder(shape=[None,10],dtype=tf.float32)

W1= tf.Variable(tf.random_normal([784,20]))
b1= tf.Variable(tf.random_normal([20]))
layer1 = tf.nn.softmax(tf.matmul(X,W1)+b1)

W2 = tf.Variable(tf.random_normal([20,10]))
b2 = tf.Variable(tf.random_normal([10]))


#logits = tf.matmul(layer1,W2)+b2
#cost_i= tf.nn.softmax_cross_entropy_with_logits(logits=logits,labels=Y)

logits = tf.matmul(layer1,W2)+b2

hypothesis = tf.nn.softmax(logits)
cost = tf.reduce_mean(tf.reduce_sum(-Y*tf.log(hypothesis)))


optimizer = tf.train.GradientDescentOptimizer(learning_rate=0.01).minimize(cost)

batch_size = 100
train_epoch = 25
display_step = 1
with tf.Session() as sess:
    sess.run(tf.initialize_all_variables())

    for epoch in range(train_epoch):
        av_cost = 0
        total_batch = int(mnist.train.num_examples / batch_size)
        for batch in range(total_batch):
            batch_xs, batch_ys = mnist.train.next_batch(batch_size)
            sess.run(optimizer,feed_dict={X:batch_xs,Y:batch_ys})
            av_cost += sess.run(cost,feed_dict={X:batch_xs,Y:batch_ys})/total_batch
        if epoch % display_step == 0: # Softmax
            print ("Epoch:", '%04d' % (epoch + 1), "cost=", "{:.9f}".format(av_cost))
    print ("Optimization Finished!")

    correct_prediction = tf.equal(tf.argmax(hypothesis,1),tf.argmax(Y,1))
    accuray = tf.reduce_mean(tf.cast(correct_prediction,'float32'))
    print("Accuracy:",sess.run(accuray,feed_dict={X:mnist.test.images,Y:mnist.test.labels}))

Best answer

If you use tf.reduce_sum() in the upper example, the way you did in the lower one, you should be able to get similar results with both methods: cost = tf.reduce_mean(tf.reduce_sum(tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=Y))).

I increased the number of training epochs to 50 and got accuracies of 93.06% (tf.nn.softmax_cross_entropy_with_logits()) and 93.24% (softmax and cross entropy computed separately), so the results are quite similar.
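The difference between the two original scripts is mostly one of scale: softmax_cross_entropy_with_logits() already returns one cross-entropy value per example, so tf.reduce_mean() averages over the batch, whereas tf.reduce_sum(-Y*tf.log(hypothesis)) with no axis argument sums over both the batch and the class dimensions (the outer tf.reduce_mean() then acts on a scalar and changes nothing). The following minimal sketch illustrates this; it is not part of the original answer, it assumes TensorFlow 1.x as used above, and the dummy data and variable names are only for illustration.

import numpy as np
import tensorflow as tf

# dummy batch: 100 random "logits" and random one-hot labels (illustrative only)
logits_np = np.random.randn(100, 10).astype(np.float32)
labels_np = np.eye(10, dtype=np.float32)[np.random.randint(0, 10, size=100)]

logits = tf.constant(logits_np)
labels = tf.constant(labels_np)

# cost as in the first script: mean of the per-example cross entropy
mean_cost = tf.reduce_mean(
    tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=labels))

# cost as in the second script: reduce_sum with no axis sums over both the
# batch and the class dimensions, so the outer reduce_mean sees a scalar
sum_cost = tf.reduce_mean(
    tf.reduce_sum(-labels * tf.log(tf.nn.softmax(logits))))

with tf.Session() as sess:
    m, s = sess.run([mean_cost, sum_cost])
    print(m, s, s / m)  # the ratio is roughly the batch size (100)

Because the summed cost is about batch_size times larger, its gradients are scaled by the same factor, so with learning_rate=0.01 and batch_size=100 the second script effectively trains with a much larger step size than the first; this is consistent with the answer's observation that adding tf.reduce_sum() to the first script makes the two results comparable.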

For "python - Why do softmax and cross entropy computed separately give a different result from softmax_cross_entropy_with_logits?", a similar question can be found on Stack Overflow: https://stackoverflow.com/questions/43929501/
