machine-learning - Adding more layers in the TensorFlow MNIST tutorial makes accuracy drop


I'm new to deep learning. While working through the MNIST_SOFTMAX.py tutorial from Google's TensorFlow (https://github.com/tensorflow/tensorflow/blob/master/tensorflow/examples/tutorials/mnist/mnist_softmax.py), I added two new layers just to see what happens.

x = tf.placeholder(tf.float32, [None, 784])
W = tf.Variable(tf.zeros([784, 10]))
b = tf.Variable(tf.zeros([10]))
y = tf.matmul(x, W) + b

I changed the code above to:

x = tf.placeholder(tf.float32, [None, 784])
W1 = tf.Variable(tf.zeros([784, 256]))
W2 = tf.Variable(tf.zeros([256, 256]))
W3 = tf.Variable(tf.zeros([256, 10]))

B1 = tf.Variable(tf.zeros([256]))
B2 = tf.Variable(tf.zeros([256]))
B3 = tf.Variable(tf.zeros([10]))

Y1 = tf.matmul(x, W1) + B1
Y2 = tf.matmul(Y1, W2) + B2
Y3 = tf.matmul(Y2, W3) + B3
y = Y3

Accuracy dropped from 0.9188 to 0.1028. Why does it drop?

Best Answer

I think you need both symmetry breaking in the weights and a non-linear activation between the layers. Initializing every weight to zero makes all units in a layer identical, so they receive identical gradients and never learn different features; and without a nonlinearity, stacked linear layers collapse into a single linear map, so the extra layers add no representational power:

# Random initialization breaks the symmetry between units in each layer.
W = tf.Variable(tf.random_normal([784, 256], stddev=0.1))
W1 = tf.Variable(tf.random_normal([256, 256], stddev=0.1))
W2 = tf.Variable(tf.random_normal([256, 10], stddev=0.1))
b = tf.Variable(tf.zeros([256]))
b1 = tf.Variable(tf.zeros([256]))
b2 = tf.Variable(tf.zeros([10]))

# ReLU between layers keeps the stack from collapsing into one linear map.
y = tf.matmul(x, W) + b
y = tf.nn.relu(y)
y = tf.matmul(y, W1) + b1
y = tf.nn.relu(y)
y = tf.matmul(y, W2) + b2

Accuracy comes out at 0.9653.
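
For context, here is a minimal end-to-end sketch of the fixed model, assuming TensorFlow 1.x and the tutorial's input_data helper; the training setup (learning rate 0.5, 1000 steps with batches of 100) follows mnist_softmax.py rather than a tuned configuration:

import tensorflow as tf
from tensorflow.examples.tutorials.mnist import input_data

mnist = input_data.read_data_sets("MNIST_data/", one_hot=True)

x = tf.placeholder(tf.float32, [None, 784])
y_ = tf.placeholder(tf.float32, [None, 10])  # one-hot labels

# Random initialization breaks symmetry; ReLU supplies the nonlinearity.
W = tf.Variable(tf.random_normal([784, 256], stddev=0.1))
W1 = tf.Variable(tf.random_normal([256, 256], stddev=0.1))
W2 = tf.Variable(tf.random_normal([256, 10], stddev=0.1))
b = tf.Variable(tf.zeros([256]))
b1 = tf.Variable(tf.zeros([256]))
b2 = tf.Variable(tf.zeros([10]))

h1 = tf.nn.relu(tf.matmul(x, W) + b)
h2 = tf.nn.relu(tf.matmul(h1, W1) + b1)
y = tf.matmul(h2, W2) + b2  # logits; softmax is applied inside the loss

cross_entropy = tf.reduce_mean(
    tf.nn.softmax_cross_entropy_with_logits(labels=y_, logits=y))
train_step = tf.train.GradientDescentOptimizer(0.5).minimize(cross_entropy)

correct = tf.equal(tf.argmax(y, 1), tf.argmax(y_, 1))
accuracy = tf.reduce_mean(tf.cast(correct, tf.float32))

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for _ in range(1000):
        batch_xs, batch_ys = mnist.train.next_batch(100)
        sess.run(train_step, feed_dict={x: batch_xs, y_: batch_ys})
    print(sess.run(accuracy, feed_dict={x: mnist.test.images,
                                        y_: mnist.test.labels}))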

Regarding "machine-learning - Adding more layers in the TensorFlow MNIST tutorial makes accuracy drop", a similar question can be found on Stack Overflow: https://stackoverflow.com/questions/41993311/
