
python - Why doesn't this neural network learn anything?


I am learning TensorFlow and implementing a simple neural network as described in MNIST for Beginners in the TensorFlow documentation. Here is the link. As expected, the accuracy is about 80-90%.

The same article is then followed by MNIST for Experts using ConvNet. Instead of implementing that, I decided to improve the beginner part. I understand neural networks and how they learn, and the fact that deep networks can perform better than shallow ones. I modified the original program from MNIST for Beginners to implement a neural network with 2 hidden layers of 16 neurons each.

It looks like this:

[Image: the neural network I built]

Code

import tensorflow as tf
from tensorflow.examples.tutorials.mnist import input_data

mnist = input_data.read_data_sets('MNIST_data', one_hot=True)

x = tf.placeholder(tf.float32, [None, 784], 'images')
y = tf.placeholder(tf.float32, [None, 10], 'labels')

# We are going to build 2 hidden layers with 16 neurons each

# All the weights in network
W0 = tf.Variable(dtype=tf.float32, name='InputLayerWeights', initial_value=tf.zeros([784, 16]))
W1 = tf.Variable(dtype=tf.float32, name='HiddenLayer1Weights', initial_value=tf.zeros([16, 16]))
W2 = tf.Variable(dtype=tf.float32, name='HiddenLayer2Weights', initial_value=tf.zeros([16, 10]))

# All the biases for the network
B0 = tf.Variable(dtype=tf.float32, name='HiddenLayer1Biases', initial_value=tf.zeros([16]))
B1 = tf.Variable(dtype=tf.float32, name='HiddenLayer2Biases', initial_value=tf.zeros([16]))
B2 = tf.Variable(dtype=tf.float32, name='OutputLayerBiases', initial_value=tf.zeros([10]))


def build_graph():
    """This function wires up all the biases and weights of the network
    and returns the last layer connections
    :return: the activation of the last layer of the network/output layer, without softmax
    """
    A1 = tf.nn.relu(tf.matmul(x, W0) + B0)
    A2 = tf.nn.relu(tf.matmul(A1, W1) + B1)
    return tf.matmul(A2, W2) + B2


def print_accuracy(sx, sy, tf_session):
    """This function prints the accuracy of a model at the time of invocation
    :return: None
    """
    correct_prediction = tf.equal(tf.argmax(y), tf.argmax(tf.nn.softmax(build_graph())))
    correct_prediction_float = tf.cast(correct_prediction, dtype=tf.float32)
    accuracy = tf.reduce_mean(correct_prediction_float)

    print(accuracy.eval(feed_dict={x: sx, y: sy}, session=tf_session))


y_predicted = build_graph()

cross_entropy = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(labels=y, logits=y_predicted))

model = tf.train.GradientDescentOptimizer(0.03).minimize(cross_entropy)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for _ in range(1000):
        batch_x, batch_y = mnist.train.next_batch(50)
        if _ % 100 == 0:
            print_accuracy(batch_x, batch_y, sess)
        sess.run(model, feed_dict={x: batch_x, y: batch_y})

The expected output should be better than what a single layer alone can achieve (assuming W0 has shape [784, 10] and B0 has shape [10]):

def build_graph():
    return tf.matmul(x, W0) + B0

Instead, the output shows that the network is not training at all. The accuracy never exceeded 20% in any iteration.

Output

Extracting MNIST_data/train-images-idx3-ubyte.gz
Extracting MNIST_data/train-labels-idx1-ubyte.gz
Extracting MNIST_data/t10k-images-idx3-ubyte.gz
Extracting MNIST_data/t10k-labels-idx1-ubyte.gz
0.1
0.1
0.1
0.1
0.1
0.1
0.1
0.1
0.1
0.1

My Question

What is wrong with the program above that it does not generalize at all? How can I improve it further without using a convolutional neural network?

Best Answer

Your main mistake is network symmetry, because you initialized all the weights to zero. As a result, the weights never get updated. Change them to small random numbers and the network will start learning. It is OK to initialize the biases with zeros.
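For illustration, a minimal sketch of that change (the stddev of 0.1 is an arbitrary choice; the complete code further down scales truncated_normal by 0.001 instead):

# Break the symmetry: start the weights at small random values instead of zeros.
W0 = tf.Variable(tf.truncated_normal([784, 16], stddev=0.1), name='InputLayerWeights')
W1 = tf.Variable(tf.truncated_normal([16, 16], stddev=0.1), name='HiddenLayer1Weights')
W2 = tf.Variable(tf.truncated_normal([16, 10], stddev=0.1), name='HiddenLayer2Weights')
# Biases are not affected by the symmetry problem, so zeros remain fine here.
B0 = tf.Variable(tf.zeros([16]), name='HiddenLayer1Biases')
B1 = tf.Variable(tf.zeros([16]), name='HiddenLayer2Biases')
B2 = tf.Variable(tf.zeros([10]), name='OutputLayerBiases')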

Another problem is purely technical: the print_accuracy function creates new nodes in the computation graph, and since you call it in a loop, the graph gets bloated and will eventually use up all the memory.
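As a minimal sketch of the fix, the accuracy ops can be built once at graph-construction time and merely evaluated inside the loop, reusing the y_predicted node that is already defined:

# Build the accuracy ops once; calling print_accuracy then adds no new graph nodes.
correct_prediction = tf.equal(tf.argmax(y, 1), tf.argmax(y_predicted, 1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))

def print_accuracy(sx, sy, tf_session):
    """Evaluates the pre-built accuracy op for the given batch."""
    print(tf_session.run(accuracy, feed_dict={x: sx, y: sy}))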

You may also want to play with the hyperparameters and make the network bigger to increase its capacity.
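As one hypothetical example of increasing capacity, the hidden layers could simply be widened (the 128 units below are an arbitrary pick, not a tuned value):

# Hypothetical wider variant: 128 units per hidden layer instead of 16.
hidden_units = 128
W0 = tf.Variable(tf.truncated_normal([784, hidden_units], stddev=0.1))
W1 = tf.Variable(tf.truncated_normal([hidden_units, hidden_units], stddev=0.1))
W2 = tf.Variable(tf.truncated_normal([hidden_units, 10], stddev=0.1))
B0 = tf.Variable(tf.zeros([hidden_units]))
B1 = tf.Variable(tf.zeros([hidden_units]))
B2 = tf.Variable(tf.zeros([10]))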

Edit: I also spotted a bug in your accuracy calculation. It should be

correct_prediction = tf.equal(tf.argmax(y, 1), tf.argmax(y_predicted, 1))

The complete code:

import tensorflow as tf
from tensorflow.examples.tutorials.mnist import input_data

x = tf.placeholder(tf.float32, [None, 784], 'images')
y = tf.placeholder(tf.float32, [None, 10], 'labels')

W0 = tf.Variable(dtype=tf.float32, name='InputLayerWeights', initial_value=tf.truncated_normal([784, 16]) * 0.001)
W1 = tf.Variable(dtype=tf.float32, name='HiddenLayer1Weights', initial_value=tf.truncated_normal([16, 16]) * 0.001)
W2 = tf.Variable(dtype=tf.float32, name='HiddenLayer2Weights', initial_value=tf.truncated_normal([16, 10]) * 0.001)

B0 = tf.Variable(dtype=tf.float32, name='HiddenLayer1Biases', initial_value=tf.ones([16]))
B1 = tf.Variable(dtype=tf.float32, name='HiddenLayer2Biases', initial_value=tf.ones([16]))
B2 = tf.Variable(dtype=tf.float32, name='OutputLayerBiases', initial_value=tf.ones([10]))

A1 = tf.nn.relu(tf.matmul(x, W0) + B0)
A2 = tf.nn.relu(tf.matmul(A1, W1) + B1)
y_predicted = tf.matmul(A2, W2) + B2
correct_prediction = tf.equal(tf.argmax(y, 1), tf.argmax(y_predicted, 1))
correct_prediction_float = tf.cast(correct_prediction, dtype=tf.float32)
accuracy = tf.reduce_mean(correct_prediction_float)
cross_entropy = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(labels=y, logits=y_predicted))
optimizer = tf.train.AdamOptimizer(0.001).minimize(cross_entropy)

mnist = input_data.read_data_sets('mnist', one_hot=True)
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for i in range(20000):
        batch_x, batch_y = mnist.train.next_batch(64)
        _, cost_val, acc_val = sess.run([optimizer, cross_entropy, accuracy], feed_dict={x: batch_x, y: batch_y})
        if i % 100 == 0:
            print('cost=%.3f accuracy=%.3f' % (cost_val, acc_val))

A similar question about "python - Why doesn't this neural network learn anything?" can be found on Stack Overflow: https://stackoverflow.com/questions/48027263/
