python - Backpropagation in Tensorflow - accuracy does not improve with increasing hidden layer size

I am using the "MNIST for ML Beginners" code from the TensorFlow site as a template for a network that predicts a single output. After roughly 30,000 training steps, I changed the hidden layer size from 10 to 100 and the test accuracy did not change (0.14375 in both cases). I am wondering whether there is a problem with the way I constructed the input variables or with my implementation. I would appreciate it if someone could take a look:

import numpy as np
import tensorflow as tf
train_data = np.genfromtxt("PERSON1RATING_TRAINING.txt", delimiter=" ")
train_input = train_data[:, :10]
train_input = train_input.reshape(29440, 10)
X_train = tf.placeholder(tf.float32, [29440, 10])

train_target = train_data[:, 10]
train_target = train_target.reshape(29440, 1)
Y_train = tf.placeholder(tf.float32, [29440, 1])

test_data = np.genfromtxt("PERSON1RATING_TEST.txt", delimiter=" ")
test_input = test_data[:, :10]
test_input = test_input.reshape(5120, 10)
X_test = tf.placeholder(tf.float32, [5120, 10])

test_target = test_data[:, 10]
test_target = test_target.reshape(5120, 1)
Y_test = tf.placeholder(tf.float32, [5120, 1])

W_1 = tf.Variable(tf.zeros([10, 100]))
b = tf.Variable(tf.zeros([100]))
H = tf.nn.softmax(tf.matmul(X_train, W_1) + b)
H_test = tf.nn.softmax(tf.matmul(X_test, W_1) + b)

W_2 = tf.Variable(tf.zeros([100, 1]))
Y = tf.nn.softmax(tf.matmul(H, W_2))
Y_obt_test = tf.nn.softmax(tf.matmul(H_test, W_2))

cross_entropy = tf.reduce_mean(-tf.reduce_sum(Y_train * tf.log(Y),
                                              reduction_indices=[1]))
train_step = tf.train.GradientDescentOptimizer(0.05).minimize(cross_entropy)
sess = tf.InteractiveSession()
tf.global_variables_initializer().run()

for _ in range(29440):
    sess.run(train_step, feed_dict={X_train: train_input,
                                    Y_train: train_target})

Y = tf.nn.sigmoid(Y)
correct_prediction = tf.equal(tf.round(Y_obt_test), Y_test)
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
print(sess.run(accuracy, feed_dict={X_test: test_input, Y_test: test_target}))

Best Answer

Hopefully these changes help:

# use random_normal initializer for all weights, not biases
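# With all-zero weights every hidden unit computes the same output and
# receives an identical gradient, so the units never differentiate and
# widening the layer from 10 to 100 changes nothing (the same applies to W_2)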
# W_1 = tf.Variable(tf.zeros([10, 100]))
W_1 = tf.Variable(tf.random_normal([10, 100]))

# No softmax for the training logits
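# (applying softmax here and again inside the loss op squashes the gradients
# twice; the *_with_logits losses expect raw, un-normalized scores)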
# Y = tf.nn.softmax(tf.matmul(H, W_2))
Y = tf.matmul(H, W_2)
cross_entropy = tf.nn.sparse_softmax_cross_entropy_with_logits(logits=Y, labels=Y_train)
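
Note that tf.nn.sparse_softmax_cross_entropy_with_logits expects integer class indices as labels and at least two logits per example, while this network produces a single float-valued output. A minimal alternative sketch, assuming the target column holds 0/1 labels (softmax over a single logit is always 1, so a sigmoid loss is the natural fit for this architecture):

# Hedged sketch: treat the single output unit as a binary logit.
Y = tf.matmul(H, W_2)  # raw logit, shape [batch, 1]
cross_entropy = tf.reduce_mean(
    tf.nn.sigmoid_cross_entropy_with_logits(labels=Y_train, logits=Y))
train_step = tf.train.GradientDescentOptimizer(0.05).minimize(cross_entropy)

# At test time, threshold the sigmoid output at 0.5 instead of re-applying softmax:
Y_obt_test = tf.matmul(H_test, W_2)
correct_prediction = tf.equal(tf.round(tf.nn.sigmoid(Y_obt_test)), Y_test)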

Regarding "python - Backpropagation in Tensorflow - accuracy does not improve with increasing hidden layer size", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/45509872/
