
python - My accuracy is always 1.0 in my neural network


I am using a multilayer perceptron with numpy and tensorflow for binary classification.

The shape of the input matrix is (9578, 18), and the shape of the labels is (9578, 1).

Here is the code:

#preprocessing
import numpy as np
import tensorflow as tf
from math import floor

input = np.loadtxt("input.csv", delimiter=",", ndmin=2).astype(np.float32)
labels = np.loadtxt("label.csv", delimiter=",", ndmin=2).astype(np.float32)

train_size = 0.9

train_cnt = floor(input.shape[0] * train_size)
x_train = input[0:train_cnt]
y_train = labels[0:train_cnt]
x_test = input[train_cnt:]
y_test = labels[train_cnt:]

#defining parameters

learning_rate = 0.01
training_epochs = 100
batch_size = 50
n_classes = labels.shape[1]
n_samples = 9578
n_inputs = input.shape[1]
n_hidden_1 = 20
n_hidden_2 = 20

def multilayer_network(X, weights, biases, keep_prob):
    '''
    X: placeholder for the data inputs
    weights: dictionary of weights
    biases: dictionary of bias values
    '''
    #first hidden layer with sigmoid activation
    # sigmoid(X*W+b)
    layer_1 = tf.add(tf.matmul(X, weights['h1']), biases['h1'])
    layer_1 = tf.nn.sigmoid(layer_1)
    layer_1 = tf.nn.dropout(layer_1, keep_prob)

    #second hidden layer
    layer_2 = tf.add(tf.matmul(layer_1, weights['h2']), biases['h2'])
    layer_2 = tf.nn.sigmoid(layer_2)
    layer_2 = tf.nn.dropout(layer_2, keep_prob)

    #output layer
    out_layer = tf.matmul(layer_2, weights['out']) + biases['out']

    return out_layer

#defining the weights and biases dictionaries

weights = {
    'h1': tf.Variable(tf.random_normal([n_inputs, n_hidden_1])),
    'h2': tf.Variable(tf.random_normal([n_hidden_1, n_hidden_2])),
    'out': tf.Variable(tf.random_normal([n_hidden_2, n_classes]))
}

biases = {
    'h1': tf.Variable(tf.random_normal([n_hidden_1])),
    'h2': tf.Variable(tf.random_normal([n_hidden_2])),
    'out': tf.Variable(tf.random_normal([n_classes]))
}
keep_prob = tf.placeholder("float")

X = tf.placeholder(tf.float32, [None, n_inputs])
Y = tf.placeholder(tf.float32, [None, n_classes])

predictions = multilayer_network(X, weights, biases, keep_prob)

#cost function (loss) and optimizer
cost = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=predictions, labels=Y))

optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate).minimize(cost)

#running the session
init = tf.global_variables_initializer()
with tf.Session() as sess:
    sess.run(init)

    #training loop
    for epoch in range(training_epochs):
        avg_cost = 0.0
        total_batch = int(len(x_train) / batch_size)
        x_batches = np.array_split(x_train, total_batch)
        y_batches = np.array_split(y_train, total_batch)
        for i in range(total_batch):
            batch_x, batch_y = x_batches[i], y_batches[i]
            _, c = sess.run([optimizer, cost],
                            feed_dict={
                                X: batch_x,
                                Y: batch_y,
                                keep_prob: 0.8
                            })
            avg_cost += c / total_batch

        print("Epoch:", '%04d' % (epoch+1), "cost=",
              "{:.9f}".format(avg_cost))

    print("Model has completed {} epochs of training".format(training_epochs))
    correct_prediction = tf.equal(tf.argmax(predictions, 1), tf.argmax(Y, 1))
    accuracy = tf.reduce_mean(tf.cast(correct_prediction, "float"))
    print("Accuracy:", accuracy.eval({X: x_test, Y: y_test, keep_prob: 1.0}))

After my model runs for 100 epochs, the cost decreases after every epoch, which suggests the network is training correctly, yet the accuracy is 1.0 every single time and I have no idea why. I'm a beginner when it comes to neural networks and how they work, so any help would be greatly appreciated. Thanks!

Edit: I tried checking the prediction matrix after each epoch and got all zeros every time. I used the following code in the epoch for loop to inspect the prediction matrix:

for epoch in range(training_epochs):
    avg_cost = 0.0
    total_batch = int(len(x_train) / batch_size)
    x_batches = np.array_split(x_train, total_batch)
    y_batches = np.array_split(y_train, total_batch)
    for i in range(total_batch):
        batch_x, batch_y = x_batches[i], y_batches[i]
        _, c, p = sess.run([optimizer, cost, predictions],
                           feed_dict={
                               X: batch_x,
                               Y: batch_y,
                               keep_prob: 0.8
                           })
        avg_cost += c / total_batch

    print("Epoch:", '%04d' % (epoch+1), "cost=",
          "{:.9f}".format(avg_cost))
    y_pred = sess.run(tf.argmax(predictions, 1), feed_dict={X: x_test, keep_prob: 1.0})
    y_true = sess.run(tf.argmax(y_test, 1))
    acc = sess.run(accuracy, feed_dict={X: x_test, Y: y_test, keep_prob: 1.0})
    print('Accuracy:', acc)
    print('---------------')
    print(y_pred, y_true)

print("Model has completed {} epochs of training".format(training_epochs))

Here is the output after 1 epoch:

Epoch: 0001 cost= 0.543714217
Accuracy: 1.0
---------------
[0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0] [0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0]

Best answer

You never call sess.run on predictions. That means it is currently the tensorflow graph variable, not the actual predicted values.

Replace _, c = sess.run([optimizer, cost], ...) with _, c, p = sess.run([optimizer, cost, predictions], ...). Then do the correct_prediction computation on the p values you get back. Likewise, your ground truth is batch_y, since your Y variable is also a tensorflow graph object. You will now be working with numpy variables, so the argmax calls should be done with np rather than tf. I believe that should fix the problem.

If you want to do it in tensorflow instead, move the correct_prediction and accuracy computations up to where you compute the cost, and change your sess.run line to: _, c, a = sess.run([optimizer, cost, accuracy], ...)

To explain why you were getting 100%: you have the line correct_prediction = tf.equal(tf.argmax(predictions, 1), tf.argmax(Y, 1)), where predictions and Y are both tensorflow graph variables. You can think of them as wrappers for the places values will flow through when you call sess.run(). So when you print the accuracy, you are comparing one tensorflow graph operation with another tensorflow graph operation, and I guess the backend treats those as always equal.

Edit: sample code below for the two different methods mentioned. Not 100% sure it works, since I can't test it easily (I don't have your data), but it should be something like this.

First method:

_, c, p = sess.run([optimizer, cost, predictions], ...)
.
.
.
correct_prediction = np.equal(np.argmax(p, axis=1), np.argmax(batch_y, axis=1))
accuracy = np.mean(correct_prediction)
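Note that batch_y here is the ground truth of whichever minibatch was fed last; to score the held-out split instead, you would run predictions with feed_dict={X: x_test, keep_prob: 1.0} and compare the result against y_test the same way.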

Second method:

cost = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=predictions, labels=Y))
correct_prediction = tf.equal(tf.argmax(predictions, 1), tf.argmax(Y, 1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
.
.
.
for i in range(total_batch):
    batch_x, batch_y = x_batches[i], y_batches[i]
    _, c, a = sess.run([optimizer, cost, accuracy],
                       feed_dict={
                           X: batch_x,
                           Y: batch_y,
                           keep_prob: 0.8
                       })
print(a)

Edit 2: While the information above is still true, there is another problem as well. Using cross entropy and accuracy makes no sense when you are only predicting one class. If you call argmax on something of length 1, you will always get 0, because that is the only position that exists! Accuracy and cross entropy are only meaningful in the context of class-based predictions, where your ground-truth values are one-hot vectors over the list of classes.
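To make the argmax point concrete, and to sketch one way of measuring binary accuracy with a single sigmoid output, here is a minimal, untested sketch. It reuses predictions, X, Y, x_test, y_test, and keep_prob from the question; the threshold-based accuracy is a suggested alternative, not code from the original answer.

import numpy as np
import tensorflow as tf

# argmax along axis 1 of a (N, 1) array is always 0, for predictions
# and ground truth alike, so tf.equal is trivially true everywhere:
print(np.argmax(np.array([[0.9], [0.1], [0.4]]), axis=1))  # -> [0 0 0]

# With a single output unit, threshold the sigmoid probability at 0.5 instead:
predicted_class = tf.cast(tf.sigmoid(predictions) > 0.5, tf.float32)
binary_accuracy = tf.reduce_mean(tf.cast(tf.equal(predicted_class, Y), tf.float32))

# Inside the session, evaluate it the same way the question does:
# print("Accuracy:", binary_accuracy.eval({X: x_test, Y: y_test, keep_prob: 1.0}))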

Regarding python - My accuracy is always 1.0 in my neural network, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/49612619/
