
python - Loss and accuracy are both 0 when using a neural network with a single output neuron in TensorFlow


I am writing a binary classifier for a particular task, and instead of using 2 neurons in the output layer I only want to use one neuron with a sigmoid function, outputting class 0 if it is below 0.5 and class 1 otherwise.

The images are loaded, resized to 64x64 and flattened (to create a reproduction of the problem). The code that loads the data appears at the end. I create the placeholders:

x = tf.placeholder('float',[None, 64*64])
y = tf.placeholder('float',[None, 1])

and define the model as follows:

def create_model_linear(data):

    fcl1_desc = {'weights': weight_variable([4096,128]), 'biases': bias_variable([128])}
    fcl2_desc = {'weights': weight_variable([128,1]), 'biases': bias_variable([1])}

    fc1 = tf.nn.relu(tf.matmul(data, fcl1_desc['weights']) + fcl1_desc['biases'])
    fc2 = tf.nn.sigmoid(tf.matmul(fc1, fcl2_desc['weights']) + fcl2_desc['biases'])

    return fc2

The weight_variable and bias_variable functions simply return a tf.Variable() of the given shape. (Their code is also at the end.)

Then I define the training function as follows:

def train(x, hm_epochs):
    prediction = create_model_linear(x)
    cost = tf.reduce_mean( tf.nn.softmax_cross_entropy_with_logits(logits = prediction, labels = y) )
    optimizer = tf.train.AdamOptimizer(learning_rate=0.001).minimize(cost)
    batch_size = 100
    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())

        for epoch in range(hm_epochs):
            epoch_loss = 0
            i = 0
            while i < len(train_x):
                start = i
                end = i + batch_size
                batch_x = train_x[start:end]
                batch_y = train_y[start:end]
                _, c = sess.run([optimizer, cost], feed_dict = {x:batch_x, y:batch_y})

                epoch_loss += c
                i += batch_size

            print('Epoch', epoch+1, 'completed out of', hm_epochs, 'loss:', epoch_loss)

        correct = tf.greater(prediction, [0.5])
        accuracy = tf.reduce_mean(tf.cast(correct, 'float'))
        i = 0
        acc = []
        while i < len(train_x):
            acc += [accuracy.eval({x:train_x[i:i+1000], y:train_y[i:i+1000]})]
            i += 1000
        print sum(acc)/len(acc)

The output of train(x, 10) is

('Epoch', 1, 'completed out of', 10, 'loss:', 0.0)
('Epoch', 2, 'completed out of', 10, 'loss:', 0.0)
('Epoch', 3, 'completed out of', 10, 'loss:', 0.0)
('Epoch', 4, 'completed out of', 10, 'loss:', 0.0)
('Epoch', 5, 'completed out of', 10, 'loss:', 0.0)
('Epoch', 6, 'completed out of', 10, 'loss:', 0.0)
('Epoch', 7, 'completed out of', 10, 'loss:', 0.0)
('Epoch', 8, 'completed out of', 10, 'loss:', 0.0)
('Epoch', 9, 'completed out of', 10, 'loss:', 0.0)
('Epoch', 10, 'completed out of', 10, 'loss:', 0.0)

0.0

What am I missing?

Here is the promised code for all the utility functions:

def bias_variable(shape):
    initial = tf.constant(0.1, shape=shape)
    return tf.Variable(initial)

def weight_variable(shape):
    initial = tf.truncated_normal(shape, stddev=0.1)
    return tf.Variable(initial)

def getLabel(wordlabel):
    if wordlabel == 'Class_A':
        return [1]
    elif wordlabel == 'Class_B':
        return [0]
    else:
        return -1

def loadImages(pathToImgs):
    images = []
    labels = []
    filenames = os.listdir(pathToImgs)
    imgCount = 0
    for i in tqdm(filenames):
        wordlabel = i.split('_')[1]
        oneHotLabel = getLabel(wordlabel)
        img = cv2.imread(pathToImgs + i, cv2.IMREAD_GRAYSCALE)
        if oneHotLabel != -1 and type(img) is np.ndarray:
            images += [cv2.resize(img, (64,64)).flatten()]
            labels += [oneHotLabel]
            imgCount += 1
    print imgCount
    return (images, labels)

Best answer

I think you should use tf.nn.sigmoid_cross_entropy_with_logits instead of tf.nn.softmax_cross_entropy_with_logits, since you are using a sigmoid and a single neuron in the output layer. With only one output neuron, the softmax over a single logit is always 1.0, so that cross-entropy is identically 0, which is why the loss never moves.
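A minimal sketch of that change, assuming prediction is the raw (pre-sigmoid) output of the last layer as described below:

cost = tf.reduce_mean(
    tf.nn.sigmoid_cross_entropy_with_logits(logits=prediction, labels=y))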

You also need to remove the sigmoid from the last layer in create_model_linear. In addition, you are not using the y labels; the accuracy should take the following form:

correct = tf.equal(tf.greater(tf.nn.sigmoid(prediction),[0.5]),tf.cast(y,'bool'))
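For completeness, here is a sketch of create_model_linear with the sigmoid removed from the last layer, so it returns raw logits as sigmoid_cross_entropy_with_logits expects (weight_variable and bias_variable are the helpers from the question):

def create_model_linear(data):

    fcl1_desc = {'weights': weight_variable([4096,128]), 'biases': bias_variable([128])}
    fcl2_desc = {'weights': weight_variable([128,1]), 'biases': bias_variable([1])}

    fc1 = tf.nn.relu(tf.matmul(data, fcl1_desc['weights']) + fcl1_desc['biases'])
    # Return raw logits; the sigmoid is applied inside the loss and in the accuracy check above.
    return tf.matmul(fc1, fcl2_desc['weights']) + fcl2_desc['biases']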

Regarding python - Loss and accuracy are both 0 when using a neural network with a single output neuron in TensorFlow, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/45459042/
