The other day I posted a similar question here, but since then I have edited out the bugs I found, and the problem of bad predictions still remains.
I have two networks: one with 3 conv layers, and another with 3 conv layers followed by 3 deconv layers. Both take a 200x200 input image. The output is at the same 200x200 resolution, but it has two classifications (each pixel is either a 0 or a 1, it is a segmentation network), so the predicted output of the network has dimensions 200x200x2 (plus batch_size). Let's talk about the network with the deconv layers.
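For reference, here is a quick sanity check of the spatial dimensions, written as a small standalone Python sketch (it is not part of my network code; it just assumes the 'SAME' convolutions, 2x2 max pooling, and 2x2 stride-2 'VALID' deconvolutions defined in the code further down):

def pool_out(n, k=2):
    # 'SAME' max pooling with stride k: output size is ceil(n / k)
    return (n + k - 1) // k

def deconv_out(n, k=2, s=2):
    # conv2d_transpose with 'VALID' padding: output size is (n - 1) * s + k
    return (n - 1) * s + k

n = 200                                  # 200x200 input
for name in ["conv1+pool", "conv2+pool", "conv3+pool"]:
    n = pool_out(n)                      # 'SAME' convs keep the size, pooling halves it
    print(name, n)                       # 100, 50, 25
for name in ["deconv1", "deconv2", "deconv3"]:
    n = deconv_out(n)                    # each deconv doubles the size
    print(name, n)                       # 50, 100, 200, which matches the 200x200x2 output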
Here is the weird thing... out of 10 training runs, maybe 3 of them will converge. The other 7 will diverge down to an accuracy of 0.0.
The conv and deconv layers are activated by ReLUs. The optimizer seems to do something strange. When I print the predictions after every training iteration, the magnitude of the values starts out large, which is what you would expect since they have all been passed through ReLUs, but after each iteration the values get smaller until they are roughly between 0 and 2. I subsequently pass them through a sigmoid (sigmoid_cross_entropy_with_logits), which squashes the large negative values towards 0 and the large positive values towards 1. When I make predictions, I re-activate the outputs by passing them through the sigmoid again.
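(For context, here is a small standalone numpy sketch of what the fused op computes per element, using the standard numerically stable formula from the TensorFlow documentation; the example values are made up and are not from my network:)

import numpy as np

def sigmoid_xent(logits, labels):
    # Per-element sigmoid cross entropy in its numerically stable form:
    # max(x, 0) - x*z + log(1 + exp(-|x|)). The sigmoid lives inside the loss,
    # which is why the network itself should emit raw (linear) logits.
    return np.maximum(logits, 0) - logits * labels + np.log1p(np.exp(-np.abs(logits)))

logits = np.array([-5.0, 0.3, 5.0])   # hypothetical raw network outputs
labels = np.array([ 0.0, 1.0, 1.0])   # hypothetical ground truth
print(sigmoid_xent(logits, labels))   # confident, correct logits give tiny losses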
So after the first iteration, the predictions are reasonable...
Accuracy = 0.508033
[[[[ 1.  0.]
   [ 0.  1.]
   [ 0.  0.]
   ...,
   [ 1.  0.]
   [ 1.  1.]
   [ 1.  0.]]

  [[ 0.  1.]
   [ 1.  1.]
   [ 0.  0.]
   ...,
   [ 1.  1.]
   [ 1.  1.]
   [ 0.  1.]]
But then after some iterations, and assuming it actually converges this time, the predictions look like this (the optimizer has made the outputs smaller, so they all sit in that strange middle range of the sigmoid):
  [[ 0.51028508  0.63202268]
   [ 0.24386917  0.52015287]
   [ 0.62086064  0.6953823 ]
   ...,
   [ 0.2593964   0.13163178]
   [ 0.24617286  0.5210492 ]
   [ 0.24692698  0.5876413 ]]]]
Accuracy = 0.999913
Is my optimizer function wrong?
Here is the entire code. Jump to def conv_net to see the network creation; the cost function, optimizer, and accuracy are defined right after that function. You will notice that when I measure accuracy and make predictions, I re-activate the output with tf.nn.sigmoid(pred); that is because the cost function, sigmoid_cross_entropy_with_logits, combines the activation and the loss in the same function. In other words, pred (the network) outputs a linear value.
import tensorflow as tf
import pdb
import numpy as np
from numpy import genfromtxt
from PIL import Image

# Parameters
learning_rate = 0.001
training_iters = 10000
batch_size = 10
display_step = 1

# Network Parameters
n_input = 200 # MNIST data input (img shape: 28*28)
n_output = 40000
n_classes = 2 # MNIST total classes (0-9 digits)
#n_input = 200
dropout = 0.75 # Dropout, probability to keep units

# tf Graph input
x = tf.placeholder(tf.float32, [None, n_input, n_input])
y = tf.placeholder(tf.float32, [None, n_input, n_input, n_classes])
keep_prob = tf.placeholder(tf.float32) #dropout (keep probability)

def convert_to_2_channel(x, batch_size):
    #assume input has dimension (batch_size,x,y)
    #output will have dimension (batch_size,x,y,2)
    output = np.empty((batch_size, 200, 200, 2))

    temp_arr1 = np.empty((batch_size, 200, 200))
    temp_arr2 = np.empty((batch_size, 200, 200))

    for i in xrange(batch_size):
        for j in xrange(3):
            for k in xrange(3):
                if x[i][j][k] == 1:
                    temp_arr1[i][j][k] = 1
                    temp_arr2[i][j][k] = 0
                else:
                    temp_arr1[i][j][k] = 0
                    temp_arr2[i][j][k] = 1

    for i in xrange(batch_size):
        for j in xrange(200):
            for k in xrange(200):
                for l in xrange(2):
                    if l == 0:
                        output[i][j][k][l] = temp_arr1[i][j][k]
                    else:
                        output[i][j][k][l] = temp_arr2[i][j][k]

    return output

# Create some wrappers for simplicity
def conv2d(x, W, b, strides=1):
    # Conv2D wrapper, with bias and relu activation
    x = tf.nn.conv2d(x, W, strides=[1, strides, strides, 1], padding='SAME')
    x = tf.nn.bias_add(x, b)
    return tf.nn.relu(x)

def maxpool2d(x, k=2):
    # MaxPool2D wrapper
    return tf.nn.max_pool(x, ksize=[1, k, k, 1], strides=[1, k, k, 1],
                          padding='SAME')

# Create model
def conv_net(x, weights, biases, dropout):
    # Reshape input picture
    x = tf.reshape(x, shape=[-1, 200, 200, 1])

    # Convolution Layer
    conv1 = conv2d(x, weights['wc1'], biases['bc1'])
    # Max Pooling (down-sampling)
    #conv1 = tf.nn.local_response_normalization(conv1)
    conv1 = maxpool2d(conv1, k=2)

    # Convolution Layer
    conv2 = conv2d(conv1, weights['wc2'], biases['bc2'])
    # Max Pooling (down-sampling)
    #conv2 = tf.nn.local_response_normalization(conv2)
    conv2 = maxpool2d(conv2, k=2)

    # Convolution Layer
    conv3 = conv2d(conv2, weights['wc3'], biases['bc3'])
    # # Max Pooling (down-sampling)
    #conv3 = tf.nn.local_response_normalization(conv3)
    conv3 = maxpool2d(conv3, k=2)

    temp_batch_size = tf.shape(x)[0]
    output_shape = [temp_batch_size, 50, 50, 64]
    conv4 = tf.nn.conv2d_transpose(conv3, weights['wdc1'], output_shape=output_shape, strides=[1,2,2,1], padding="VALID")
    conv4 = tf.nn.bias_add(conv4, biases['bdc1'])
    conv4 = tf.nn.relu(conv4)
    # conv4 = tf.nn.local_response_normalization(conv4)

    # output_shape = tf.pack([temp_batch_size, 100, 100, 32])
    output_shape = [temp_batch_size, 100, 100, 32]
    conv5 = tf.nn.conv2d_transpose(conv4, weights['wdc2'], output_shape=output_shape, strides=[1,2,2,1], padding="VALID")
    conv5 = tf.nn.bias_add(conv5, biases['bdc2'])
    conv5 = tf.nn.relu(conv5)
    # conv5 = tf.nn.local_response_normalization(conv5)

    # output_shape = tf.pack([temp_batch_size, 200, 200, 1])
    output_shape = [temp_batch_size, 200, 200, 2]
    conv6 = tf.nn.conv2d_transpose(conv5, weights['wdc3'], output_shape=output_shape, strides=[1,2,2,1], padding="VALID")
    conv6 = tf.nn.bias_add(conv6, biases['bdc3'])
    conv6 = tf.nn.relu(conv6)
    # pdb.set_trace()

    # Fully connected layer
    # Reshape conv2 output to fit fully connected layer input
    fc1 = tf.reshape(conv6, [-1, weights['wd1'].get_shape().as_list()[0]])
    fc1 = tf.add(tf.matmul(fc1, weights['wd1']), biases['bd1'])
    fc1 = tf.nn.relu(fc1)
    # Apply Dropout
    fc1 = tf.nn.dropout(fc1, dropout)

    return (tf.add(tf.matmul(fc1, weights['out']), biases['out']))

# Store layers weight & bias
weights = {
    # 5x5 conv, 1 input, 32 outputs
    'wc1' : tf.Variable(tf.random_normal([5, 5, 1, 32])),
    # 5x5 conv, 32 inputs, 64 outputs
    'wc2' : tf.Variable(tf.random_normal([5, 5, 32, 64])),
    # 5x5 conv, 32 inputs, 64 outputs
    'wc3' : tf.Variable(tf.random_normal([5, 5, 64, 128])),
    'wdc1' : tf.Variable(tf.random_normal([2, 2, 64, 128])),
    'wdc2' : tf.Variable(tf.random_normal([2, 2, 32, 64])),
    'wdc3' : tf.Variable(tf.random_normal([2, 2, 2, 32])),
    # fully connected, 7*7*64 inputs, 1024 outputs
    'wd1': tf.Variable(tf.random_normal([80000, 1024])),
    # 1024 inputs, 10 outputs (class prediction)
    'out': tf.Variable(tf.random_normal([1024, 80000]))
}

biases = {
    'bc1': tf.Variable(tf.random_normal([32])),
    'bc2': tf.Variable(tf.random_normal([64])),
    'bc3': tf.Variable(tf.random_normal([128])),
    'bdc1': tf.Variable(tf.random_normal([64])),
    'bdc2': tf.Variable(tf.random_normal([32])),
    'bdc3': tf.Variable(tf.random_normal([2])),
    'bd1': tf.Variable(tf.random_normal([1024])),
    'out': tf.Variable(tf.random_normal([80000]))
}

# Construct model
pred = conv_net(x, weights, biases, keep_prob)
pred = tf.reshape(pred, [-1,n_input,n_input,n_classes])

# Define loss and optimizer
cost = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(pred, y))
# cost = (tf.nn.sigmoid_cross_entropy_with_logits(pred, y))
optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate).minimize(cost)

# Evaluate model
correct_pred = tf.equal(0,tf.cast(tf.sub(tf.nn.sigmoid(pred),y), tf.int32))
accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32))

# Initializing the variables
init = tf.initialize_all_variables()
saver = tf.train.Saver()

# Launch the graph
with tf.Session() as sess:
    sess.run(init)
    summary = tf.train.SummaryWriter('/tmp/logdir/', sess.graph)
    step = 1
    from tensorflow.contrib.learn.python.learn.datasets.scroll import scroll_data
    data = scroll_data.read_data('/home/kendall/Desktop/')
    # Keep training until reach max iterations
    while step * batch_size < training_iters:
        batch_x, batch_y = data.train.next_batch(batch_size)
        # Run optimization op (backprop)
        batch_x = batch_x.reshape((batch_size, n_input, n_input))
        batch_y = batch_y.reshape((batch_size, n_input, n_input))
        batch_y = convert_to_2_channel(batch_y, batch_size) #converts the 200x200 ground truth to a 200x200x2 classification
        sess.run(optimizer, feed_dict={x: batch_x, y: batch_y,
                                       keep_prob: dropout})
        #measure prediction
        prediction = sess.run(tf.nn.sigmoid(pred), feed_dict={x: batch_x, keep_prob: 1.})
        print prediction
        if step % display_step == 0:
            # Calculate batch loss and accuracy
            save_path = "model.ckpt"
            saver.save(sess, save_path)
            loss, acc = sess.run([cost, accuracy], feed_dict={x: batch_x,
                                                              y: batch_y,
                                                              keep_prob: dropout})
            print "Accuracy = " + str(acc)
            if acc > 0.73:
                break
        step += 1
    print "Optimization Finished!"

    #make prediction
    im = Image.open('/home/kendall/Desktop/HA900_frames/frame0035.tif')
    batch_x = np.array(im)
    # pdb.set_trace()
    batch_x = batch_x.reshape((1, n_input, n_input))
    batch_x = batch_x.astype(float)
    pdb.set_trace()
    prediction = sess.run(tf.nn.sigmoid(pred), feed_dict={x: batch_x, keep_prob: dropout})
    print prediction
    arr1 = np.empty((n_input,n_input))
    arr2 = np.empty((n_input,n_input))
    for i in xrange(n_input):
        for j in xrange(n_input):
            for k in xrange(2):
                if k == 0:
                    arr1[i][j] = (prediction[0][i][j][k])
                else:
                    arr2[i][j] = (prediction[0][i][j][k])
    # prediction = np.asarray(prediction)
    # prediction = np.reshape(prediction, (200,200))
    # np.savetxt("prediction.csv", prediction, delimiter=",")
    np.savetxt("prediction1.csv", arr1, delimiter=",")
    np.savetxt("prediction2.csv", arr2, delimiter=",")
    # np.savetxt("prediction2.csv", arr2, delimiter=",")

    # Calculate accuracy for 256 mnist test images
    print "Testing Accuracy:", \
        sess.run(accuracy, feed_dict={x: data.test.images[:256],
                                      y: data.test.labels[:256],
                                      keep_prob: 1.})
The correct_pred variable (the one that measures accuracy) simply subtracts the prediction from the ground truth and then compares the result to zero (if the two were equal, the difference should be zero).
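To see why this can report high accuracy on fuzzy predictions, here is a small standalone numpy sketch with made-up values (not taken from an actual run): casting the float difference to int32 truncates anything strictly between -1 and 1 to 0, so a sigmoid output of 0.63 against a label of 1.0 still counts as "correct".

import numpy as np

sig_pred = np.array([0.51, 0.63, 0.24])      # hypothetical sigmoid outputs
labels   = np.array([0.00, 1.00, 1.00])      # hypothetical ground truth
diff = (sig_pred - labels).astype(np.int32)  # mirrors tf.cast(tf.sub(...), tf.int32)
correct = (diff == 0)                        # mirrors tf.equal(0, ...)
print(correct.mean())                        # 1.0, every element counts as "correct"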
In addition, I have graphed the network, and it just looks very off to me. Here is a picture; I had to crop it in order to view it.
EDIT: I found out why my graph looks terrible (thanks Olivier), and I have also tried changing my loss function, but to no end. It still diverges in the same manner.
with tf.name_scope("loss") as scope:
    # cost = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(pred, y))
    temp_pred = tf.reshape(pred, [-1, 2])
    temp_y = tf.reshape(y, [-1, 2])
    cost = (tf.nn.softmax_cross_entropy_with_logits(temp_pred, temp_y))
EDIT: The full code now looks like this (and it still diverges):
import tensorflow as tf
import pdb
import numpy as np
from numpy import genfromtxt
from PIL import Image

# Parameters
learning_rate = 0.001
training_iters = 10000
batch_size = 10
display_step = 1

# Network Parameters
n_input = 200 # MNIST data input (img shape: 28*28)
n_output = 40000
n_classes = 2 # MNIST total classes (0-9 digits)
#n_input = 200
dropout = 0.75 # Dropout, probability to keep units

# tf Graph input
x = tf.placeholder(tf.float32, [None, n_input, n_input])
y = tf.placeholder(tf.float32, [None, n_input, n_input, n_classes])
keep_prob = tf.placeholder(tf.float32) #dropout (keep probability)

def convert_to_2_channel(x, batch_size):
    #assume input has dimension (batch_size,x,y)
    #output will have dimension (batch_size,x,y,2)
    output = np.empty((batch_size, 200, 200, 2))

    temp_arr1 = np.empty((batch_size, 200, 200))
    temp_arr2 = np.empty((batch_size, 200, 200))

    for i in xrange(batch_size):
        for j in xrange(3):
            for k in xrange(3):
                if x[i][j][k] == 1:
                    temp_arr1[i][j][k] = 1
                    temp_arr2[i][j][k] = 0
                else:
                    temp_arr1[i][j][k] = 0
                    temp_arr2[i][j][k] = 1

    for i in xrange(batch_size):
        for j in xrange(200):
            for k in xrange(200):
                for l in xrange(2):
                    if l == 0:
                        output[i][j][k][l] = temp_arr1[i][j][k]
                    else:
                        output[i][j][k][l] = temp_arr2[i][j][k]

    return output

# Create some wrappers for simplicity
def conv2d(x, W, b, strides=1):
    # Conv2D wrapper, with bias and relu activation
    x = tf.nn.conv2d(x, W, strides=[1, strides, strides, 1], padding='SAME')
    x = tf.nn.bias_add(x, b)
    return tf.nn.relu(x)

def maxpool2d(x, k=2):
    # MaxPool2D wrapper
    return tf.nn.max_pool(x, ksize=[1, k, k, 1], strides=[1, k, k, 1],
                          padding='SAME')

# Create model
def conv_net(x, weights, biases, dropout):
    # Reshape input picture
    x = tf.reshape(x, shape=[-1, 200, 200, 1])

    with tf.name_scope("conv1") as scope:
        # Convolution Layer
        conv1 = conv2d(x, weights['wc1'], biases['bc1'])
        # Max Pooling (down-sampling)
        #conv1 = tf.nn.local_response_normalization(conv1)
        conv1 = maxpool2d(conv1, k=2)

    # Convolution Layer
    with tf.name_scope("conv2") as scope:
        conv2 = conv2d(conv1, weights['wc2'], biases['bc2'])
        # Max Pooling (down-sampling)
        #conv2 = tf.nn.local_response_normalization(conv2)
        conv2 = maxpool2d(conv2, k=2)

    # Convolution Layer
    with tf.name_scope("conv3") as scope:
        conv3 = conv2d(conv2, weights['wc3'], biases['bc3'])
        # # Max Pooling (down-sampling)
        #conv3 = tf.nn.local_response_normalization(conv3)
        conv3 = maxpool2d(conv3, k=2)

    temp_batch_size = tf.shape(x)[0]
    with tf.name_scope("deconv1") as scope:
        output_shape = [temp_batch_size, 50, 50, 64]
        conv4 = tf.nn.conv2d_transpose(conv3, weights['wdc1'], output_shape=output_shape, strides=[1,2,2,1], padding="VALID")
        conv4 = tf.nn.bias_add(conv4, biases['bdc1'])
        conv4 = tf.nn.relu(conv4)
        # conv4 = tf.nn.local_response_normalization(conv4)

    with tf.name_scope("deconv2") as scope:
        # output_shape = tf.pack([temp_batch_size, 100, 100, 32])
        output_shape = [temp_batch_size, 100, 100, 32]
        conv5 = tf.nn.conv2d_transpose(conv4, weights['wdc2'], output_shape=output_shape, strides=[1,2,2,1], padding="VALID")
        conv5 = tf.nn.bias_add(conv5, biases['bdc2'])
        conv5 = tf.nn.relu(conv5)
        # conv5 = tf.nn.local_response_normalization(conv5)

    with tf.name_scope("deconv3") as scope:
        # output_shape = tf.pack([temp_batch_size, 200, 200, 1])
        output_shape = [temp_batch_size, 200, 200, 2]
        conv6 = tf.nn.conv2d_transpose(conv5, weights['wdc3'], output_shape=output_shape, strides=[1,2,2,1], padding="VALID")
        conv6 = tf.nn.bias_add(conv6, biases['bdc3'])
        # conv6 = tf.nn.relu(conv6)
        # pdb.set_trace()
        conv6 = tf.nn.dropout(conv6, dropout)

    return conv6

    # Fully connected layer
    # Reshape conv2 output to fit fully connected layer input
    # fc1 = tf.reshape(conv6, [-1, weights['wd1'].get_shape().as_list()[0]])
    # fc1 = tf.add(tf.matmul(fc1, weights['wd1']), biases['bd1'])
    # fc1 = tf.nn.relu(fc1)
    # # Apply Dropout
    # fc1 = tf.nn.dropout(fc1, dropout)
    #
    # return (tf.add(tf.matmul(fc1, weights['out']), biases['out']))

# Store layers weight & bias
weights = {
    # 5x5 conv, 1 input, 32 outputs
    'wc1' : tf.Variable(tf.random_normal([5, 5, 1, 32])),
    # 5x5 conv, 32 inputs, 64 outputs
    'wc2' : tf.Variable(tf.random_normal([5, 5, 32, 64])),
    # 5x5 conv, 32 inputs, 64 outputs
    'wc3' : tf.Variable(tf.random_normal([5, 5, 64, 128])),
    'wdc1' : tf.Variable(tf.random_normal([2, 2, 64, 128])),
    'wdc2' : tf.Variable(tf.random_normal([2, 2, 32, 64])),
    'wdc3' : tf.Variable(tf.random_normal([2, 2, 2, 32])),
    # fully connected, 7*7*64 inputs, 1024 outputs
    'wd1': tf.Variable(tf.random_normal([80000, 1024])),
    # 1024 inputs, 10 outputs (class prediction)
    'out': tf.Variable(tf.random_normal([1024, 80000]))
}

biases = {
    'bc1': tf.Variable(tf.random_normal([32])),
    'bc2': tf.Variable(tf.random_normal([64])),
    'bc3': tf.Variable(tf.random_normal([128])),
    'bdc1': tf.Variable(tf.random_normal([64])),
    'bdc2': tf.Variable(tf.random_normal([32])),
    'bdc3': tf.Variable(tf.random_normal([2])),
    'bd1': tf.Variable(tf.random_normal([1024])),
    'out': tf.Variable(tf.random_normal([80000]))
}

# Construct model
# with tf.name_scope("net") as scope:
pred = conv_net(x, weights, biases, keep_prob)
pred = tf.reshape(pred, [-1,n_input,n_input,n_classes])

# Define loss and optimizer
with tf.name_scope("loss") as scope:
    # cost = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(pred, y))
    temp_pred = tf.reshape(pred, [-1, 2])
    temp_y = tf.reshape(y, [-1, 2])
    cost = (tf.nn.softmax_cross_entropy_with_logits(temp_pred, temp_y))

with tf.name_scope("opt") as scope:
    optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate).minimize(cost)
    # optimizer = tf.train.GradientDescentOptimizer(learning_rate=learning_rate).minimize(cost)

# Evaluate model
with tf.name_scope("acc") as scope:
    correct_pred = tf.equal(0,tf.cast(tf.sub(tf.nn.softmax(temp_pred),y), tf.int32))
    accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32))

# Initializing the variables
init = tf.initialize_all_variables()
saver = tf.train.Saver()

# Launch the graph
with tf.Session() as sess:
    sess.run(init)
    summary = tf.train.SummaryWriter('/tmp/logdir/', sess.graph)
    step = 1
    from tensorflow.contrib.learn.python.learn.datasets.scroll import scroll_data
    data = scroll_data.read_data('/home/kendall/Desktop/')
    # Keep training until reach max iterations
    while step * batch_size < training_iters:
        batch_x, batch_y = data.train.next_batch(batch_size)
        # Run optimization op (backprop)
        batch_x = batch_x.reshape((batch_size, n_input, n_input))
        batch_y = batch_y.reshape((batch_size, n_input, n_input))
        batch_y = convert_to_2_channel(batch_y, batch_size) #converts the 200x200 ground truth to a 200x200x2 classification
        batch_y = batch_y.reshape(batch_size * n_input * n_input, 2)
        sess.run(optimizer, feed_dict={x: batch_x, temp_y: batch_y,
                                       keep_prob: dropout})
        #measure prediction
        prediction = sess.run(tf.nn.softmax(temp_pred), feed_dict={x: batch_x, keep_prob: dropout})
        print prediction
        if step % display_step == 0:
            # Calculate batch loss and accuracy
            save_path = "model.ckpt"
            saver.save(sess, save_path)
            loss, acc = sess.run([cost, accuracy], feed_dict={x: batch_x,
                                                              y: batch_y,
                                                              keep_prob: dropout})
            print "Accuracy = " + str(acc)
            if acc > 0.73:
                break
        step += 1
    print "Optimization Finished!"

    #make prediction
    im = Image.open('/home/kendall/Desktop/HA900_frames/frame0035.tif')
    batch_x = np.array(im)
    # pdb.set_trace()
    batch_x = batch_x.reshape((1, n_input, n_input))
    batch_x = batch_x.astype(float)
    pdb.set_trace()
    prediction = sess.run(tf.nn.sigmoid(pred), feed_dict={x: batch_x, keep_prob: dropout})
    print prediction
    arr1 = np.empty((n_input,n_input))
    arr2 = np.empty((n_input,n_input))
    for i in xrange(n_input):
        for j in xrange(n_input):
            for k in xrange(2):
                if k == 0:
                    arr1[i][j] = (prediction[0][i][j][k])
                else:
                    arr2[i][j] = (prediction[0][i][j][k])
    # prediction = np.asarray(prediction)
    # prediction = np.reshape(prediction, (200,200))
    # np.savetxt("prediction.csv", prediction, delimiter=",")
    np.savetxt("prediction1.csv", arr1, delimiter=",")
    np.savetxt("prediction2.csv", arr2, delimiter=",")
    # np.savetxt("prediction2.csv", arr2, delimiter=",")

    # Calculate accuracy for 256 mnist test images
    print "Testing Accuracy:", \
        sess.run(accuracy, feed_dict={x: data.test.images[:256],
                                      y: data.test.labels[:256],
                                      keep_prob: 1.})
Best Answer
The whole concept of a deconvolution is to output something of the same size as the input.
At the line:
conv6 = tf.nn.bias_add(conv6, biases['bdc3'])
you have an output of shape [batch_size, 200, 200, 2], so you do not need to add a fully connected layer at all. Just return conv6 (without the final ReLU).
Since you use 2 classes in your prediction, together with the ground truth y, you need to use tf.nn.softmax_cross_entropy_with_logits(), not the sigmoid cross entropy. Make sure that y always has values like y[i, j] = [0., 1.] or y[i, j] = [1., 0.].
pred = conv_net(x, weights, biases, keep_prob) # NEW prediction conv6
pred = tf.reshape(pred, [-1, n_classes])
# Define loss and optimizer
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(pred, y))
If you want your TensorBoard graph to look nice (or at least readable), make sure to use tf.name_scope().
Your accuracy is also wrong. You measure whether softmax(pred) and y are equal, but softmax(pred) can never be equal to 0. or 1., so your accuracy will be 0.
Here is what you should do instead:
with tf.name_scope("acc") as scope:
    correct_pred = tf.equal(tf.argmax(temp_pred, 1), tf.argmax(temp_y, 1))
    accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32))
The real error was a typo in convert_to_2_channel, in the loop
for j in xrange(3):
The 3 should be 200.
Lesson: when debugging, print everything, step by step, on a very simple example, and you will find that the buggy function returns the wrong output.
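As an aside, a vectorized convert_to_2_channel would sidestep this kind of indexing bug entirely. A minimal numpy sketch, assuming the label array contains only 0s and 1s:

import numpy as np

def convert_to_2_channel_vectorized(x):
    # x: (batch_size, 200, 200) array of 0/1 labels.
    # Returns (batch_size, 200, 200, 2): channel 0 is the "label == 1" map and
    # channel 1 is the "label == 0" map, matching the original function's convention.
    x = np.asarray(x, dtype=np.float64)
    return np.stack([x, 1.0 - x], axis=-1)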
For more on "python - TensorFlow converges but makes bad predictions", see the similar question we found on Stack Overflow: https://stackoverflow.com/questions/37973619/