I have been working on this all day, but I can't find a possible solution online.
I am writing a convolutional neural network to classify some black-and-white images. I first read the data and set up the network architecture, and then run the training part, but I always hit this error when trying to train:
tensorflow.python.framework.errors_impl.InvalidArgumentError: Expected begin[0] == 0 (got -1) and size[0] == 0 (got 1) when input.dim_size(0) == 0
[[Node: Slice_1 = Slice[Index=DT_INT32, T=DT_INT32, _device="/job:localhost/replica:0/task:0/cpu:0"](Shape_2, Slice_1/begin, Slice_1/size)]]
I couldn't find anything helpful online and I don't know what is wrong. Thank you very much; I have included the whole code below:
import tensorflow as tf
from tensorflow.python.framework import ops
from tensorflow.python.framework import dtypes
import random
"""
N_CLASSES, number of classes of the dataset, 2 classes, one for an error and the other one if it is ok
BATCH_SIZE, is going to depend on the number of samples that we have
IMAGE_HEIGHT, height of the images
IMAGE_WIDTH, width of the images
TOTAL_SIZE, total size of the image
"""
N_CLASSES = 2
BATCH_SIZE = 5
NUM_CHANNELS = 1
IMAGE_HEIGHT = 696
IMAGE_WIDTH = 1024
TOTAL_SIZE = 1024*696
x = tf.placeholder(tf.float32, [None, None, 1])
y = tf.placeholder(tf.int32)
# Keep rate will be 0.6
keep_rate = 0.6
keep_prob = tf.placeholder(tf.float32)
""" Function for encoding the label from string to int"""
def encode_label(label):
    return int(label)
""" Function for reading a label file separated by ","
e.g.: /home/pacocp/dataset/image1.jpg,1
"""
def read_label_file(file):
    f = open(file)
    filepaths = []
    labels = []
    for line in f:
        filepath, label = line.split(",")
        filepaths.append(filepath)
        labels.append(encode_label(label))
    return filepaths, labels
"""This function is going to load the SEM images """
def load_images(dataset_path, test_labels_file, train_labels_file):
    # reading labels and file path
    train_filepaths, train_labels = read_label_file(dataset_path + train_labels_file)
    test_filepaths, test_labels = read_label_file(dataset_path + test_labels_file)
    """
    # transform relative path into full path
    train_filepaths = [ dataset_path + fp for fp in train_filepaths]
    test_filepaths = [ dataset_path + fp for fp in test_filepaths]
    """
    # for this example we will create our own test partition
    all_filepaths = train_filepaths + test_filepaths
    all_labels = train_labels + test_labels
    # convert strings into tensors
    all_images = ops.convert_to_tensor(all_filepaths, dtype=dtypes.string)
    all_labels = ops.convert_to_tensor(all_labels, dtype=dtypes.int32)
    # now, we are going to create a partition vector
    test_set_size = 5
    partitions = [0] * len(all_filepaths)
    partitions[:test_set_size] = [1] * test_set_size
    random.shuffle(partitions)
    # partition our data into a test and train set according to our partition vector
    train_images, test_images = tf.dynamic_partition(all_images, partitions, 2)
    train_labels, test_labels = tf.dynamic_partition(all_labels, partitions, 2)
    # create input queues
    train_input_queue = tf.train.slice_input_producer(
        [train_images, train_labels],
        shuffle=False)
    test_input_queue = tf.train.slice_input_producer(
        [test_images, test_labels],
        shuffle=False)
    # process path and string tensor into an image and a label
    file_content = tf.read_file(train_input_queue[0])
    train_image = tf.image.decode_jpeg(file_content, channels=NUM_CHANNELS) # You have to change this line depending on the image format
    train_label = train_input_queue[1]
    file_content = tf.read_file(test_input_queue[0])
    test_image = tf.image.decode_jpeg(file_content, channels=NUM_CHANNELS)
    test_label = test_input_queue[1]
    # define tensor shape
    train_image.set_shape([IMAGE_HEIGHT, IMAGE_WIDTH, 1])
    test_image.set_shape([IMAGE_HEIGHT, IMAGE_WIDTH, 1])
    """ TEST FOR NOT USING BATCHES AND USING ALL THE IMAGES DIRECTLY
    print("Here")
    # collect batches of images before processing
    train_image_batch, train_label_batch = tf.train.batch(
        [train_image, train_label],
        batch_size=BATCH_SIZE
        #,num_threads=1
        )
    test_image_batch, test_label_batch = tf.train.batch(
        [test_image, test_label],
        batch_size=BATCH_SIZE
        #,num_threads=1
        )
    return {'train_image_batch':train_image_batch, 'train_label_batch':train_label_batch,
            'test_image_batch':test_image_batch, 'test_label_batch':test_label_batch}
    """
    return {'train_image_batch':train_image, 'train_label_batch':train_label,
            'test_image_batch':test_image, 'test_label_batch':test_label}
""" These are going to be used for creating the weights and the biases"""
def weight_variable(shape):
    initial = tf.truncated_normal(shape, stddev=0.1)
    return tf.Variable(initial)

def bias_variable(shape):
    initial = tf.constant(0.1, shape=shape)
    return tf.Variable(initial)

def conv2d(data, weights):
    return tf.nn.conv2d(data, weights, strides=[1,1,1,1], padding='SAME') # We are not going to reduce the depth

def maxpool2d(data):
    """Here we are going to move two by two at a time: the size of the window equals the movement of the window"""
    return tf.nn.max_pool(data, ksize=[1,2,2,1], strides=[1,2,2,1], padding='SAME')
def convolutional_neural_network(data):
    """Here we are going to create the weights and biases variables for generating our neural network"""
    print("Creating first layer")
    w_conv1 = weight_variable([15, 15, 1, 32])
    b_conv1 = bias_variable([32])
    x_image = tf.reshape(data, shape=[-1, 696, 1024, 1]) # Reshape the image; the second and third elements
                                                         # are height and width, and the last dimension is the color channel
    # First convolutional layer
    h_conv1 = tf.nn.relu(conv2d(x_image, w_conv1) + b_conv1)
    h_pool1 = maxpool2d(h_conv1)
    print("Creating second layer")
    w_conv2 = weight_variable([15, 15, 32, 64])
    b_conv2 = bias_variable([64])
    # Second convolutional layer
    h_conv2 = tf.nn.relu(conv2d(h_pool1, w_conv2) + b_conv2)
    h_pool2 = maxpool2d(h_conv2)
    print("Creating fully-connected layer")
    w_fc1 = weight_variable([1024, 1024])
    b_fc1 = bias_variable([1024])
    # Final
    h_pool2_flat = tf.reshape(h_pool2, [-1, 1024])
    fc = tf.nn.relu(tf.matmul(h_pool2_flat, w_fc1) + b_fc1)
    """The idea of dropout is to help us in a
    bigger neural network; dropout is going to help fight
    local minima"""
    fc_dropout = tf.nn.dropout(fc, keep_rate) # Compute dropout
    print("Creating output layer")
    w_fc2 = weight_variable([1024, N_CLASSES])
    b_fc2 = bias_variable([N_CLASSES])
    # Final layer with a softmax
    y = tf.matmul(fc_dropout, w_fc2) + b_fc2
    print("CNN created")
    return y
'''Here is the main section, where we are going to train the convolutional neural network'''
#Here we read the images
dataset_path = "/media/datos/Dropbox/4ºaño/Image Analysis and Computer Vision/NanoFibers/DataSet/"
test_labels_file = "SEM_test_labels.txt"
train_labels_file = "SEM_train_labels.txt"
print("Loading the images...")
train_and_test_sets = load_images(dataset_path,test_labels_file,train_labels_file)
print("Images loaded successfully!")
#Now, I'm going to save some things in variables for a clearer code
train_image_batch = train_and_test_sets['train_image_batch']
train_label_batch = train_and_test_sets['train_label_batch']
test_image_batch = train_and_test_sets['test_image_batch']
test_label_batch = train_and_test_sets['test_label_batch']
"""THIS IS FOR SHOWING THE SETS, JUST FOR DEBUGGING
with tf.Session() as sess:
    # initialize the variables
    sess.run(tf.global_variables_initializer())
    # initialize the queue threads to start to shovel data
    coord = tf.train.Coordinator()
    threads = tf.train.start_queue_runners(coord=coord)
    print("from the train set:")
    for i in range(31):
        print(sess.run(train_image_batch))
    print("from the test set:")
    for i in range(11):
        print(sess.run(test_label_batch))
    # stop our queue threads and properly close the session
    coord.request_stop()
    coord.join(threads)
    sess.close()
"""
sess = tf.Session()
sess = tf.InteractiveSession()
#Firstly we get the prediction
prediction = convolutional_neural_network(x)
#Cross Entropy is what we are going to try to reduce
cross_entropy = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(prediction, y))
train_step = tf.train.AdamOptimizer(1e-4).minimize(cross_entropy)
correct_prediction = tf.equal(tf.argmax(prediction,1), tf.argmax(y, 1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
#----------PROBLEM HERE
sess.run(tf.global_variables_initializer())
tf.train.start_queue_runners(sess=sess)
# Here is where the training is going to be done
train_images = sess.run(train_image_batch)
train_labels = sess.run(train_label_batch)
test_images = sess.run(test_image_batch)
test_labels = sess.run(test_label_batch)
with sess.as_default():
    index_for_batch = 1
    for i in range(50):
        print("generating batches")
        #batch_image = train_image_batch[index_for_batch].eval(session=sess)
        #batch_label = train_label_batch[index_for_batch].eval(session=sess)
        print("generated")
        if (i % 5 == 0) and (i != 0):
            train_accuracy = accuracy.eval(feed_dict={
                x: train_images, y: train_labels, keep_prob: 1.0})
            print("step %d, training accuracy %g" % (i, train_accuracy))
        print("*********Doing training step***********")
        train_step.run(feed_dict={x: train_images, y: train_labels, keep_prob: 0.5})
        if (index_for_batch + 1 > len(train_image_batch)):
            index_for_batch = 1
        else:
            index_for_batch = index_for_batch + 1
# Here we are going to test the accuracy of the training
print("test accuracy %g" % accuracy.eval(feed_dict={
    x: test_images, y: test_labels, keep_prob: 1.0}))
Edit: the error, with the operation names:
Traceback (most recent call last):
File "/usr/lib/python3.5/site-packages/tensorflow/python/client/session.py", line 1021, in _do_call
return fn(*args)
File "/usr/lib/python3.5/site-packages/tensorflow/python/client/session.py", line 1003, in _run_fn
status, run_metadata)
File "/usr/lib/python3.5/contextlib.py", line 66, in __exit__
next(self.gen)
File "/usr/lib/python3.5/site-packages/tensorflow/python/framework/errors_impl.py", line 469, in raise_exception_on_not_ok_status
pywrap_tensorflow.TF_GetCode(status))
tensorflow.python.framework.errors_impl.InvalidArgumentError: Expected begin[0] == 0 (got -1) and size[0] == 0 (got 1) when input.dim_size(0) == 0
[[Node: Slice_1 = Slice[Index=DT_INT32, T=DT_INT32, _device="/job:localhost/replica:0/task:0/cpu:0"](Shape_2, Slice_1/begin, Slice_1/size)]]
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "convolutional_net.py", line 295, in <module>
train_step.run(feed_dict={x: train_images, y: train_labels, keep_prob: 0.5})
File "/usr/lib/python3.5/site-packages/tensorflow/python/framework/ops.py", line 1449, in run
_run_using_default_session(self, feed_dict, self.graph, session)
File "/usr/lib/python3.5/site-packages/tensorflow/python/framework/ops.py", line 3668, in _run_using_default_session
session.run(operation, feed_dict)
File "/usr/lib/python3.5/site-packages/tensorflow/python/client/session.py", line 766, in run
run_metadata_ptr)
File "/usr/lib/python3.5/site-packages/tensorflow/python/client/session.py", line 964, in _run
feed_dict_string, options, run_metadata)
File "/usr/lib/python3.5/site-packages/tensorflow/python/client/session.py", line 1014, in _do_run
target_list, options, run_metadata)
File "/usr/lib/python3.5/site-packages/tensorflow/python/client/session.py", line 1034, in _do_call
raise type(e)(node_def, op, message)
tensorflow.python.framework.errors_impl.InvalidArgumentError: Expected begin[0] == 0 (got -1) and size[0] == 0 (got 1) when input.dim_size(0) == 0
[[Node: Slice_1 = Slice[Index=DT_INT32, T=DT_INT32, _device="/job:localhost/replica:0/task:0/cpu:0"](Shape_2, Slice_1/begin, Slice_1/size)]]
Caused by op 'Slice_1', defined at:
File "convolutional_net.py", line 264, in <module>
cross_entropy = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(prediction, y))
File "/usr/lib/python3.5/site-packages/tensorflow/python/ops/nn_ops.py", line 1443, in softmax_cross_entropy_with_logits
labels = _flatten_outer_dims(labels)
File "/usr/lib/python3.5/site-packages/tensorflow/python/ops/nn_ops.py", line 1245, in _flatten_outer_dims
array_ops.shape(logits), [math_ops.sub(rank, 1)], [1])
File "/usr/lib/python3.5/site-packages/tensorflow/python/ops/array_ops.py", line 484, in slice
return gen_array_ops._slice(input_, begin, size, name=name)
File "/usr/lib/python3.5/site-packages/tensorflow/python/ops/gen_array_ops.py", line 2868, in _slice
name=name)
File "/usr/lib/python3.5/site-packages/tensorflow/python/framework/op_def_library.py", line 759, in apply_op
op_def=op_def)
File "/usr/lib/python3.5/site-packages/tensorflow/python/framework/ops.py", line 2240, in create_op
original_op=self._default_original_op, op_def=op_def)
File "/usr/lib/python3.5/site-packages/tensorflow/python/framework/ops.py", line 1128, in __init__
self._traceback = _extract_stack()
InvalidArgumentError (see above for traceback): Expected begin[0] == 0 (got -1) and size[0] == 0 (got 1) when input.dim_size(0) == 0
[[Node: Slice_1 = Slice[Index=DT_INT32, T=DT_INT32, _device="/job:localhost/replica:0/task:0/cpu:0"](Shape_2, Slice_1/begin, Slice_1/size)]]
Best Answer
Reading through the code, the problem stems from the shapes of the arguments to tf.nn.softmax_cross_entropy_with_logits(). According to the documentation:

logits and labels must have the same shape [batch_size, num_classes] and the same dtype (either float16, float32, or float64).
Your code calls tf.nn.softmax_cross_entropy_with_logits(prediction, y), so let's look at the shapes of the arguments:

prediction is the value returned from convolutional_neural_network(x), and it has shape [batch_size, N_CLASSES]. (The placeholder for x represents batch_size as None, so it can be dynamic.)

y is defined as y = tf.placeholder(tf.int32). A placeholder defined without a shape argument has no shape information, so its shape is statically unknown (which partly explains the unhelpful error message; see below for more). To figure out the actual shape of y, we can look at how the placeholder is fed, and it appears you feed it a list of integers parsed from the input file, where each integer is the true label of the corresponding example, i.e. a vector of shape [batch_size].
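To make the mismatch concrete, here is a tiny NumPy sketch (toy values, not the asker's actual data) of the two shapes involved:

```python
import numpy as np

batch_size, n_classes = 5, 2

# Logits as produced by the network: one row of class scores per example.
logits = np.zeros((batch_size, n_classes), dtype=np.float32)

# Labels as parsed from the label file: one integer class index per example.
labels = np.array([0, 1, 1, 0, 1], dtype=np.int32)

print(logits.shape)  # (5, 2): what softmax_cross_entropy_with_logits expects for BOTH arguments
print(labels.shape)  # (5,): what sparse_softmax_cross_entropy_with_logits expects for the labels
```

The dense loss requires both arguments to be [batch_size, num_classes]; feeding it a flat integer vector is what triggers the error.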
To fix the problem, you should replace tf.nn.softmax_cross_entropy_with_logits() with its sparse counterpart, tf.nn.sparse_softmax_cross_entropy_with_logits(), which can handle input data in the format you are using:

cross_entropy = tf.reduce_mean(
    tf.nn.sparse_softmax_cross_entropy_with_logits(prediction, y))
(An alternative would be to use tf.one_hot(y, N_CLASSES) to convert y into the one-hot encoding that tf.nn.softmax_cross_entropy_with_logits() expects, but this can be less efficient, because it has to materialize a potentially large matrix for the target values.)
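The one-hot conversion just mentioned can be sketched in plain NumPy (an illustration of the encoding, not TensorFlow's implementation): each integer label k becomes a row with a 1 in column k.

```python
import numpy as np

def one_hot(labels, n_classes):
    """Turn a [batch] vector of integer class indices into a [batch, n_classes] matrix."""
    out = np.zeros((len(labels), n_classes), dtype=np.float32)
    out[np.arange(len(labels)), labels] = 1.0
    return out

encoded = one_hot(np.array([0, 1, 1]), 2)
print(encoded)
# [[1. 0.]
#  [0. 1.]
#  [0. 1.]]
```

After this conversion the labels have the same [batch_size, num_classes] shape as the logits, which is what the dense loss requires.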
Note that the reason this problem only surfaced as a runtime error is the way the tf.placeholder() for y was defined, with no static shape. If you define it as a vector, you get the error at graph-construction time instead:
# `y` is a (variable-length) vector.
y = tf.placeholder(tf.int32, shape=[None])
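For intuition, the per-example quantity that the sparse op computes can be written out in a few lines of NumPy (an illustrative sketch over plain arrays, not the TensorFlow kernel):

```python
import numpy as np

def sparse_softmax_xent(logits, labels):
    """Per-example -log(softmax(logits)[true class]), taking integer labels directly."""
    shifted = logits - logits.max(axis=1, keepdims=True)  # shift for numerical stability
    log_probs = shifted - np.log(np.exp(shifted).sum(axis=1, keepdims=True))
    return -log_probs[np.arange(len(labels)), labels]

logits = np.array([[2.0, 0.0], [0.0, 2.0]])  # each row scores 2 classes
labels = np.array([0, 1])                    # true class index per row
losses = sparse_softmax_xent(logits, labels)
print(losses)  # both losses are small: each row's true class has the higher logit
```

Note that the labels stay a flat [batch_size] vector throughout, which is exactly the format the asker's input pipeline produces.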
Regarding "python - Tensorflow error when doing train_step.run()", a similar question can be found on Stack Overflow: https://stackoverflow.com/questions/41618564/