I have been trying to implement a CNN using TensorFlow. I made a dataset in almost the same format as CIFAR-10, but containing three classes in total. Here is a link, and I also got help from this page. My code gives me the error below and I can't debug it. Please help. Thanks.
tensorflow/core/framework/op_kernel.cc:993] Invalid argument: Received a label value of 253 which is outside the valid range of [0, 3). Label values: 11 121 3 59 194 190 239 11 207 33 138 60 186 63 156 250 187 61 223 60 180 40 186 187 251 200 66 154 253 60 245 47 189 168 86 93 61 62 61 62 52 150 94 172 143 23 60 142 59 28 60 149 15 100 248 149 196 189 159 212 178 152 65 189 9 241 189 62 189 21 60 244 47 48 196 47 66 56 101 22 190 190 60 91 204 21 147 61 75 223 27 168 223 149 61 82 246 186 190 211 190 186 125 103 162 134 61 202 239 189 32 188 90 187 189 172 75 200 76 122 11 46 72 252 190 63 118 189
Traceback (most recent call last):
File "cifar10_train.py", line 127, in <module>
tf.app.run()
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/platform/app.py", line 44, in run
_sys.exit(main(_sys.argv[:1] + flags_passthrough))
File "cifar10_train.py", line 123, in main
train()
File "cifar10_train.py", line 115, in train
mon_sess.run(train_op)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/training/monitored_session.py", line 462, in run
run_metadata=run_metadata)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/training/monitored_session.py", line 786, in run
run_metadata=run_metadata)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/training/monitored_session.py", line 744, in run
return self._sess.run(*args, **kwargs)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/training/monitored_session.py", line 891, in run
run_metadata=run_metadata)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/training/monitored_session.py", line 744, in run
return self._sess.run(*args, **kwargs)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/client/session.py", line 767, in run
run_metadata_ptr)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/client/session.py", line 965, in _run
feed_dict_string, options, run_metadata)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/client/session.py", line 1015, in _do_run
target_list, options, run_metadata)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/client/session.py", line 1035, in _do_call
raise type(e)(node_def, op, message)
tensorflow.python.framework.errors_impl.InvalidArgumentError: Received a label value of 253 which is outside the valid range of [0, 3). Label values: 11 121 3 59 194 190 239 11 207 33 138 60 186 63 156 250 187 61 223 60 180 40 186 187 251 200 66 154 253 60 245 47 189 168 86 93 61 62 61 62 52 150 94 172 143 23 60 142 59 28 60 149 15 100 248 149 196 189 159 212 178 152 65 189 9 241 189 62 189 21 60 244 47 48 196 47 66 56 101 22 190 190 60 91 204 21 147 61 75 223 27 168 223 149 61 82 246 186 190 211 190 186 125 103 162 134 61 202 239 189 32 188 90 187 189 172 75 200 76 122 11 46 72 252 190 63 118 189
[[Node: cross_entropy_per_example/cross_entropy_per_example = SparseSoftmaxCrossEntropyWithLogits[T=DT_FLOAT, Tlabels=DT_INT64, _device="/job:localhost/replica:0/task:0/cpu:0"](softmax_linear/softmax_linear, Cast_4)]]
Caused by op u'cross_entropy_per_example/cross_entropy_per_example', defined at:
File "cifar10_train.py", line 127, in <module>
tf.app.run()
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/platform/app.py", line 44, in run
_sys.exit(main(_sys.argv[:1] + flags_passthrough))
File "cifar10_train.py", line 123, in main
train()
File "cifar10_train.py", line 75, in train
loss = cifar10.loss(logits, labels)
File "/home/nitakshi/Downloads/models-master/tutorials/image/cifar10/cifar10.py", line 309, in loss
labels=labels, logits=logits, name='cross_entropy_per_example')
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/nn_ops.py", line 1713, in sparse_softmax_cross_entropy_with_logits
precise_logits, labels, name=name)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/gen_nn_ops.py", line 2378, in _sparse_softmax_cross_entropy_with_logits
features=features, labels=labels, name=name)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/op_def_library.py", line 763, in apply_op
op_def=op_def)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/ops.py", line 2327, in create_op
original_op=self._default_original_op, op_def=op_def)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/ops.py", line 1226, in __init__
self._traceback = _extract_stack()
InvalidArgumentError (see above for traceback): Received a label value of 253 which is outside the valid range of [0, 3). Label values: 11 121 3 59 194 190 239 11 207 33 138 60 186 63 156 250 187 61 223 60 180 40 186 187 251 200 66 154 253 60 245 47 189 168 86 93 61 62 61 62 52 150 94 172 143 23 60 142 59 28 60 149 15 100 248 149 196 189 159 212 178 152 65 189 9 241 189 62 189 21 60 244 47 48 196 47 66 56 101 22 190 190 60 91 204 21 147 61 75 223 27 168 223 149 61 82 246 186 190 211 190 186 125 103 162 134 61 202 239 189 32 188 90 187 189 172 75 200 76 122 11 46 72 252 190 63 118 189
[[Node: cross_entropy_per_example/cross_entropy_per_example = SparseSoftmaxCrossEntropyWithLogits[T=DT_FLOAT, Tlabels=DT_INT64, _device="/job:localhost/replica:0/task:0/cpu:0"](softmax_linear/softmax_linear, Cast_4)]]
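For what it's worth, the error itself is easy to reproduce in isolation; a minimal sketch (assuming TF 1.x on CPU, matching the traceback) fails the same way whenever a label falls outside [0, NUM_CLASSES):

import tensorflow as tf  # TF 1.x, as in the traceback above

# Three-class logits with one deliberately out-of-range label (253).
logits = tf.constant([[0.1, 0.2, 0.7]], dtype=tf.float32)   # shape [1, 3]
labels = tf.constant([253], dtype=tf.int64)                 # valid range is [0, 3)
loss = tf.nn.sparse_softmax_cross_entropy_with_logits(labels=labels, logits=logits)

with tf.Session() as sess:
    sess.run(loss)  # raises InvalidArgumentError: 253 is outside [0, 3)

So the real question is why my labels come out as values like 11, 121 and 253 instead of 0, 1 or 2.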
CIFAR input code:
# Copyright 2015 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""Routine for decoding the CIFAR-10 binary file format."""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import os
from six.moves import xrange # pylint: disable=redefined-builtin
import tensorflow as tf
# Process images of this size. Note that this differs from the original CIFAR
# image size of 32 x 32. If one alters this number, then the entire model
# architecture will change and any model would need to be retrained.
IMAGE_SIZE = 32
# Global constants describing the CIFAR-10 data set.
NUM_CLASSES = 3
NUM_EXAMPLES_PER_EPOCH_FOR_TRAIN = 50000
NUM_EXAMPLES_PER_EPOCH_FOR_EVAL = 10000
#NUM_EXAMPLES_PER_EPOCH_FOR_TRAIN = 2000
#NUM_EXAMPLES_PER_EPOCH_FOR_EVAL = 2000
def read_cifar10(filename_queue):
  """Reads and parses examples from CIFAR10 data files.
  Recommendation: if you want N-way read parallelism, call this function
  N times. This will give you N independent Readers reading different
  files & positions within those files, which will give better mixing of
  examples.
  Args:
    filename_queue: A queue of strings with the filenames to read from.
  Returns:
    An object representing a single example, with the following fields:
      height: number of rows in the result (32)
      width: number of columns in the result (32)
      depth: number of color channels in the result (3)
      key: a scalar string Tensor describing the filename & record number
        for this example.
      label: an int32 Tensor with the label in the range 0..9.
      uint8image: a [height, width, depth] uint8 Tensor with the image data
  """

  class CIFAR10Record(object):
    pass
  result = CIFAR10Record()
  print(result)

  # Dimensions of the images in the CIFAR-10 dataset.
  # See http://www.cs.toronto.edu/~kriz/cifar.html for a description of the
  # input format.
  label_bytes = 1  # 2 for CIFAR-100
  result.height = 32
  result.width = 32
  result.depth = 3
  image_bytes = result.height * result.width * result.depth
  print('img bytes@@@@@@')
  print(image_bytes)
  # Every record consists of a label followed by the image, with a
  # fixed number of bytes for each.
  record_bytes = label_bytes + image_bytes

  # Read a record, getting filenames from the filename_queue. No
  # header or footer in the CIFAR-10 format, so we leave header_bytes
  # and footer_bytes at their default of 0.
  reader = tf.FixedLengthRecordReader(record_bytes=record_bytes)
  result.key, value = reader.read(filename_queue)

  # Convert from a string to a vector of uint8 that is record_bytes long.
  record_bytes = tf.decode_raw(value, tf.uint8)

  # The first bytes represent the label, which we convert from uint8->int32.
  #result.label = tf.cast(
  #    tf.strided_slice(record_bytes, [0], [label_bytes]), tf.int32)
  result.label = tf.cast(
      tf.slice(record_bytes, [0], [label_bytes]), tf.int32)
  print('result label############:')
  print(result.label)

  # The remaining bytes after the label represent the image, which we reshape
  # from [depth * height * width] to [depth, height, width].
  depth_major = tf.reshape(
      tf.strided_slice(record_bytes, [label_bytes],
                       [label_bytes + image_bytes]),
      [result.depth, result.height, result.width])
  # Convert from [depth, height, width] to [height, width, depth].
  result.uint8image = tf.transpose(depth_major, [1, 2, 0])

  return result
def _generate_image_and_label_batch(image, label, min_queue_examples,
                                    batch_size, shuffle):
  """Construct a queued batch of images and labels.
  Args:
    image: 3-D Tensor of [height, width, 3] of type.float32.
    label: 1-D Tensor of type.int32
    min_queue_examples: int32, minimum number of samples to retain
      in the queue that provides of batches of examples.
    batch_size: Number of images per batch.
    shuffle: boolean indicating whether to use a shuffling queue.
  Returns:
    images: Images. 4D tensor of [batch_size, height, width, 3] size.
    labels: Labels. 1D tensor of [batch_size] size.
  """
  # Create a queue that shuffles the examples, and then
  # read 'batch_size' images + labels from the example queue.
  num_preprocess_threads = 16
  if shuffle:
    images, label_batch = tf.train.shuffle_batch(
        [image, label],
        batch_size=batch_size,
        num_threads=num_preprocess_threads,
        capacity=min_queue_examples + 3 * batch_size,
        min_after_dequeue=min_queue_examples)
  else:
    images, label_batch = tf.train.batch(
        [image, label],
        batch_size=batch_size,
        num_threads=num_preprocess_threads,
        capacity=min_queue_examples + 3 * batch_size)

  # Display the training images in the visualizer.
  tf.summary.image('images', images)
  print(images)

  return images, tf.reshape(label_batch, [batch_size])
def distorted_inputs(data_dir, batch_size):
  """Construct distorted input for CIFAR training using the Reader ops.
  Args:
    data_dir: Path to the CIFAR-10 data directory.
    batch_size: Number of images per batch.
  Returns:
    images: Images. 4D tensor of [batch_size, IMAGE_SIZE, IMAGE_SIZE, 3] size.
    labels: Labels. 1D tensor of [batch_size] size.
  """
  filenames = [os.path.join(data_dir, 'data_batch_%d.bin' % i)
               #filenames = [os.path.join(data_dir, '28febtrain')
               for i in xrange(1, 5)]
  #filenames = [os.path.join(data_dir, '28febtrain')]
  for f in filenames:
    if not tf.gfile.Exists(f):
      raise ValueError('Failed to find file: ' + f)

  # Create a queue that produces the filenames to read.
  filename_queue = tf.train.string_input_producer(filenames)

  # Read examples from files in the filename queue.
  read_input = read_cifar10(filename_queue)
  reshaped_image = tf.cast(read_input.uint8image, tf.float32)

  height = IMAGE_SIZE
  width = IMAGE_SIZE
  print('img height, width')
  print(height)
  print(width)

  # Image processing for training the network. Note the many random
  # distortions applied to the image.

  # Randomly crop a [height, width] section of the image.
  distorted_image = tf.random_crop(reshaped_image, [height, width, 3])

  # Randomly flip the image horizontally.
  distorted_image = tf.image.random_flip_left_right(distorted_image)

  # Because these operations are not commutative, consider randomizing
  # the order their operation.
  distorted_image = tf.image.random_brightness(distorted_image,
                                               max_delta=63)
  distorted_image = tf.image.random_contrast(distorted_image,
                                             lower=0.2, upper=1.8)

  # Subtract off the mean and divide by the variance of the pixels.
  float_image = tf.image.per_image_standardization(distorted_image)

  # Set the shapes of tensors.
  float_image.set_shape([height, width, 3])
  read_input.label.set_shape([1])

  # Ensure that the random shuffling has good mixing properties.
  min_fraction_of_examples_in_queue = 0.4
  min_queue_examples = int(NUM_EXAMPLES_PER_EPOCH_FOR_TRAIN *
                           min_fraction_of_examples_in_queue)
  print ('Filling queue with %d CIFAR images before starting to train. '
         'This will take a few minutes.' % min_queue_examples)

  # Generate a batch of images and labels by building up a queue of examples.
  return _generate_image_and_label_batch(float_image, read_input.label,
                                         min_queue_examples, batch_size,
                                         shuffle=True)
def inputs(eval_data, data_dir, batch_size):
  """Construct input for CIFAR evaluation using the Reader ops.
  Args:
    eval_data: bool, indicating if one should use the train or eval data set.
    data_dir: Path to the CIFAR-10 data directory.
    batch_size: Number of images per batch.
  Returns:
    images: Images. 4D tensor of [batch_size, IMAGE_SIZE, IMAGE_SIZE, 3] size.
    labels: Labels. 1D tensor of [batch_size] size.
  """
  if not eval_data:
    filenames = [os.path.join(data_dir, 'data_batch_%d.bin' % i)
                 #filenames = [os.path.join(data_dir, '28febtrain')
                 for i in xrange(1, 6)]
    num_examples_per_epoch = NUM_EXAMPLES_PER_EPOCH_FOR_TRAIN
  else:
    filenames = [os.path.join(data_dir, '28febtrain')]
    num_examples_per_epoch = NUM_EXAMPLES_PER_EPOCH_FOR_EVAL

  for f in filenames:
    if not tf.gfile.Exists(f):
      raise ValueError('Failed to find file: ' + f)

  # Create a queue that produces the filenames to read.
  filename_queue = tf.train.string_input_producer(filenames)

  # Read examples from files in the filename queue.
  read_input = read_cifar10(filename_queue)
  reshaped_image = tf.cast(read_input.uint8image, tf.float32)

  height = IMAGE_SIZE
  width = IMAGE_SIZE

  # Image processing for evaluation.
  # Crop the central [height, width] of the image.
  resized_image = tf.image.resize_image_with_crop_or_pad(reshaped_image,
                                                         height, width)

  # Subtract off the mean and divide by the variance of the pixels.
  float_image = tf.image.per_image_standardization(resized_image)

  # Set the shapes of tensors.
  float_image.set_shape([height, width, 3])
  read_input.label.set_shape([1])

  # Ensure that the random shuffling has good mixing properties.
  min_fraction_of_examples_in_queue = 0.4
  min_queue_examples = int(num_examples_per_epoch *
                           min_fraction_of_examples_in_queue)

  # Generate a batch of images and labels by building up a queue of examples.
  return _generate_image_and_label_batch(float_image, read_input.label,
                                         min_queue_examples, batch_size,
                                         shuffle=False)
The CIFAR train code is the same as the code on GitHub.
End of code. The rest of the functions this file calls are the same as in the GitHub link provided above. Here is what the data actually looks like; I am only showing a few values from one line of the file, which is part of a huge dataset. The first column is the label value and the rest are the values to be processed:
0 -0.3056471033 -0.0466023552 0.0033290606 0.0116261395 -0.0136461613 0.0064174382 0.0084394668 0.0064852377 -0.0003195472 0.0130523434 0.0081351981 -0.0041750822 -0.0044139047 0.0009210015 0.011423628 -0.0033359823 -0.0090784218 -0.0014336071 0.0029341407 -0.0083200129 -0.0014352675 0.002385679 -0.0060231589 0.001362363 0.0051867442 0.001592935 0.014525627 0.0014239945 -0.0030832436 0.0047563972 0.0008349333 -0.0040918221 -0.0061690423 0.009810869 -0.0006399579 -0.002112322 0.0028194289 -0.000801686 0.0012672692 -0.0028961465 0.0027815595 0.0007334416 0.001759698 0.0055782681 -0.0137690884 0.0097706833 0.0119607859 -0.0056124537 -0.0073978555 0.0128119595 0.0083815554
... and so on, 3072 values like this per line; the first one is the label.
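For reference, read_cifar10() above assumes fixed-length binary records (one uint8 label byte followed by 32*32*3 uint8 image bytes), so I assume a text line like the one above would first have to be converted into that layout. A rough sketch of such a conversion (my own assumption, not what I actually ran; it rescales the float features into 0-255 so they fit into uint8, and the file names are only placeholders):

import numpy as np

# Sketch only: convert lines of "<label> <3072 float features>" into the
# CIFAR-10-style binary layout that read_cifar10() expects
# (1 uint8 label byte + 3072 uint8 image bytes per record).
def text_to_cifar_binary(txt_path, bin_path):
    with open(txt_path) as src, open(bin_path, 'wb') as dst:
        for line in src:
            values = line.split()
            if not values:
                continue
            label = int(float(values[0]))          # first column: class index 0..2
            feats = np.asarray(values[1:], dtype=np.float32)
            assert feats.size == 32 * 32 * 3, 'expected 3072 values after the label'
            # Rescale the floats into 0..255 so they fit into uint8 (lossy).
            lo, hi = feats.min(), feats.max()
            pixels = ((feats - lo) / (hi - lo + 1e-8) * 255.0).astype(np.uint8)
            dst.write(np.uint8(label).tobytes())
            dst.write(pixels.tobytes())

# text_to_cifar_binary('28febtrain.txt', 'data_batch_1.bin')  # placeholder paths

If the values were already byte-sized pixel intensities, the rescaling step could be dropped.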
Please help. Thanks.
Best Answer
The traceback doesn't help much on its own, but it looks like you haven't edited the code correctly to change it to 3 classes. For example, you have num_classes set to 10.
If you have new data, then you may need to check your real data to make sure it is correctly labelled with 3 classes.
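A quick way to check, assuming the data really is in the binary layout read_cifar10 expects (one label byte followed by 32*32*3 image bytes per record), is to histogram the label byte of every record; the file name below is just a placeholder:

import numpy as np

# Count label bytes in a CIFAR-style binary file. With 3 classes, any count
# at index 3 or above means mislabeled (or misformatted) records.
record_bytes = 1 + 32 * 32 * 3
raw = np.fromfile('data_batch_1.bin', dtype=np.uint8)       # placeholder path
assert raw.size % record_bytes == 0, 'file is not a whole number of records'
labels = raw.reshape(-1, record_bytes)[:, 0]
print(np.bincount(labels, minlength=3))

If the counts spill far beyond index 2 (as the values in the error message suggest), the file is most likely not in the expected binary format at all, rather than merely mislabelled.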
Regarding python-2.7 - tensorflow/core/framework/op_kernel.cc:993 Invalid argument: Received a label value of 253 which is outside the valid range of [0, 3), we found a similar question on Stack Overflow: https://stackoverflow.com/questions/43061664/