
python - CloudML retraining Inception - received a label value outside the valid range


I am following the flowers tutorial for retraining Inception on Google Cloud ML. I can run the tutorial, train, and predict, and everything works fine.

I then replaced the flowers dataset with my own test dataset: optical character recognition of digit images.


When training the model, I get this error:

Invalid argument: Received a label value of 13 which is outside the valid range of [0, 6). Label values: 6 3 2 7 3 7 6 6 12 6 5 2 3 6 8 8 8 8 4 6 5 13 7 4 8 12 5 2 4 12 12 8 8 8 12 6 4 2 12 4 3 8 2 6 8 12 2 8 4 6 2 4 12 5 5 7 6 2 2 3 2 8 2 5 2 8 2 7 4 12 8 4 2 4 8 2 2 8 2 8 7 6 8 3 5 5 5 8 8 2 5 3 9 8 5 8 3 2 5 4

The training and evaluation datasets are formatted like this:

root@e925cd9502c0:~/MeerkatReader/cloudML# head training_dataGCS.csv
gs://api-project-773889352370-ml/TrainingData/0_2.jpg,H
gs://api-project-773889352370-ml/TrainingData/0_4.jpg,One
gs://api-project-773889352370-ml/TrainingData/0_5.jpg,Five

The dictionary file looks like this:

$ cat cloudML/dict.txt
Eight
F
Five
Forward_slash
Four
H
Nine
One
Seven
Six
Three
Two
Zero

I originally had labels like 1, 2, 3, 4 and /, but I changed them all to strings in case the special characters (especially /) were causing problems. I can see a somewhat similar message here, but that one was about 0-based indexing.

The strange thing about this message is that there really are 13 label types, yet somehow TensorFlow only expects labels in the range [0, 6). My question is: what kind of formatting error could make TensorFlow think there are fewer labels? I can confirm that both sides of the 80-20 train/test split contain every label class (though at different frequencies).
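As a sanity check (a rough sketch; it assumes the eval split lives in an eval_dataGCS.csv next to training_dataGCS.csv, which isn't shown above), the labels actually used in the CSVs can be compared against dict.txt:

# Every label used in the CSVs should appear in dict.txt
# (eval_dataGCS.csv is assumed; only training_dataGCS.csv is shown above)
cut -d, -f2 training_dataGCS.csv eval_dataGCS.csv | sort -u > labels_used.txt
sort dict.txt | comm -3 - labels_used.txt   # any output means a mismatch
wc -l dict.txt                              # should print 13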

I am running the latest Docker image provided by Google:

docker run -it -p "127.0.0.1:8080:8080" --entrypoint=/bin/bash gcr.io/cloud-datalab/datalab:local-20161227

I am submitting the training job with:

# Submit training job.
gcloud beta ml jobs submit training "$JOB_ID" \
--module-name trainer.task \
--package-path trainer \
--staging-bucket "$BUCKET" \
--region us-central1 \
-- \
--output_path "${GCS_PATH}/training" \
--eval_data_paths "${GCS_PATH}/preproc/eval*" \
--train_data_paths "${GCS_PATH}/preproc/train*"

The full error:

Error reported to Coordinator: <class 'tensorflow.python.framework.errors_impl.InvalidArgumentError'>, Received a label value of 13 which is outside the valid range of [0, 6). Label values: 6 3 2 7 3 7 6 6 12 6 5 2 3 6 8 8 8 8 4 6 5 13 7 4 8 12 5 2 4 12 12 8 8 8 12 6 4 2 12 4 3 8 2 6 8 12 2 8 4 6 2 4 12 5 5 7 6 2 2 3 2 8 2 5 2 8 2 7 4 12 8 4 2 4 8 2 2 8 2 8 7 6 8 3 5 5 5 8 8 2 5 3 9 8 5 8 3 2 5 4
 [[Node: evaluate/xentropy/xentropy = SparseSoftmaxCrossEntropyWithLogits[T=DT_FLOAT, Tlabels=DT_INT64, _device="/job:master/replica:0/task:0/cpu:0"](final_ops/input/Wx_plus_b/fully_connected_1/BiasAdd, inputs/Squeeze)]]
Caused by op u'evaluate/xentropy/xentropy', defined at:
  File "/usr/lib/python2.7/runpy.py", line 162, in _run_module_as_main
    "__main__", fname, loader, pkg_name)
  File "/usr/lib/python2.7/runpy.py", line 72, in _run_code
    exec code in run_globals
  File "/root/.local/lib/python2.7/site-packages/trainer/task.py", line 545, in <module>
    tf.app.run()
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/platform/app.py", line 43, in run
    sys.exit(main(sys.argv[:1] + flags_passthrough))
  File "/root/.local/lib/python2.7/site-packages/trainer/task.py", line 308, in main
    run(model, argv)
  File "/root/.local/lib/python2.7/site-packages/trainer/task.py", line 439, in run
    dispatch(args, model, cluster, task)
  File "/root/.local/lib/python2.7/site-packages/trainer/task.py", line 480, in dispatch
    Trainer(args, model, cluster, task).run_training()
  File "/root/.local/lib/python2.7/site-packages/trainer/task.py", line 187, in run_training
    self.args.batch_size)
  File "/root/.local/lib/python2.7/site-packages/trainer/model.py", line 278, in build_train_graph
    return self.build_graph(data_paths, batch_size, GraphMod.TRAIN)
  File "/root/.local/lib/python2.7/site-packages/trainer/model.py", line 256, in build_graph
    loss_value = loss(logits, labels)
  File "/root/.local/lib/python2.7/site-packages/trainer/model.py", line 396, in loss
    logits, labels, name='xentropy')
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/nn_ops.py", line 1544, in sparse_softmax_cross_entropy_with_logits
    precise_logits, labels, name=name)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/gen_nn_ops.py", line 2376, in _sparse_softmax_cross_entropy_with_logits
    features=features, labels=labels, name=name)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/op_def_library.py", line 759, in apply_op
    op_def=op_def)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/ops.py", line 2238, in create_op
    original_op=self._default_original_op, op_def=op_def)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/ops.py", line 1130, in __init__
    self._traceback = _extract_stack()
InvalidArgumentError (see above for traceback): Received a label value of 13 which is outside the valid range of [0, 6). Label values: 6 3 2 7 3 7 6 6 12 6 5 2 3 6 8 8 8 8 4 6 5 13 7 4 8 12 5 2 4 12 12 8 8 8 12 6 4 2 12 4 3 8 2 6 8 12 2 8 4 6 2 4 12 5 5 7 6 2 2 3 2 8 2 5 2 8 2 7 4 12 8 4 2 4 8 2 2 8 2 8 7 6 8 3 5 5 5 8 8 2 5 3 9 8 5 8 3 2 5 4
 [[Node: evaluate/xentropy/xentropy = SparseSoftmaxCrossEntropyWithLogits[T=DT_FLOAT, Tlabels=DT_INT64, _device="/job:master/replica:0/task:0/cpu:0"](final_ops/input/Wx_plus_b/fully_connected_1/BiasAdd, inputs/Squeeze)]]

Everything in my bucket looks fine, and my log events are being saved.

Best Answer

I believe you need to specify --label_count 13 when submitting the training job. The flag needs to go in the second set of flags, after the --, because it has to be passed through to the code you are running rather than consumed by gcloud / Cloud ML.
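Applied to the submit command from the question, that would look something like this (only the final flag is new):

gcloud beta ml jobs submit training "$JOB_ID" \
--module-name trainer.task \
--package-path trainer \
--staging-bucket "$BUCKET" \
--region us-central1 \
-- \
--output_path "${GCS_PATH}/training" \
--eval_data_paths "${GCS_PATH}/preproc/eval*" \
--train_data_paths "${GCS_PATH}/preproc/train*" \
--label_count 13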

The problem is that the TensorFlow training code needs to know how many output logits to produce before it starts stepping through the data, so it can't inspect the intermediate files from the preprocessing step to figure that out.

Let me know if this helps.

Regarding "python - CloudML retraining Inception - received a label value outside the valid range", the original question is on Stack Overflow: https://stackoverflow.com/questions/41601620/
