
python - Yolo Darkflow error: tensorflow.python.framework.errors_impl.InvalidArgumentError: Invalid name


When I run the code, I get this output:

 %Run run_img.py
/usr/lib/python3.5/importlib/_bootstrap.py:222: RuntimeWarning: compiletime version 3.4 of module 'tensorflow.python.framework.fast_tensor_util' does not match runtime version 3.5
return f(*args, **kwds)
/usr/lib/python3.5/importlib/_bootstrap.py:222: RuntimeWarning: builtins.type size changed, may indicate binary incompatibility. Expected 432, got 412
return f(*args, **kwds)
Traceback (most recent call last):
  File "/home/pi/Desktop/darkflow-master/run_img.py", line 9, in <module>
    from darkflow.net.build import TFNet
  File "/home/pi/Desktop/darkflow-master/darkflow/net/build.py", line 5, in <module>
    from .ops import op_create, identity
  File "/home/pi/Desktop/darkflow-master/darkflow/net/ops/__init__.py", line 1, in <module>
    from .simple import *
  File "/home/pi/Desktop/darkflow-master/darkflow/net/ops/simple.py", line 1, in <module>
    import tensorflow.contrib.slim as slim
  File "/home/pi/.local/lib/python3.5/site-packages/tensorflow/contrib/__init__.py", line 40, in <module>
    from tensorflow.contrib import distribute
  File "/home/pi/.local/lib/python3.5/site-packages/tensorflow/contrib/distribute/__init__.py", line 33, in <module>
    from tensorflow.contrib.distribute.python.tpu_strategy import TPUStrategy
  File "/home/pi/.local/lib/python3.5/site-packages/tensorflow/contrib/distribute/python/tpu_strategy.py", line 27, in <module>
    from tensorflow.contrib.tpu.python.ops import tpu_ops
  File "/home/pi/.local/lib/python3.5/site-packages/tensorflow/contrib/tpu/__init__.py", line 69, in <module>
    from tensorflow.contrib.tpu.python.ops.tpu_ops import *
  File "/home/pi/.local/lib/python3.5/site-packages/tensorflow/contrib/tpu/python/ops/tpu_ops.py", line 39, in <module>
    resource_loader.get_path_to_datafile("_tpu_ops.so"))
  File "/home/pi/.local/lib/python3.5/site-packages/tensorflow/contrib/util/loader.py", line 56, in load_op_library
    ret = load_library.load_op_library(path)
  File "/home/pi/.local/lib/python3.5/site-packages/tensorflow/python/framework/load_library.py", line 61, in load_op_library
    lib_handle = py_tf.TF_LoadLibrary(library_filename)
tensorflow.python.framework.errors_impl.InvalidArgumentError: Invalid name:
An op that loads optimization parameters into HBM for embedding. Must be
preceded by a ConfigureTPUEmbeddingHost op that sets up the correct
embedding table configuration. For example, this op is used to install
parameters that are loaded from a checkpoint before a training loop is
executed.

parameters: A tensor containing the initial embedding table parameters to use in embedding
lookups using the Adagrad optimization algorithm.
accumulators: A tensor containing the initial embedding table accumulators to use in embedding
lookups using the Adagrad optimization algorithm.
table_name: Name of this table; must match a name in the
TPUEmbeddingConfiguration proto (overrides table_id).
num_shards: Number of shards into which the embedding tables are divided.
shard_id: Identifier of shard for this operation.
table_id: Index of this table in the EmbeddingLayerConfiguration proto
(deprecated).
(Did you use CamelCase?); in OpDef: name: "\nAn op that loads optimization parameters into HBM for embedding. Must be\npreceded by a ConfigureTPUEmbeddingHost op that sets up the correct\nembedding table configuration. For example, this op is used to install\nparameters that are loaded from a checkpoint before a training loop is\nexecuted.\n\nparameters: A tensor containing the initial embedding table parameters to use in embedding\nlookups using the Adagrad optimization algorithm.\naccumulators: A tensor containing the initial embedding table accumulators to use in embedding\nlookups using the Adagrad optimization algorithm.\ntable_name: Name of this table; must match a name in the\n TPUEmbeddingConfiguration proto (overrides table_id).\nnum_shards: Number of shards into which the embedding tables are divided.\nshard_id: Identifier of shard for this operation.\ntable_id: Index of this table in the EmbeddingLayerConfiguration proto\n (deprecated).\n" input_arg { ... } attr { ... } summary: "..." description: "..." is_stateful: true
[the same docstring is repeated verbatim in every name, description, type_attr, number_attr, and type_list_attr field of the OpDef dump; truncated here]
>>>

I trained my own model and I am running it on a Raspberry Pi 3 Model B. The same code runs on my Windows machine, and it used to work on this exact Raspberry Pi; I re-flashed the SD card in between.

I think the error occurs while importing darkflow.net.build.
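
The import chain in the traceback ends inside tensorflow.contrib, so the same failure should be reproducible without darkflow at all (a minimal check, assuming only that TensorFlow is on the path):

python3 -c "import tensorflow.contrib.slim"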

I cloned the latest branch from GitHub (March 16) and built it with:

python3 setup.py build_ext --inplace

The code I am trying to run:

import cv2
from darkflow.net.build import TFNet
import numpy as np

from keras.models import load_model

# Second-stage classifier applied to each YOLO detection
model = load_model('custom-2/svhn-multi-digit-24-09-F1-ds.h5')

option = {
    'model': 'custom-2/yolo-obj.cfg',
    'load': 'custom-2/yolo-obj_2200.weights',
    'threshold': 0.30,
    'gpu': 1.0
}

tfnet = TFNet(option)
colors = [tuple(255 * np.random.rand(3)) for i in range(5)]

frame = cv2.imread("custom-2/3.jpg", 1)
frame = cv2.resize(frame, None, fx=0.5, fy=0.5)
#frame = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)

results = tfnet.return_predict(frame)
for color, result in zip(colors, results):
    tl = (result['topleft']['x'], result['topleft']['y'])
    br = (result['bottomright']['x'], result['bottomright']['y'])
    # Crop the detection, classify the digits, and draw the box and label
    img = frame[tl[1]:br[1], tl[0]:br[0]]
    img = cv2.resize(img, (64, 64))
    img = img[np.newaxis, ...]
    res = model.predict(img)
    label = str(np.argmax(res[0])) + "," + str(np.argmax(res[1]))
    frame = cv2.rectangle(frame, tl, br, color, 7)
    frame = cv2.putText(frame, label, tl, cv2.FONT_HERSHEY_COMPLEX, 1, (0, 0, 255), 2)

cv2.imshow('frame', frame)
cv2.waitKey(0)
cv2.destroyAllWindows()

from keras import backend as K
K.clear_session()

Best Answer

Try this solution. I had exactly the same problem as you, and this fixed it for me. (The RuntimeWarnings at the top of your output, compile-time version 3.4 vs. runtime 3.5 and the binary size mismatch, suggest the installed TensorFlow wheel was built against a different Python/ABI than the one on your Pi, which would also explain the garbled OpDef name.)

$ sudo apt-get install python-pip python3-pip
$ sudo pip3 uninstall tensorflow
$ git clone https://github.com/PINTO0309/Tensorflow-bin.git
$ cd Tensorflow-bin
$ sudo pip3 install tensorflow-1.11.0-cp35-cp35m-linux_armv7l.whl
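
After installing, a quick sanity check is to print the version the interpreter actually picks up (it should match the 1.11.0 wheel installed above):

$ python3 -c "import tensorflow as tf; print(tf.__version__)"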

Or use any other method of downgrading TensorFlow to 1.11.0.
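
For example, pinning the version with pip directly; this is only a sketch and assumes a prebuilt 1.11.0 wheel is actually available for your Pi's architecture and Python version (if pip cannot find one, use the wheel from the repository above):

$ sudo pip3 install tensorflow==1.11.0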

Regarding this python - Yolo Darkflow error (tensorflow.python.framework.errors_impl.InvalidArgumentError: Invalid name), a similar question can be found on Stack Overflow: https://stackoverflow.com/questions/55196713/
