
tensorflow - I am trying to convert a quantized MobileNet model to a TensorFlow Lite model, but I am running into an error


First, I downloaded a quantized model from MobileNet; it is contained in Mobilenet_v1_1.0_224. Then I did the following:

bazel-bin/tensorflow/contrib/lite/toco/toco \
> --input_files=Sample/mobilenet_v1_1.0_224/quantized_graph.pb \
> --input_format=TENSORFLOW_GRAPHDEF --output_format=TFLITE \
> --output_file=Sample/mobilenet_v1_1.0_224/quantized_graph.tflite --inference_type=QUANTIZED_UINT8 \
> --input_shape=1,224,224,3 \
> --input_array=input \
> --output_array=MobilenetV1/Predictions/Reshape_1 \
> --mean_value=128 \
> --std_value=127

Here is the summary of the graph:

bazel-bin/tensorflow/tools/graph_transforms/summarize_graph --in_graph=Sample/mobilenet_v1_1.0_224/quantized_graph.pb
Found 1 possible inputs: (name=input, type=float(1), shape=[1,224,224,3])
No variables spotted.
Found 1 possible outputs: (name=MobilenetV1/Predictions/Reshape_1, op=Reshape)
Found 4227041 (4.23M) const parameters, 0 (0) variable parameters, and 0 control_edges
Op types used: 91 Const, 27 Add, 27 Relu6, 15 Conv2D, 13 DepthwiseConv2dNative, 13 Mul, 10 Dequantize, 2 Reshape, 1 Identity, 1 Placeholder, 1 BiasAdd, 1 AvgPool, 1 Softmax, 1 Squeeze
To use with tensorflow/tools/benchmark:benchmark_model try these arguments:
bazel run tensorflow/tools/benchmark:benchmark_model -- --graph=Sample/mobilenet_v1_1.0_224/quantized_graph.pb --show_flops --input_layer=input --input_layer_type=float --input_layer_shape=1,224,224,3 --output_layer=MobilenetV1/Predictions/Reshape_1

So during the conversion I ran into the following error:

2018-03-01 23:12:03.353786: I tensorflow/contrib/lite/toco/import_tensorflow.cc:1172] Converting unsupported operation: Dequantize
2018-03-01 23:12:03.354513: I tensorflow/contrib/lite/toco/import_tensorflow.cc:1172] Converting unsupported operation: Dequantize
2018-03-01 23:12:03.355177: I tensorflow/contrib/lite/toco/import_tensorflow.cc:1172] Converting unsupported operation: Dequantize
2018-03-01 23:12:03.355556: I tensorflow/contrib/lite/toco/import_tensorflow.cc:1172] Converting unsupported operation: Dequantize
2018-03-01 23:12:03.355921: I tensorflow/contrib/lite/toco/import_tensorflow.cc:1172] Converting unsupported operation: Dequantize
2018-03-01 23:12:03.356281: I tensorflow/contrib/lite/toco/import_tensorflow.cc:1172] Converting unsupported operation: Dequantize
2018-03-01 23:12:03.356632: I tensorflow/contrib/lite/toco/import_tensorflow.cc:1172] Converting unsupported operation: Dequantize
2018-03-01 23:12:03.357540: I tensorflow/contrib/lite/toco/import_tensorflow.cc:1172] Converting unsupported operation: Dequantize
2018-03-01 23:12:03.358776: I tensorflow/contrib/lite/toco/import_tensorflow.cc:1172] Converting unsupported operation: Dequantize
2018-03-01 23:12:03.360448: I tensorflow/contrib/lite/toco/import_tensorflow.cc:1172] Converting unsupported operation: Dequantize
2018-03-01 23:12:03.366319: I tensorflow/contrib/lite/toco/graph_transformations/graph_transformations.cc:39] Before Removing unused ops: 140 operators, 232 arrays (0 quantized)
2018-03-01 23:12:03.371405: I tensorflow/contrib/lite/toco/graph_transformations/graph_transformations.cc:39] Before general graph transformations: 140 operators, 232 arrays (0 quantized)
2018-03-01 23:12:03.374916: I tensorflow/contrib/lite/toco/graph_transformations/graph_transformations.cc:39] After general graph transformations pass 1: 63 operators, 152 arrays (1 quantized)
2018-03-01 23:12:03.376325: I tensorflow/contrib/lite/toco/graph_transformations/graph_transformations.cc:39] Before pre-quantization graph transformations: 63 operators, 152 arrays (1 quantized)
2018-03-01 23:12:03.377492: F tensorflow/contrib/lite/toco/tooling_util.cc:1272] Array MobilenetV1/MobilenetV1/Conv2d_0/Relu6, which is an input to the DepthwiseConv operator producing the output array MobilenetV1/MobilenetV1/Conv2d_1_depthwise/Relu6, is lacking min/max data, which is necessary for quantization. Either target a non-quantized output format, or change the input graph to contain min/max information, or pass --default_ranges_min= and --default_ranges_max= if you do not care about the accuracy of results.
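The fatal line at the end spells out the options: target a non-quantized output format, use a graph that already carries min/max information, or supply dummy ranges. A minimal sketch of the third option, reusing the same command plus the two flags the error message names; the values 0 and 6 below are placeholder assumptions (chosen only because the graph's activations are Relu6) and will hurt accuracy:

bazel-bin/tensorflow/contrib/lite/toco/toco \
  --input_files=Sample/mobilenet_v1_1.0_224/quantized_graph.pb \
  --input_format=TENSORFLOW_GRAPHDEF --output_format=TFLITE \
  --output_file=Sample/mobilenet_v1_1.0_224/quantized_graph.tflite \
  --inference_type=QUANTIZED_UINT8 \
  --input_shape=1,224,224,3 \
  --input_array=input \
  --output_array=MobilenetV1/Predictions/Reshape_1 \
  --mean_value=128 \
  --std_value=127 \
  --default_ranges_min=0 \
  --default_ranges_max=6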

Thanks for your help.

Best answer

I think you may be referring to the old quantized TensorFlow MobileNet models.

We have updated the quantized MobileNet models available here. The specific link for a depth multiplier of 1.0 and an image size of 224 is this.

Those tar files also come with the already-converted TFLite FlatBuffer models.
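For what it's worth, a hedged sketch of pulling the pre-converted FlatBuffer out of one of those archives; the archive and file names below (mobilenet_v1_1.0_224_quant.tgz and the .tflite inside it) are assumptions about the release naming, not taken from the links above:

# Names are assumptions; substitute the archive actually downloaded from the link above.
tar -xzf mobilenet_v1_1.0_224_quant.tgz
ls *.tflite   # the already-converted TFLite FlatBuffer should appear here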

Hope this helps!

Regarding "tensorflow - I am trying to convert a quantized MobileNet model to a TensorFlow Lite model, but I am running into an error", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/49064492/
