
python - How to set the Model Optimizer input shape for a Tacotron model in OpenVINO?

Reposted. Author: 行者123. Updated: 2023-11-28 18:58:25

I am trying to run KeithIto's Tacotron model on Intel OpenVINO with an NCS. The Model Optimizer fails to convert the frozen model to IR format.

After asking on the Intel forums, I was told that the 2018 R5 release did not support GRU, so I changed it to LSTM cells. The retrained model still runs fine in TensorFlow. I also updated my OpenVINO to the 2019 R1 release, but the optimizer still throws errors. The model has two main input nodes: inputs[N, T_in] and input_lengths[N], where N is the batch size, T_in is the number of steps in the input time series, and the values are character IDs; the default shapes are [1, ?] and [1]. The problem is the [1, ?], because the Model Optimizer does not allow dynamic shapes. I tried different values, and it always throws some error.
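Since the Model Optimizer requires static shapes, one common workaround is to pad (or truncate) every character-ID sequence to a fixed T_in on the host before inference, matching the value passed to --input_shape (128 in the mo_tf.py call below). A minimal sketch, assuming a padding ID of 0 (the pad_char_ids helper is illustrative, not part of the Tacotron code):

```python
def pad_char_ids(char_ids, max_len=128, pad_id=0):
    """Pad (or truncate) a sequence of character IDs to a fixed length
    so the model input always has static shape [1, max_len]."""
    clipped = char_ids[:max_len]
    return clipped + [pad_id] * (max_len - len(clipped))

# Batch of 1 with static shape [1, 128]; input_lengths keeps the true length.
inputs = [pad_char_ids([17, 42, 99])]
input_lengths = [3]
```

The model then always sees a [1, 128] tensor, and input_lengths tells it how much of that tensor is real data.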

I tried a frozen graph with the output node "model/griffinlim/Squeeze", which is the final decoder output, and also with "model/inference/dense/BiasAdd" as mentioned in (https://github.com/keithito/tacotron/issues/95#issuecomment-362854371), which is the input to the Griffin-Lim vocoder, so that I can perform the Spectrogram2Wav part outside the model and reduce its complexity.
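If the IR is cut at "model/inference/dense/BiasAdd", the Griffin-Lim stage has to run on the host. A minimal NumPy sketch of generic Griffin-Lim phase reconstruction is below; this is not keithito's exact implementation, and n_fft/hop are placeholder values, not the repository's hyperparameters:

```python
import numpy as np

def stft(x, n_fft=512, hop=128):
    """Short-time Fourier transform with a Hann window."""
    win = np.hanning(n_fft)
    frames = [win * x[i:i + n_fft]
              for i in range(0, len(x) - n_fft + 1, hop)]
    return np.fft.rfft(frames, axis=-1)

def istft(X, n_fft=512, hop=128):
    """Inverse STFT via windowed overlap-add."""
    win = np.hanning(n_fft)
    n = (X.shape[0] - 1) * hop + n_fft
    x, norm = np.zeros(n), np.zeros(n)
    for i, frame in enumerate(np.fft.irfft(X, n=n_fft, axis=-1)):
        x[i * hop:i * hop + n_fft] += win * frame
        norm[i * hop:i * hop + n_fft] += win ** 2
    return x / np.maximum(norm, 1e-8)

def griffin_lim(magnitude, n_iter=30, n_fft=512, hop=128):
    """Iteratively estimate a phase consistent with the magnitude spectrogram."""
    angles = np.exp(2j * np.pi * np.random.rand(*magnitude.shape))
    for _ in range(n_iter):
        signal = istft(magnitude * angles, n_fft, hop)
        angles = np.exp(1j * np.angle(stft(signal, n_fft, hop)))
    return istft(magnitude * angles, n_fft, hop)
```

Feeding the network's magnitude output (frames x frequency bins) into griffin_lim would then yield a waveform without the vocoder graph nodes ever reaching the Model Optimizer.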

C:\Program Files (x86)\IntelSWTools\openvino\deployment_tools\model_optimizer>python mo_tf.py --input_model "D:\Programming\LSTM\logs-tacotron\freezeinf.pb" --freeze_placeholder_with_value "input_lengths->[1]" --input inputs --input_shape [1,128] --output model/inference/dense/BiasAdd
Model Optimizer arguments:
Common parameters:
- Path to the Input Model: D:\Programming\Thesis\LSTM\logs-tacotron\freezeinf.pb
- Path for generated IR: C:\Program Files (x86)\IntelSWTools\openvino\deployment_tools\model_optimizer\.
- IR output name: freezeinf
- Log level: ERROR
- Batch: Not specified, inherited from the model
- Input layers: inputs
- Output layers: model/inference/dense/BiasAdd
- Input shapes: [1,128]
- Mean values: Not specified
- Scale values: Not specified
- Scale factor: Not specified
- Precision of IR: FP32
- Enable fusing: True
- Enable grouped convolutions fusing: True
- Move mean values to preprocess section: False
- Reverse input channels: False
TensorFlow specific parameters:
- Input model in text protobuf format: False
- Path to model dump for TensorBoard: None
- List of shared libraries with TensorFlow custom layers implementation: None
- Update the configuration file with input/output node names: None
- Use configuration file used to generate the model with Object Detection API: None
- Operations to offload: None
- Patterns to offload: None
- Use the config file: None
Model Optimizer version: 2019.1.0-341-gc9b66a2
[ ERROR ] Shape [ 1 -1 128] is not fully defined for output 0 of "model/inference/post_cbhg/conv_bank/conv1d_8/batch_normalization/batchnorm/mul_1". Use --input_shape with positive integers to override model input shapes.
[ ERROR ] Cannot infer shapes or values for node "model/inference/post_cbhg/conv_bank/conv1d_8/batch_normalization/batchnorm/mul_1".
[ ERROR ] Not all output shapes were inferred or fully defined for node "model/inference/post_cbhg/conv_bank/conv1d_8/batch_normalization/batchnorm/mul_1".
For more information please refer to Model Optimizer FAQ (<INSTALL_DIR>/deployment_tools/documentation/docs/MO_FAQ.html), question #40.
[ ERROR ]
[ ERROR ] It can happen due to bug in custom shape infer function <function tf_eltwise_ext.<locals>.<lambda> at 0x000001F00598FE18>.
[ ERROR ] Or because the node inputs have incorrect values/shapes.
[ ERROR ] Or because input shapes are incorrect (embedded to the model or passed via --input_shape).
[ ERROR ] Run Model Optimizer with --log_level=DEBUG for more information.
[ ERROR ] Exception occurred during running replacer "REPLACEMENT_ID" (<class 'extensions.middle.PartialInfer.PartialInfer'>): Stopped shape/value propagation at "model/inference/post_cbhg/conv_bank/conv1d_8/batch_normalization/batchnorm/mul_1" node.
For more information please refer to Model Optimizer FAQ (<INSTALL_DIR>/deployment_tools/documentation/docs/MO_FAQ.html), question #38.

I also tried different ways of freezing the graph.

Method 1: Using the freeze_graph.py provided with TensorFlow, after dumping the graph with:

tf.train.write_graph(self.session.graph.as_graph_def(), "models/", "graph.pb", as_text=True)

followed by:

python freeze_graph.py --input_graph .\models\graph.pb  --output_node_names "model/griffinlim/Squeeze" --output_graph .\logs-tacotron\freezeinf.pb --input_checkpoint .\logs-tacotron\model.ckpt-33000 --input_binary=true

Method 2: Using the following code after loading the model:

from tensorflow.python.framework import graph_io  # provides write_graph

frozen = tf.graph_util.convert_variables_to_constants(
    self.session, self.session.graph_def, ["model/inference/dense/BiasAdd"])  # or "model/griffinlim/Squeeze"
graph_io.write_graph(frozen, "models/", "freezeinf.pb", as_text=False)

I expected the BatchNormalization and Dropout layers to be removed after freezing, but judging from the errors they still seem to be present.

Environment

OS: Windows 10 Pro

Python 3.6.5

TensorFlow 1.12.0

OpenVINO 2019 R1 release

Can anyone help with the Optimizer errors above?

Best Answer

OpenVINO does not support this model yet. We will keep you informed of updates.

Regarding "python - How to set the Model Optimizer input shape for a Tacotron model in OpenVINO?", a similar question can be found on Stack Overflow: https://stackoverflow.com/questions/55611848/
