
Are there any methods to get per-layer inference time of an ONNX model?




The code is like this:


import numpy as np
import onnxruntime

# Random input matching the model's expected shape.
onnx_input = np.random.normal(size=[1, 3, 224, 224]).astype(np.float32)
ort_sess = onnxruntime.InferenceSession('model.onnx')
ort_inputs = {ort_sess.get_inputs()[0].name: onnx_input}
ort_outs = ort_sess.run(None, ort_inputs)

I can get the network output from ort_outs, but how can I get the inference time of each layer of the model?


I can get the model graph info by:


import onnx

# Load the model and print a human-readable summary of its graph.
model = onnx.load("model.onnx")
print(onnx.helper.printable_graph(model.graph))

or get the total inference time by:


import time

start = time.time()
ort_outs = ort_sess.run(None, ort_inputs)
end = time.time()
print(end - start)
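
As a side note, a single timed run like this can be noisy. A small sketch (not from the original post, reusing ort_sess and ort_inputs from above) that warms up first and then averages over many runs:

import time

# Warm-up runs so one-time costs (lazy initialization, allocations)
# do not skew the measurement.
for _ in range(5):
    ort_sess.run(None, ort_inputs)

# Average over many timed runs for a more stable estimate.
n_runs = 50
start = time.time()
for _ in range(n_runs):
    ort_sess.run(None, ort_inputs)
print((time.time() - start) / n_runs)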

But I don't know how to get the inference time per layer of the neural network. Thanks!


Best answer:

Please see https://onnxruntime.ai/docs/performance/tune-performance/profiling-tools.html for details on enabling profiling of individual nodes.

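Following that documentation, a minimal sketch looks like this: profiling is enabled through SessionOptions before the session is created, and end_profiling() returns the path of the JSON trace file that ONNX Runtime writes (the exact file name is generated by the runtime):

import numpy as np
import onnxruntime

# Enable per-node profiling before creating the session.
sess_options = onnxruntime.SessionOptions()
sess_options.enable_profiling = True

ort_sess = onnxruntime.InferenceSession('model.onnx', sess_options)

onnx_input = np.random.normal(size=[1, 3, 224, 224]).astype(np.float32)
ort_inputs = {ort_sess.get_inputs()[0].name: onnx_input}
ort_outs = ort_sess.run(None, ort_inputs)

# Stop profiling; this returns the path of the JSON trace file
# containing the per-node timings.
profile_file = ort_sess.end_profiling()
print(profile_file)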


Note that the overall inference time will be meaningless while per-node performance is being measured, because of the overhead of writing out the profiling data.
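The resulting file is a JSON trace in the Chrome trace-event format. Assuming that layout (a list of events, where per-node events carry a "cat" of "Node" and a "dur" duration in microseconds), the per-layer timings can be pulled out roughly like this:

import json

# Load the profile written by end_profiling(); 'profile_file' is the
# path returned in the previous snippet.
with open(profile_file) as f:
    events = json.load(f)

# Keep the per-node events and print each node's duration in microseconds.
for event in events:
    if event.get("cat") == "Node":
        print(event["name"], event["dur"], "us")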


