
python - Incorrect results when running a model in LibTorch that was trained and exported from PyTorch


I am trying to export a trained model together with its weights from PyTorch and run it with LibTorch for inference in C++. However, the output tensors do not match.

The output tensors do have the same shape.

import torch

model = FCN()  # FCN is the model class defined elsewhere in the training code
state_dict = torch.load('/content/gdrive/My Drive/model/trained_model.pth')
model.load_state_dict(state_dict)
example = torch.randn(1, 3, 768, 1024)  # dummy input matching the expected input size
traced_script_module = torch.jit.trace(model, example)
traced_script_module.save('/content/gdrive/My Drive/model/mymodel.pt')

However, tracing produces some warnings, which I suspect may be the cause of the incorrect results.

/usr/local/lib/python3.6/dist-packages/ipykernel_launcher.py:137: TracerWarning: Converting a tensor to a Python index might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!

/usr/local/lib/python3.6/dist-packages/torch/tensor.py:435: RuntimeWarning: Iterating over a tensor might cause the trace to be incorrect. Passing a tensor of different shape won't change the number of iterations executed (and might lead to errors or silently give incorrect results).
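These warnings mean that somewhere in the model's forward() a tensor value is turned into a plain Python number, or a tensor is iterated over, so that value and the loop count get baked into the trace as constants. FCN itself is not shown in the question, so the following is only a hypothetical sketch of the kind of code that produces warnings like these:

import torch
import torch.nn as nn

class Example(nn.Module):  # hypothetical module, not the FCN from the question
    def forward(self, x):
        # Converting a tensor to a Python number: the resulting index is
        # recorded in the trace as a constant.
        idx = int(torch.argmax(x[0, 0, 0]))
        first = x[..., idx]
        # Iterating over a tensor: the number of loop iterations is fixed
        # at trace time, so inputs of a different batch size would silently misbehave.
        acc = torch.zeros_like(x[0])
        for sample in x:
            acc = acc + sample
        return first.sum() + acc.sum()

traced = torch.jit.trace(Example(), torch.randn(1, 3, 8, 8))  # emits similar warnings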

Below is the LibTorch code that produces the output tensor:

#include <torch/script.h>

at::Tensor predict(std::shared_ptr<torch::jit::script::Module> model, at::Tensor &image_tensor) {
    std::vector<torch::jit::IValue> inputs;
    inputs.push_back(image_tensor);

    // Run the traced module; forward() returns an IValue that is converted back to a tensor.
    at::Tensor result = model->forward(inputs).toTensor();

    return result;
}

Has anyone run a trained PyTorch model in LibTorch?

Best answer

Just ran into the same problem and found the solution: add

model.eval()

before

traced_script_module = torch.jit.trace(model, example)

and the model then produces the same results in C++ as in Python.
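For reference, a minimal sketch of the corrected export step, i.e. the code from the question with model.eval() added before tracing:

import torch

model = FCN()  # the model class from the question
state_dict = torch.load('/content/gdrive/My Drive/model/trained_model.pth')
model.load_state_dict(state_dict)
model.eval()  # put dropout/batch-norm layers into inference mode before tracing
example = torch.randn(1, 3, 768, 1024)
traced_script_module = torch.jit.trace(model, example)
traced_script_module.save('/content/gdrive/My Drive/model/mymodel.pt')

The call matters because layers such as dropout and batch normalization behave differently in training mode, and the trace records whichever behaviour is active when torch.jit.trace runs.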

Regarding "python - Incorrect results when running a model in LibTorch that was trained and exported from PyTorch", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/56770197/
