
Difference between model training in PyTorch and libTorch

Reposted · Author: bug小助手 · Updated: 2023-10-25 19:28:18



I was trying to train a model with the C++ API (libtorch) and got poor results, so I compared it to PyTorch in detail. My network is very simple: linear, with 4 inputs, 4 outputs, and a single hidden layer. To make the comparison fair, I made everything sequential in the data loading (on both sides: shuffle=False in Python and a SequentialSampler for the dataloader in the C++ API) and used the same random seed, batch size, learning rate, optimizer (SGD), and loss function. The results were still poor with the C++ API and good in Python.



Comments

Don't add "solved" to the title. Either post your solution below to help others with the same problem (and press the checkmark next to it to mark the question as solved), or delete the question altogether.


Recommended answer

(Posting answer on behalf of the question author to move it to the answer space).



I finally found that in the torch C++ API the mse_loss reduction defaults to mean, which turned out to train much less effectively than what my Python code used (reduction='sum'). Selecting the sum reduction is done through:



namespace F = torch::nn::functional;
auto loss = F::mse_loss(output, target, F::MSELossFuncOptions(torch::kSum));

