
machine-learning - `THIndexTensor_(size)(target, 0) == batch_size' failed at d:\projects\pytorch\torch\lib\thnn\generic/ClassNLLCriterion.c:54

Reposted. Author: 行者123. Updated: 2023-11-30 08:36:13

I am trying to train my neural network on a dog breed dataset. After the forward pass, it throws this error during the loss computation:

RuntimeError: Assertion `THIndexTensor_(size)(target, 0) == batch_size' failed.  at d:\projects\pytorch\torch\lib\thnn\generic/ClassNLLCriterion.c:54 

Code:

criterion = nn.CrossEntropyLoss()
optimizer = optim.Adam(net.parameters(), lr=0.001)

for epoch in range(10):  # loop over the dataset multiple times
    running_loss = 0.0
    print(len(trainloader))
    for i, data in enumerate(trainloader, 0):
        # get the inputs
        inputs, labels = data

        # wrap them in Variable
        inputs, labels = Variable(inputs).float(), Variable(labels).float().type(torch.LongTensor)

        # zero the parameter gradients
        optimizer.zero_grad()

        # forward + backward + optimize
        outputs = net(inputs)

        loss = criterion(outputs, labels)
        loss.backward()
        optimizer.step()

        # print statistics
        running_loss += loss.data[0]
        if i % 2000 == 1999:  # print every 2000 mini-batches
            print('[%d, %5d] loss: %.3f' %
                  (epoch + 1, i + 1, running_loss / 2000))
            running_loss = 0.0

print('Finished Training')

This line raises the error:

loss = criterion(outputs, labels)

What is the problem?

Best Answer

I think the problem is that your labels tensor is missing the batch dimension. The error says that the size of dimension 0 of the target does not equal the batch size.

Try changing this line:

loss = criterion(outputs, labels.unsqueeze(0))
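To see what `unsqueeze(0)` does here, a minimal sketch (the value `3` is just an illustrative class index, not from the original code):

```python
import torch

# a 0-d scalar label, shape [] -- this is what triggers the size check
t = torch.tensor(3)
print(t.shape)               # torch.Size([])

# unsqueeze(0) prepends a dimension of size 1, giving shape [1],
# i.e. a batch of one label index
print(t.unsqueeze(0).shape)  # torch.Size([1])
```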

Note that the outputs tensor should have one more dimension than the labels tensor, holding the score for each class, while labels should simply contain the indices of the correct classes.
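The expected shapes can be sketched as follows. This is a minimal example using the modern torch API (no deprecated `Variable` wrapper); the batch size and class count are illustrative, not taken from the question:

```python
import torch
import torch.nn as nn

criterion = nn.CrossEntropyLoss()

batch_size, num_classes = 4, 120  # illustrative: e.g. 120 dog breeds

# outputs: one row of raw class scores per sample -> shape [batch_size, num_classes]
outputs = torch.randn(batch_size, num_classes)

# labels: one class *index* per sample -> shape [batch_size], dtype long
labels = torch.randint(0, num_classes, (batch_size,))

loss = criterion(outputs, labels)  # works: dim 0 of labels matches the batch size
print(outputs.shape, labels.shape, loss.item())
```

If `labels` were instead one-hot vectors or floats, or missing the batch dimension, the criterion would fail with a shape or type error like the one in the question.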

Regarding machine-learning - `THIndexTensor_(size)(target, 0) == batch_size' failed at d:\projects\pytorch\torch\lib\thnn\generic/ClassNLLCriterion.c:54, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/47492033/
