
deep-learning - Correct validation loss in PyTorch?


I'm a bit confused about how to compute the validation loss. Should the validation loss be computed once at the end of the epoch, or should the loss also be tracked during the batch iterations? Below I compute it with running_loss, which accumulates across batches - I'd like to know whether this is the correct approach.

def validate(loader, model, criterion):
    correct = 0
    total = 0
    running_loss = 0.0
    model.eval()
    with torch.no_grad():
        for i, data in enumerate(loader):
            inputs, labels = data
            inputs = inputs.to(device)
            labels = labels.to(device)

            outputs = model(inputs)
            loss = criterion(outputs, labels)
            _, predicted = torch.max(outputs.data, 1)
            total += labels.size(0)
            correct += (predicted == labels).sum().item()
            running_loss = running_loss + loss.item()

    mean_val_accuracy = (100 * correct / total)
    mean_val_loss = running_loss
    #mean_val_accuracy = accuracy(outputs, labels)
    print('Validation Accuracy: %d %%' % (mean_val_accuracy))
    print('Validation Loss:', mean_val_loss)

Below is the training block I am using:

def train(loader, model, criterion, optimizer, epoch):
    correct = 0
    running_loss = 0.0
    i_max = 0
    for i, data in enumerate(loader):
        total_loss = 0.0
        #print('batch=', i)
        inputs, labels = data
        inputs = inputs.to(device)
        labels = labels.to(device)

        optimizer.zero_grad()
        outputs = model(inputs)
        loss = criterion(outputs, labels)
        loss.backward()
        optimizer.step()

        running_loss += loss.item()
        if i % 2000 == 1999:
            print('[%d , %5d] loss: %.3f' % (epoch + 1, i + 1, running_loss / 2000))
            running_loss = 0.0

    print('finished training')
    return mean_val_loss, mean_val_accuracy

Best Answer

You can evaluate your network on the validation set as often as you want. It can be once per epoch, or, if that is too costly because the dataset is large, it can be once every N epochs.
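For example, a minimal sketch of that schedule, assuming the train and validate functions above and hypothetical names num_epochs, train_loader, val_loader and VALIDATE_EVERY:

VALIDATE_EVERY = 5  # hypothetical: validate every 5 epochs

for epoch in range(num_epochs):
    train(train_loader, model, criterion, optimizer, epoch)
    # run the validation pass only on selected epochs
    if (epoch + 1) % VALIDATE_EVERY == 0:
        validate(val_loader, model, criterion)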

What you are doing looks correct: you compute the loss over the whole validation set. You can optionally divide it by the set's length to normalize the loss, so that the scale stays the same if you later enlarge the validation set.
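As a sketch of that normalization, the end of the validate function could look like this (assuming the criterion uses the default reduction='mean', so loss.item() is already a per-batch average):

    num_batches = len(loader)
    mean_val_loss = running_loss / num_batches   # average loss per batch

    # For an exact per-sample average instead, accumulate inside the loop with
    #   running_loss += loss.item() * labels.size(0)
    # and then divide by the number of samples:
    #   mean_val_loss = running_loss / total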

Regarding deep-learning - Correct validation loss in PyTorch?, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/67295494/
