
python - PyTorch custom loss function


How should a custom loss function be implemented? Using the following code leads to an error:

import torch
import torch.nn as nn
import torchvision
import torchvision.transforms as transforms
import numpy as np
import matplotlib.pyplot as plt
import torch.utils.data as data_utils
import torch.nn as nn
import torch.nn.functional as F

num_epochs = 20

x1 = np.array([0,0])
x2 = np.array([0,1])
x3 = np.array([1,0])
x4 = np.array([1,1])

num_epochs = 200

class cus2(torch.nn.Module):

    def __init__(self):
        super(cus2, self).__init__()

    def forward(self, outputs, labels):
        # reshape labels to give a flat vector of length batch_size*seq_len
        labels = labels.view(-1)

        # mask out 'PAD' tokens
        mask = (labels >= 0).float()

        # the number of tokens is the sum of elements in mask
        num_tokens = int(torch.sum(mask).data[0])

        # pick the values corresponding to labels and multiply by mask
        outputs = outputs[range(outputs.shape[0]), labels]*mask

        # cross entropy loss for all non 'PAD' tokens
        return -torch.sum(outputs)/num_tokens


x = torch.tensor([x1,x2,x3,x4]).float()

y = torch.tensor([0,1,1,0]).long()

train = data_utils.TensorDataset(x,y)
train_loader = data_utils.DataLoader(train , batch_size=2 , shuffle=True)

device = 'cpu'

input_size = 2
hidden_size = 100
num_classes = 2

learning_rate = .0001

class NeuralNet(nn.Module):
    def __init__(self, input_size, hidden_size, num_classes):
        super(NeuralNet, self).__init__()
        self.fc1 = nn.Linear(input_size, hidden_size)
        self.relu = nn.ReLU()
        self.fc2 = nn.Linear(hidden_size, num_classes)

    def forward(self, x):
        out = self.fc1(x)
        out = self.relu(out)
        out = self.fc2(out)
        return out

for i in range(0, 1):

    model = NeuralNet(input_size, hidden_size, num_classes).to(device)

    criterion = nn.CrossEntropyLoss()
    # criterion = Regress_Loss()
    # criterion = cus2()
    optimizer = torch.optim.Adam(model.parameters(), lr=learning_rate)

    total_step = len(train_loader)
    for epoch in range(num_epochs):
        for i, (images, labels) in enumerate(train_loader):
            images = images.reshape(-1, 2).to(device)
            labels = labels.to(device)

            outputs = model(images)
            loss = criterion(outputs, labels)

            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
            # print(loss)

    outputs = model(x)

    print(outputs.data.max(1)[1])

This makes perfect predictions on the training data:

tensor([0, 1, 1, 0])

Using the custom loss function from here:

[image of the code used for the cus2 class]

implemented as cus2 in the code above.

Uncommenting the line # criterion = cus2() to use this loss function returns:

tensor([0, 0, 0, 0])

It also produces a warning:

UserWarning: invalid index of a 0-dim tensor. This will be an error in PyTorch 0.5. Use tensor.item() to convert a 0-dim tensor to a Python number

Am I not implementing the custom loss function correctly?

Best Answer

Your loss function is programmatically correct except for the following:

# the number of tokens is the sum of elements in mask
num_tokens = int(torch.sum(mask).data[0])

When you do torch.sum, it returns a 0-dimensional tensor, hence the warning that it cannot be indexed. To fix this, use int(torch.sum(mask).item()) as the warning suggests, or int(torch.sum(mask)) will also work.
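That is, the offending line becomes:

# item() turns the 0-dim sum into a plain Python number before int()
num_tokens = int(torch.sum(mask).item())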

Now, are you trying to emulate the CE loss using your custom loss? If yes, then you are missing the log_softmax.

To fix that, add outputs = torch.nn.functional.log_softmax(outputs, dim=1) before statement 4. Note that in the case of the tutorial you have linked, log_softmax is already done in the forward call, so you can do it there instead.
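Putting both fixes together, a sketch of how the corrected forward could look (assuming the raw, un-normalized scores from the NeuralNet above are passed in as outputs):

def forward(self, outputs, labels):
    # reshape labels to give a flat vector of length batch_size*seq_len
    labels = labels.view(-1)

    # mask out 'PAD' tokens
    mask = (labels >= 0).float()

    # the number of tokens is the sum of elements in mask
    num_tokens = int(torch.sum(mask).item())

    # convert raw scores to log-probabilities before indexing them,
    # since the model's forward call does not apply log_softmax itself
    outputs = torch.nn.functional.log_softmax(outputs, dim=1)

    # pick the log-probabilities corresponding to labels and multiply by mask
    outputs = outputs[range(outputs.shape[0]), labels] * mask

    # negative log-likelihood averaged over all non 'PAD' tokens
    return -torch.sum(outputs) / num_tokens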

Also, I noticed that the learning rate is slow, and even with the CE loss the results are not consistent. Increasing the learning rate to 1e-3 works well for me with both the custom loss and the CE loss.
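For example, a minimal change to the training setup above (only the learning rate differs from the original code):

learning_rate = 1e-3  # instead of .0001, as suggested above
optimizer = torch.optim.Adam(model.parameters(), lr=learning_rate)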

Regarding python - PyTorch custom loss function, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/53980031/
