
machine-learning - Training an RNN model with PyTorch where the target does not depend on the input


I am trying to train a simple RNN model with a trivial goal: regardless of the input, the output should match a fixed vector.

import torch
import torch.nn as nn

from torch.autograd import Variable
import numpy as np

class RNN(nn.Module):
    def __init__(self, input_size, hidden_size, output_size):
        super(RNN, self).__init__()
        self.hidden_size = hidden_size
        self.i2h = nn.Linear(input_size + hidden_size, hidden_size)
        print "i2h WEIGHT size ", list(self.i2h.weight.size())
        print "i2h bias size ", list(self.i2h.bias.size())
        self.i2o = nn.Linear(hidden_size, output_size)
        print "i2o WEIGHT size ", list(self.i2o.weight.size())
        print "i2o bias size ", list(self.i2o.bias.size())
        self.softmax = nn.LogSoftmax(dim=1)

    def forward(self, input, hidden):
        combined = torch.cat((input, hidden), 1)
        hidden = self.i2h(combined)
        output = self.i2o(hidden)
        output = self.softmax(output)
        return output, hidden

    def initHidden(self):
        return Variable(torch.zeros(1, self.hidden_size))

n_hidden = 20
rnn = RNN(10, n_hidden, 3)

learning_rate = 1e-3
loss_fn = torch.nn.MSELoss(size_average=False)
out_target = Variable(torch.FloatTensor([[0.0, 1.0, 0.0]]), requires_grad=False)

print "target output::: ", out_target

def train(category_tensor, line_tensor):
    hidden = rnn.initHidden()

    rnn.zero_grad()

    for i in range(line_tensor.size()[0]):
        #print "train iteration ", i, ": input data: ", line_tensor[i]
        output, hidden = rnn(line_tensor[i], hidden)

    loss = loss_fn(output, out_target)
    loss.backward()

    # Add parameters' gradients to their values, multiplied by learning rate
    for p in rnn.parameters():
        #print "parameter: ", p, " gradient: ", p.grad.data
        p.data.add_(-learning_rate, p.grad.data)

    return output, loss.data[0]

current_loss = 0
n_iters = 500

for iter in range(1, n_iters + 1):
    inp = Variable(torch.randn(100, 1, 10) + 5)
    output, loss = train(out_target, inp)
    current_loss += loss
    if iter % 1 == 0:
        print "weights: ", rnn.i2h.weight
        print "LOSS: ", loss
        print output

As can be seen, the loss stays above 6 and never goes down. Note also that I shift all of the normally distributed random inputs by 5, so they are mostly positive; there should therefore exist a set of weights that brings the output close to the target.

What am I doing wrong in this example that keeps the output from reaching the target?

Best answer

Your fixed target output is:

torch.FloatTensor([[0.0, 1.0, 0.0]])

But you are using the following as the last layer of your RNN:

self.softmax = nn.LogSoftmax(dim=1)

Does LogSoftmax return values in [0, 1]? It does not: it returns log-probabilities in (-∞, 0], so the output can never match [0.0, 1.0, 0.0]. That said, you could use Softmax, but I would suggest using the sign function and converting -1 to 0.
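
To see the effect, here is a minimal sketch. It is not part of the original answer: it uses current PyTorch APIs instead of Variable, and a plain nn.Linear as a hypothetical stand-in for the question's RNN. It shows that nn.LogSoftmax only ever emits non-positive values, so summed MSE against [0.0, 1.0, 0.0] has a floor, whereas swapping in nn.Softmax lets the loss keep falling:

import torch
import torch.nn as nn

# LogSoftmax emits log-probabilities, which are always <= 0,
# so the output can never equal the target [0.0, 1.0, 0.0].
x = torch.randn(1, 3)
print(nn.LogSoftmax(dim=1)(x))  # every entry is negative, e.g. tensor([[-1.9, -0.4, -2.2]])

# Softmax emits probabilities in (0, 1), so the target is (asymptotically) reachable.
# Hypothetical stand-in for the question's RNN: a single linear layer.
target = torch.tensor([[0.0, 1.0, 0.0]])
layer = nn.Linear(10, 3)
softmax = nn.Softmax(dim=1)
loss_fn = nn.MSELoss(reduction='sum')        # same summed MSE as in the question
opt = torch.optim.Adam(layer.parameters(), lr=0.01)

for step in range(500):
    inp = torch.randn(1, 10) + 5             # positively shifted inputs, as in the question
    out = softmax(layer(inp))
    loss = loss_fn(out, target)
    opt.zero_grad()
    loss.backward()
    opt.step()

print(loss.item())  # falls far below the ~6 plateau seen with LogSoftmax

Another option, not mentioned in the answer, is to keep LogSoftmax and train against a class index with nn.NLLLoss, which is the more conventional pairing for classification-style targets.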

Regarding machine-learning - Training an RNN model with PyTorch where the target does not depend on the input, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/49115560/
