python - RNN parameters not updating?

I'm very new to PyTorch, and fairly new to neural networks in general.
I'm trying to build a neural network that can guess the gender of a name, and I based it on the PyTorch RNN tutorial that classifies names by nationality.
My code runs without errors, but the loss barely changes, which makes me think the weights aren't being updated.
Is this a problem with how I set up my input/output/target tensors, or is something wrong with my training function? I'm lost, and any help would be greatly appreciated.
Here is my code:

from __future__ import unicode_literals, print_function, division  
from io import open
import glob
import unicodedata
import string
import torch
import torchvision
import torch.nn as nn
import torch.optim as optim
import random
from torch.autograd import Variable

"""------GLOBAL VARIABLES------"""

all_letters = string.ascii_letters + " .,;'"
num_letters = len(all_letters)
all_names = {}
genders = ["Female", "Male"]

"""-------DATA EXTRACTION------"""

def findFiles(path):
    return glob.glob(path)

def unicodeToAscii(s):
    return ''.join(
        c for c in unicodedata.normalize('NFD', s)
        if unicodedata.category(c) != 'Mn'
        and c in all_letters
    )

# Read a file and split into lines
def readLines(filename):
    lines = open(filename, encoding='utf-8').read().strip().split('\n')
    return [unicodeToAscii(line) for line in lines]

for file in findFiles("/home/andrew/PyCharm/PycharmProjects/CantStop/data/names/*.txt"):
    gender = file.split("/")[-1].split(".")[0]
    names = readLines(file)
    all_names[gender] = names

"""-----DATA INTERPRETATION-----"""

def nameToTensor(name):
    tensor = torch.zeros(len(name), 1, num_letters)
    for index, letter in enumerate(name):
        tensor[index][0][all_letters.find(letter)] = 1
    return tensor

def outputToGender(output):
    gender, gender_index = output.data.topk(1)
    if gender_index[0][0] == 0:
        return "Female"
    return "Male"

"""------NETWORK SETUP------"""

class Net(nn.Module):
    def __init__(self, input_size, hidden_size, output_size):
        super(Net, self).__init__()
        self.hidden_size = hidden_size
        #Layer 1
        self.Lin1 = nn.Linear(input_size+hidden_size, int((input_size+hidden_size)/2))
        self.ReLu1 = nn.ReLU()
        self.Batch1 = nn.BatchNorm1d(int((input_size+hidden_size)/2))
        #Layer 2
        self.Lin2 = nn.Linear(int((input_size+hidden_size)/2), output_size)
        self.ReLu2 = nn.ReLU()
        self.Batch2 = nn.BatchNorm1d(output_size)
        self.softMax = nn.LogSoftmax()
        #Hidden layer
        self.HidLin = nn.Linear(input_size+hidden_size, hidden_size)
        self.HidReLu = nn.ReLU()
        self.HidBatch = nn.BatchNorm1d(hidden_size)

    def forward(self, input, hidden):
        comb = torch.cat((input, hidden), 1)
        hidden = self.HidBatch(self.HidReLu(self.HidLin(comb)))
        output1 = self.Batch1(self.ReLu1(self.Lin1(comb)))
        output2 = self.softMax(self.Batch2(self.ReLu2(self.Lin2(output1))))
        return output2, hidden

    def initHidden(self):
        return Variable(torch.zeros(1, self.hidden_size))

NN = Net(num_letters, 128, 2)

"""------TRAINING------"""

def getRandomTrainingEx():
    gender = genders[random.randint(0, 1)]
    name = all_names[gender][random.randint(0, len(all_names[gender])-1)]
    gender_tensor = Variable(torch.LongTensor([genders.index(gender)]))
    name_tensor = Variable(nameToTensor(name))
    return gender_tensor, name_tensor, gender

def train(input, target):
    hidden = NN.initHidden()

    loss_func = nn.NLLLoss()

    alpha = 0.01

    NN.zero_grad()

    for i in range(input.size()[0]):
        output, hidden = NN(input[i], hidden)

    loss = loss_func(output, target)
    loss.backward()
    for w in NN.parameters():
        w.data.add_(-alpha, w.grad.data)

    return output, loss

for i in range(5000):
    gender_tensor, name_tensor, gender = getRandomTrainingEx()
    output, loss = train(name_tensor, gender_tensor)

    if i%500 == 0:
        print("Guess: %s, Correct: %s, Loss: %s" % (outputToGender(output), gender, loss.data[0]))

Here is the output:

Guess: Male, Correct: Male, Loss: 0.6931471824645996
Guess: Male, Correct: Female, Loss: 0.7400936484336853
Guess: Male, Correct: Male, Loss: 0.6755779385566711
Guess: Female, Correct: Female, Loss: 0.6648257374763489
Guess: Male, Correct: Male, Loss: 0.6765623688697815
Guess: Female, Correct: Male, Loss: 0.7330614924430847
Guess: Female, Correct: Female, Loss: 0.6565149426460266
Guess: Male, Correct: Female, Loss: 0.6946508884429932
Guess: Female, Correct: Female, Loss: 0.6621525287628174
Guess: Male, Correct: Male, Loss: 0.6662092804908752

Process finished with exit code 0

Best answer

I suggest you change add_ to sub_; add_ may move you away from the optimum.

w.data.sub_(w.grad.data * alpha)

This is because the weight-update rule contains a subtraction.

[image: gradient-descent weight update rule, w ← w − α · ∂L/∂w]

By the way, try increasing or decreasing alpha, e.g. 0.1, 0.05, 0.01. If alpha is too large, you may overshoot the optimum; if it is too small, training will take a very long time.
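
For reference, here is a minimal sketch of the same training loop that lets torch.optim.SGD perform the update instead of the hand-written parameter loop. It assumes the Net class, the NN instance, getRandomTrainingEx and nn.NLLLoss from the question above (and the existing "import torch.optim as optim"); it is not the asker's exact code, only one way to apply the fix. The optimizer does the subtraction (w = w - lr * grad) for you, and the learning rate can be changed in one place when experimenting with the values above.

# Sketch: reuse the question's NN, getRandomTrainingEx and nameToTensor definitions.
optimizer = optim.SGD(NN.parameters(), lr=0.01)  # try lr = 0.1, 0.05, 0.01
loss_func = nn.NLLLoss()

def train(input, target):
    hidden = NN.initHidden()
    optimizer.zero_grad()              # clear gradients from the previous step
    for i in range(input.size()[0]):   # feed the name one letter at a time
        output, hidden = NN(input[i], hidden)
    loss = loss_func(output, target)
    loss.backward()                    # compute gradients
    optimizer.step()                   # w <- w - lr * grad (note the subtraction)
    return output, loss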

Regarding "python - RNN parameters not updating?", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/45016714/
