
PyTorch loss decreases even though requires_grad = False for all variables


When I create a neural network with PyTorch and define the layers with torch.nn.Sequential, the parameters appear to default to requires_grad = False. However, when I train this network, the loss decreases. How is that possible if the layers are not being updated via gradients?

For example, here is the code that defines my network:

import torch


class Network(torch.nn.Module):

    def __init__(self):
        super(Network, self).__init__()
        self.layers = torch.nn.Sequential(
            torch.nn.Linear(10, 5),
            torch.nn.Linear(5, 2)
        )
        print('Network Parameters:')
        model_dict = self.state_dict()
        for param_name in model_dict:
            param = model_dict[param_name]
            print('Name: ' + str(param_name))
            print('\tRequires Grad: ' + str(param.requires_grad))

    def forward(self, input):
        prediction = self.layers(input)
        return prediction

This prints:

Network Parameters:
Name: layers.0.weight
    Requires Grad: False
Name: layers.0.bias
    Requires Grad: False
Name: layers.1.weight
    Requires Grad: False
Name: layers.1.bias
    Requires Grad: False

And here is the code that trains the network:

import numpy as np

network = Network()
network.train()
optimiser = torch.optim.SGD(network.parameters(), lr=0.001)
criterion = torch.nn.MSELoss()
inputs = np.random.random([100, 10]).astype(np.float32)
inputs = torch.from_numpy(inputs)
labels = np.random.random([100, 2]).astype(np.float32)
labels = torch.from_numpy(labels)

while True:
    prediction = network.forward(inputs)
    loss = criterion(prediction, labels)
    print('loss = ' + str(loss.item()))
    optimiser.zero_grad()
    loss.backward()
    optimiser.step()

This prints:

loss = 0.284633219242
loss = 0.278225809336
loss = 0.271959483624
loss = 0.265835255384
loss = 0.259853869677
loss = 0.254015892744
loss = 0.248321473598
loss = 0.242770522833
loss = 0.237362638116
loss = 0.232097044587
loss = 0.226972639561
loss = 0.221987977624
loss = 0.217141270638
loss = 0.212430402637
loss = 0.207852959633
loss = 0.203406244516
loss = 0.199087426066
loss = 0.19489350915
loss = 0.190821439028
loss = 0.186868071556
loss = 0.183030322194
loss = 0.179305106401
loss = 0.175689414144
loss = 0.172180294991
loss = 0.168774917722
loss = 0.165470585227
loss = 0.162264674902
loss = 0.159154698253

Why does the loss decrease if all the parameters have requires_grad = False?

Best Answer

This is interesting: there does seem to be a difference between state_dict() and parameters():

class Network(torch.nn.Module):
    def __init__(self):
        super(Network, self).__init__()
        self.layers = torch.nn.Sequential(
            torch.nn.Linear(10, 5),
            torch.nn.Linear(5, 2)
        )
        print(self.layers[0].weight.requires_grad)                  # True
        print(self.state_dict()['layers.0.weight'].requires_grad)   # False
        print(list(self.parameters())[0].requires_grad)             # True

    def forward(self, input):
        prediction = self.layers(input)
        return prediction

So it looks like your loss is decreasing because the network really is learning: requires_grad is in fact True on the parameters that the optimiser sees. (In general, for debugging I prefer to query the actual objects, e.g. self.layers[0].weight.)
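
For comparison, here is a minimal sketch (my own illustration, not from the original post) of what actually freezing a layer would look like: requires_grad is flipped on the Parameter objects themselves, and the optimiser is given only the parameters that should keep updating.

network = Network()

# Freeze the first Linear layer by disabling gradients on its actual Parameters.
for param in network.layers[0].parameters():
    param.requires_grad_(False)

# Hand the optimiser only the parameters that still require gradients.
optimiser = torch.optim.SGD(
    (p for p in network.parameters() if p.requires_grad),
    lr=0.001,
)

print(network.layers[0].weight.requires_grad)  # False -> this layer no longer updates
print(network.layers[1].weight.requires_grad)  # True  -> this layer still trains

If every parameter were frozen this way, nothing in the graph would require gradients, the weights would stop changing, and loss.backward() would in fact complain that nothing requires grad, which is the behaviour the question expected to see.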

[EDIT] Aha, found the problem: state_dict accepts a keep_vars bool option, and (among other things) it does the following ( https://github.com/pytorch/pytorch/blob/master/torch/nn/modules/module.py#L665 ):

for name, param in self._parameters.items():
    if param is not None:
        destination[prefix + name] = param if keep_vars else param.data

So if you need the actual Parameter objects, call state_dict with keep_vars=True; if you only need the data, use the default keep_vars=False. The .data tensor returned by default is detached from autograd, which is why its requires_grad shows up as False.

So:

print(self.layers[0].weight.requires_grad) # True
print(self.state_dict(keep_vars=True)['layers.0.weight'].requires_grad) # True
print(list(self.parameters())[0].requires_grad) # True
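
As a closing check, here is a small sketch (again my own, not from the post) of the asker's debug loop, adjusted so it reports requires_grad correctly, either by passing keep_vars=True or by iterating named_parameters():

network = Network()

# state_dict with keep_vars=True returns the Parameters themselves, not .data copies.
for name, param in network.state_dict(keep_vars=True).items():
    print(name, param.requires_grad)   # True for every weight and bias

# named_parameters() also yields the actual Parameter objects the optimiser trains.
for name, param in network.named_parameters():
    print(name, param.requires_grad)   # True as well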

Regarding the PyTorch loss decreasing even though requires_grad = False for all variables, there is a similar question on Stack Overflow: https://stackoverflow.com/questions/57171426/
