
python-3.x - PyTorch DataParallel does not work when the model contains tensor operations


If my model contains only nn.Module layers such as nn.Linear, nn.DataParallel works fine.

import torch
import torch.nn as nn

x = torch.randn(100, 10)

class normal_model(torch.nn.Module):
    def __init__(self):
        super(normal_model, self).__init__()
        self.layer = torch.nn.Linear(10, 1)

    def forward(self, x):
        return self.layer(x)

model = normal_model()
model = nn.DataParallel(model.to('cuda:0'))
model(x)

However, when my model contains tensor operations like the following,

class custom_model(torch.nn.Module):
    def __init__(self):
        super(custom_model, self).__init__()
        self.layer = torch.nn.Linear(10, 5)
        self.weight = torch.ones(5, 1, device='cuda:0')

    def forward(self, x):
        return self.layer(x) @ self.weight

model = custom_model()
model = torch.nn.DataParallel(model.to('cuda:0'))
model(x)

it gives me the following error:

RuntimeError: Caught RuntimeError in replica 1 on device 1.
Original Traceback (most recent call last):
  File "/opt/conda/lib/python3.6/site-packages/torch/nn/parallel/parallel_apply.py", line 60, in _worker
    output = module(*input, **kwargs)
  File "/opt/conda/lib/python3.6/site-packages/torch/nn/modules/module.py", line 541, in __call__
    result = self.forward(*input, **kwargs)
  File "", line 7, in forward
    return self.layer(x) @ self.weight
RuntimeError: arguments are located on different GPUs at /pytorch/aten/src/THC/generic/THCTensorMathBlas.cu:277

How can we avoid this error when we have tensor operations in our model?

Best Answer

I have no experience with DataParallel, but I think it might be because your tensor is not part of the model parameters. You can make it one by writing:

self.weight = torch.nn.Parameter(torch.ones(5, 1))

Note that you don't have to move it to the GPU at initialization, because that now happens automatically when you call model.to('cuda:0').

I can imagine that DataParallel uses the model parameters to move them to the appropriate GPU.

See this answer for more information about the difference between a torch tensor and torch.nn.Parameter.

If you don't want the tensor's values to be updated by backpropagation during training, you can add requires_grad=False.
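
Putting that together, here is a minimal sketch of what the fixed model could look like (my assumption of how the suggestion fits into the class from the question):

import torch

class custom_model(torch.nn.Module):
    def __init__(self):
        super(custom_model, self).__init__()
        self.layer = torch.nn.Linear(10, 5)
        # Registered as a Parameter, so model.to('cuda:0') moves it along
        # with the rest of the model and DataParallel replicates it onto
        # each GPU; requires_grad=False keeps it fixed during training.
        self.weight = torch.nn.Parameter(torch.ones(5, 1), requires_grad=False)

    def forward(self, x):
        return self.layer(x) @ self.weight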

Another approach that could work is to override the to method and initialize the tensor in the forward pass:

class custom_model(torch.nn.Module):
    def __init__(self):
        super(custom_model, self).__init__()
        self.layer = torch.nn.Linear(10, 5)

    def forward(self, x):
        return self.layer(x) @ torch.ones(5, 1, device=self.device)

    def to(self, device: str):
        new_self = super(custom_model, self).to(device)
        new_self.device = device
        return new_self

Or something like this:

class custom_model(torch.nn.Module):
    def __init__(self, device: str):
        super(custom_model, self).__init__()
        self.layer = torch.nn.Linear(10, 5)
        self.weight = torch.ones(5, 1, device=device)

    def forward(self, x):
        return self.layer(x) @ self.weight

    def to(self, device: str):
        new_self = super(custom_model, self).to(device)
        new_self.device = device
        new_self.weight = torch.ones(5, 1, device=device)
        return new_self
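
For completeness, a usage sketch for this last variant, mirroring the wrapping from the question (whether it resolves the multi-GPU case would need to be verified):

x = torch.randn(100, 10)

model = custom_model(device='cuda:0')
model = torch.nn.DataParallel(model.to('cuda:0'))
out = model(x)  # DataParallel scatters x across the available GPUs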

Regarding python-3.x - PyTorch DataParallel does not work when the model contains tensor operations, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/60799655/
