
python - The `device` argument should be set by using `torch.device` or passing a string as an argument


My data iterator currently runs on the CPU, because the device=0 argument is deprecated. But I need it to run on the GPU along with the rest of the model.

Here is my code:

pad_idx = TGT.vocab.stoi["<blank>"]
model = make_model(len(SRC.vocab), len(TGT.vocab), N=6)
model = model.to(device)
criterion = LabelSmoothing(size=len(TGT.vocab), padding_idx=pad_idx, smoothing=0.1)
criterion = criterion.to(device)
BATCH_SIZE = 12000
train_iter = MyIterator(train, device=0, batch_size=BATCH_SIZE,
                        repeat=False, sort_key=lambda x: (len(x.src), len(x.trg)),
                        batch_size_fn=batch_size_fn, train=True)
valid_iter = MyIterator(val, device=0, batch_size=BATCH_SIZE,
                        repeat=False, sort_key=lambda x: (len(x.src), len(x.trg)),
                        batch_size_fn=batch_size_fn, train=False)
#model_par = nn.DataParallel(model, device_ids=devices)

The code above gives this warning:

The `device` argument should be set by using `torch.device` or passing a string as an argument. This behavior will be deprecated soon and currently defaults to cpu.
The `device` argument should be set by using `torch.device` or passing a string as an argument. This behavior will be deprecated soon and currently defaults to cpu.

I tried passing in 'cuda' as an argument instead of device=0, but I got this error:

<ipython-input-50-da3b1f7ed907> in <module>()
10 train_iter = MyIterator(train, 'cuda', batch_size=BATCH_SIZE,
11 repeat=False, sort_key=lambda x: (len(x.src), len(x.trg)),
---> 12 batch_size_fn=batch_size_fn, train=True)
13 valid_iter = MyIterator(val, 'cuda', batch_size=BATCH_SIZE,
14 repeat=False, sort_key=lambda x: (len(x.src), len(x.trg)),

TypeError: __init__() got multiple values for argument 'batch_size'

I also tried passing device in as an argument, where device is defined as device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu'), but I received the same error as above.
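
For context, assuming MyIterator keeps the legacy torchtext Iterator parameter order of (dataset, batch_size, ...), the TypeError arises because a positionally passed 'cuda' gets bound to the batch_size parameter and then collides with the batch_size=BATCH_SIZE keyword. A minimal sketch with a hypothetical ToyIterator stand-in (not the real torchtext class) reproduces this:

# Hypothetical stand-in, NOT the real torchtext class, assumed to share the
# (dataset, batch_size, ..., device=None) parameter ordering.
class ToyIterator:
    def __init__(self, dataset, batch_size, device=None, train=True, repeat=False):
        self.dataset, self.batch_size, self.device = dataset, batch_size, device

try:
    # 'cuda' is bound positionally to batch_size, then batch_size=12000 collides.
    ToyIterator(["dummy"], "cuda", batch_size=12000)
except TypeError as err:
    print(err)  # got multiple values for argument 'batch_size'

# Passing device strictly by keyword avoids the collision.
it = ToyIterator(["dummy"], batch_size=12000, device="cuda")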

Any suggestions would be greatly appreciated, thanks.

Best Answer

pad_idx = TGT.vocab.stoi["<blank>"]
model = make_model(len(SRC.vocab), len(TGT.vocab), N=6)
model = model.to(device)
criterion = LabelSmoothing(size=len(TGT.vocab), padding_idx=pad_idx, smoothing=0.1)
criterion = criterion.to(device)
BATCH_SIZE = 12000
train_iter = MyIterator(train, batch_size=BATCH_SIZE, device=torch.device('cuda'),
                        repeat=False, sort_key=lambda x: (len(x.src), len(x.trg)),
                        batch_size_fn=batch_size_fn, train=True)
valid_iter = MyIterator(val, batch_size=BATCH_SIZE, device=torch.device('cuda'),
                        repeat=False, sort_key=lambda x: (len(x.src), len(x.trg)),
                        batch_size_fn=batch_size_fn, train=False)

After a lot of trial and error, I got it working by setting the device to device = torch.device('cuda') instead of device=0.
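
For completeness, here is a minimal sketch of the resulting setup, reusing the names from the question and assuming MyIterator follows the legacy torchtext Iterator API; the torch.cuda.is_available() check is an optional fallback so the same code still runs on a CPU-only machine:

import torch

# Pick the GPU when available, otherwise fall back to the CPU, and reuse
# the same torch.device for the model, the loss, and both iterators.
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

model = model.to(device)
criterion = criterion.to(device)

# Pass device strictly as a keyword so it cannot be mistaken for batch_size.
train_iter = MyIterator(train, batch_size=BATCH_SIZE, device=device,
                        repeat=False, sort_key=lambda x: (len(x.src), len(x.trg)),
                        batch_size_fn=batch_size_fn, train=True)
valid_iter = MyIterator(val, batch_size=BATCH_SIZE, device=device,
                        repeat=False, sort_key=lambda x: (len(x.src), len(x.trg)),
                        batch_size_fn=batch_size_fn, train=False)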

Regarding "python - The `device` argument should be set by using `torch.device` or passing a string as an argument", a similar question can be found on Stack Overflow: https://stackoverflow.com/questions/55883389/
