I am trying to train a BiLSTM-CRF to detect new NER entities, using PyTorch.
For this I use a piece of code derived from the PyTorch advanced tutorial; the snippet implements batched training.
I followed the README to format the data as required. Everything works fine on the CPU, but when I move training to the GPU I get the following error:
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
<ipython-input-23-794982510db6> in <module>
4 batch_input, batch_input_lens, batch_mask, batch_target = batch_info
5
----> 6 loss_train = model.neg_log_likelihood(batch_input, batch_input_lens, batch_mask, batch_target)
7 optimizer.zero_grad()
8 loss_train.backward()
<ipython-input-11-e44ffbf7d75f> in neg_log_likelihood(self, batch_input, batch_input_lens, batch_mask, batch_target)
185
186 def neg_log_likelihood(self, batch_input, batch_input_lens, batch_mask, batch_target):
--> 187 feats = self.bilstm(batch_input, batch_input_lens, batch_mask)
188 gold_score = self.CRF.score_sentence(feats, batch_target)
189 forward_score = self.CRF.score_z(feats, batch_input_lens)
/opt/conda/lib/python3.7/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
1049 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
1050 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1051 return forward_call(*input, **kwargs)
1052 # Do not call functions when jit is used
1053 full_backward_hooks, non_full_backward_hooks = [], []
<ipython-input-11-e44ffbf7d75f> in forward(self, batch_input, batch_input_lens, batch_mask)
46 batch_input = self.word_embeds(batch_input) # size: #batch * padding_length * embedding_dim
47 batch_input = rnn_utils.pack_padded_sequence(
---> 48 batch_input, batch_input_lens, batch_first=True)
49 batch_output, self.hidden = self.lstm(batch_input, self.hidden)
50 self.repackage_hidden(self.hidden)
/opt/conda/lib/python3.7/site-packages/torch/nn/utils/rnn.py in pack_padded_sequence(input, lengths, batch_first, enforce_sorted)
247
248 data, batch_sizes = \
--> 249 _VF._pack_padded_sequence(input, lengths, batch_first)
250 return _packed_sequence_init(data, batch_sizes, sorted_indices, None)
251
RuntimeError: 'lengths' argument should be a 1D CPU int64 tensor, but got 1D cuda:0 Long tensor
If I understand it correctly, pack_padded_sequence needs the lengths tensor to be on the CPU rather than on the GPU. Unfortunately my forward function calls pack_padded_sequence, and I don't see any way to satisfy this without moving the whole training back to the CPU.
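A minimal repro of the constraint, with toy shapes made up for illustration: recent PyTorch releases require the lengths argument to be a 1D CPU int64 tensor, while the sequence data itself may stay on the GPU.

import torch
import torch.nn.utils.rnn as rnn_utils

# Toy batch: 3 sequences padded to length 5, embedding dim 8 (shapes are made up).
x = torch.randn(3, 5, 8, device="cuda")
lengths = torch.tensor([5, 3, 2], device="cuda")

# This line reproduces the error, because the lengths live on cuda:0:
# rnn_utils.pack_padded_sequence(x, lengths, batch_first=True)

# Only the lengths need to move; the packed data stays on the GPU.
packed = rnn_utils.pack_padded_sequence(x, lengths.cpu(), batch_first=True)

So only the lengths tensor has to sit on the CPU, not the batch itself. My full code is below.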
import torch
import torch.nn as nn
import torch.nn.utils.rnn as rnn_utils
class BiLSTM(nn.Module):
    def __init__(self, vocab_size, tagset, embedding_dim, hidden_dim,
                 num_layers, bidirectional, dropout, pretrained=None):
        super(BiLSTM, self).__init__()
        self.embedding_dim = embedding_dim
        self.hidden_dim = hidden_dim
        self.tagset_size = len(tagset)
        self.bidirectional = bidirectional
        self.num_layers = num_layers
        self.word_embeds = nn.Embedding(vocab_size + 2, embedding_dim)
        if pretrained is not None:
            self.word_embeds = nn.Embedding.from_pretrained(pretrained)
        self.lstm = nn.LSTM(
            input_size=embedding_dim,
            hidden_size=hidden_dim // 2 if bidirectional else hidden_dim,
            num_layers=num_layers,
            dropout=dropout,
            bidirectional=bidirectional,
            batch_first=True,
        )
        self.hidden2tag = nn.Linear(hidden_dim, self.tagset_size)
        self.hidden = None

    def init_hidden(self, batch_size, device):
        init_hidden_dim = self.hidden_dim // 2 if self.bidirectional else self.hidden_dim
        init_first_dim = self.num_layers * 2 if self.bidirectional else self.num_layers
        self.hidden = (
            torch.randn(init_first_dim, batch_size, init_hidden_dim).to(device),
            torch.randn(init_first_dim, batch_size, init_hidden_dim).to(device)
        )

    def repackage_hidden(self, hidden):
        """Wraps hidden states in new Tensors, to detach them from their history."""
        if isinstance(hidden, torch.Tensor):
            return hidden.detach_().to(device)
        else:
            return tuple(self.repackage_hidden(h) for h in hidden)

    def forward(self, batch_input, batch_input_lens, batch_mask):
        batch_size, padding_length = batch_input.size()
        batch_input = self.word_embeds(batch_input)  # size: #batch * padding_length * embedding_dim
        batch_input = rnn_utils.pack_padded_sequence(
            batch_input, batch_input_lens, batch_first=True)
        batch_output, self.hidden = self.lstm(batch_input, self.hidden)
        self.repackage_hidden(self.hidden)
        batch_output, _ = rnn_utils.pad_packed_sequence(batch_output, batch_first=True)
        batch_output = batch_output.contiguous().view(batch_size * padding_length, -1)
        batch_output = batch_output[batch_mask, ...]
        out = self.hidden2tag(batch_output)
        return out

    def neg_log_likelihood(self, batch_input, batch_input_lens, batch_mask, batch_target):
        loss = nn.CrossEntropyLoss(reduction='mean')
        feats = self(batch_input, batch_input_lens, batch_mask)
        batch_target = torch.cat(batch_target, 0).to(device)
        return loss(feats, batch_target)

    def predict(self, batch_input, batch_input_lens, batch_mask):
        feats = self(batch_input, batch_input_lens, batch_mask)
        val, pred = torch.max(feats, 1)
        return pred
class CRF(nn.Module):
    def __init__(self, tagset, start_tag, end_tag, device):
        super(CRF, self).__init__()
        self.tagset_size = len(tagset)
        self.START_TAG_IDX = tagset.index(start_tag)
        self.END_TAG_IDX = tagset.index(end_tag)
        self.START_TAG_TENSOR = torch.LongTensor([self.START_TAG_IDX]).to(device)
        self.END_TAG_TENSOR = torch.LongTensor([self.END_TAG_IDX]).to(device)
        # trans: (tagset_size, tagset_size) trans (i, j) means state_i -> state_j
        self.trans = nn.Parameter(
            torch.randn(self.tagset_size, self.tagset_size)
        )
        # self.trans.data[...] = 1
        self.trans.data[:, self.START_TAG_IDX] = -10000
        self.trans.data[self.END_TAG_IDX, :] = -10000
        self.device = device

    def init_alpha(self, batch_size, tagset_size):
        return torch.full((batch_size, tagset_size, 1), -10000, dtype=torch.float, device=self.device)

    def init_path(self, size_shape):
        # Initialization Path - LongTensor + Device + Full_value=0
        return torch.full(size_shape, 0, dtype=torch.long, device=self.device)

    def _iter_legal_batch(self, batch_input_lens, reverse=False):
        index = torch.arange(0, batch_input_lens.sum(), dtype=torch.long)
        packed_index = rnn_utils.pack_sequence(
            torch.split(index, batch_input_lens.tolist())
        )
        batch_iter = torch.split(packed_index.data, packed_index.batch_sizes.tolist())
        batch_iter = reversed(batch_iter) if reverse else batch_iter
        for idx in batch_iter:
            yield idx, idx.size()[0]

    def score_z(self, feats, batch_input_lens):
        # mimic the pack/pad process
        tagset_size = feats.shape[1]
        batch_size = len(batch_input_lens)
        alpha = self.init_alpha(batch_size, tagset_size)
        alpha[:, self.START_TAG_IDX, :] = 0  # Initialization
        for legal_idx, legal_batch_size in self._iter_legal_batch(batch_input_lens):
            feat = feats[legal_idx, ].view(legal_batch_size, 1, tagset_size)
            # #batch * 1 * |tag| + #batch * |tag| * 1 + |tag| * |tag| = #batch * |tag| * |tag|
            legal_batch_score = feat + alpha[:legal_batch_size, ] + self.trans
            alpha_new = torch.logsumexp(legal_batch_score, 1).unsqueeze(2).to(device)
            alpha[:legal_batch_size, ] = alpha_new
        alpha = alpha + self.trans[:, self.END_TAG_IDX].unsqueeze(1)
        score = torch.logsumexp(alpha, 1).sum().to(device)
        return score

    def score_sentence(self, feats, batch_target):
        # CRF Batched Sentence Score
        # feats: (#batch_state(#words), tagset_size)
        # batch_target: list<torch.LongTensor> At least One LongTensor
        # Warning: words order = batch_target order
        def _add_start_tag(target):
            return torch.cat([self.START_TAG_TENSOR, target]).to(device)

        def _add_end_tag(target):
            return torch.cat([target, self.END_TAG_TENSOR]).to(device)

        from_state = [_add_start_tag(target) for target in batch_target]
        to_state = [_add_end_tag(target) for target in batch_target]
        from_state = torch.cat(from_state).to(device)
        to_state = torch.cat(to_state).to(device)
        trans_score = self.trans[from_state, to_state]
        gather_target = torch.cat(batch_target).view(-1, 1).to(device)
        emit_score = torch.gather(feats, 1, gather_target).to(device)
        return trans_score.sum() + emit_score.sum()

    def viterbi(self, feats, batch_input_lens):
        word_size, tagset_size = feats.shape
        batch_size = len(batch_input_lens)
        viterbi_path = self.init_path(feats.shape)  # use feats.shape to init path.shape
        alpha = self.init_alpha(batch_size, tagset_size)
        alpha[:, self.START_TAG_IDX, :] = 0  # Initialization
        for legal_idx, legal_batch_size in self._iter_legal_batch(batch_input_lens):
            feat = feats[legal_idx, :].view(legal_batch_size, 1, tagset_size)
            legal_batch_score = feat + alpha[:legal_batch_size, ] + self.trans
            alpha_new, best_tag = torch.max(legal_batch_score, 1)
            alpha[:legal_batch_size, ] = alpha_new.unsqueeze(2)
            viterbi_path[legal_idx, ] = best_tag
        alpha = alpha + self.trans[:, self.END_TAG_IDX].unsqueeze(1)
        path_score, best_tag = torch.max(alpha, 1)
        path_score = path_score.squeeze()  # path_score=#batch
        best_paths = self.init_path((word_size, 1))
        for legal_idx, legal_batch_size in self._iter_legal_batch(batch_input_lens, reverse=True):
            best_paths[legal_idx, ] = best_tag[:legal_batch_size, ]
            backword_path = viterbi_path[legal_idx, ]  # 1 * |Tag|
            this_tag = best_tag[:legal_batch_size, ]  # 1 * |legal_batch_size|
            backword_tag = torch.gather(backword_path, 1, this_tag).to(device)
            best_tag[:legal_batch_size, ] = backword_tag
        # never computing <START>
        # best_paths = #words
        return path_score.view(-1), best_paths.view(-1)
class BiLSTM_CRF(nn.Module):
    def __init__(self, vocab_size, tagset, embedding_dim, hidden_dim,
                 num_layers, bidirectional, dropout, start_tag, end_tag, device, pretrained=None):
        super(BiLSTM_CRF, self).__init__()
        self.bilstm = BiLSTM(vocab_size, tagset, embedding_dim, hidden_dim,
                             num_layers, bidirectional, dropout, pretrained)
        self.CRF = CRF(tagset, start_tag, end_tag, device)

    def init_hidden(self, batch_size, device):
        self.bilstm.hidden = self.bilstm.init_hidden(batch_size, device)

    def forward(self, batch_input, batch_input_lens, batch_mask):
        feats = self.bilstm(batch_input, batch_input_lens, batch_mask)
        score, path = self.CRF.viterbi(feats, batch_input_lens)
        return path

    def neg_log_likelihood(self, batch_input, batch_input_lens, batch_mask, batch_target):
        feats = self.bilstm(batch_input, batch_input_lens, batch_mask)
        gold_score = self.CRF.score_sentence(feats, batch_target)
        forward_score = self.CRF.score_z(feats, batch_input_lens)
        return forward_score - gold_score

    def predict(self, batch_input, batch_input_lens, batch_mask):
        return self(batch_input, batch_input_lens, batch_mask)
The training cell:
def prepare_sequence(seq, to_ix, device):
    idxs = [to_ix[w] for w in seq]
    return torch.tensor(idxs, dtype=torch.long).to(device)

def prepare_labels(lab, tag_to_ix, device):
    idxs = [tag_to_ix[w] for w in lab]
    return torch.tensor(idxs, dtype=torch.long).to(device)

class PadSequence:
    def __call__(self, batch):
        device = torch.device('cuda')
        # Let's assume that each element in "batch" is a tuple (data, label).
        # Sort the batch in the descending order
        sorted_batch = sorted(batch, key=lambda x: len(x[0]), reverse=True)
        # Get each sequence and pad it
        sequences = [x[0] for x in sorted_batch]
        sentence_in = [prepare_sequence(x, word_to_ix, device) for x in sequences]
        sequences_padded = torch.nn.utils.rnn.pad_sequence(
            sentence_in, padding_value=len(word_to_ix) + 1, batch_first=True).to(device)
        lengths = torch.LongTensor([len(x) for x in sequences]).to(device)
        masks = [True if index_word != len(word_to_ix) + 1 else False
                 for sentence in sequences_padded for index_word in sentence]
        labels = [x[1] for x in sorted_batch]
        labels_in = [prepare_sequence(x, tag_to_ix, device) for x in labels]
        return sequences_padded, lengths, masks, labels_in
{ .... code to get the data formatted...}
from torch.utils.data import DataLoader
from tqdm import tqdm
import torch.optim as optim

device = torch.device("cuda")
batch_size = 64
START_TAG = "<START>"
STOP_TAG = "<STOP>"
EMBEDDING_DIM = 200
HIDDEN_DIM = 20
NUM_LAYER = 3
BIDIRECTIONNAL = True
DROPOUT = 0.1
train_iter = DataLoader(dataset=training_data, collate_fn=PadSequence(), batch_size=64, shuffle=True)
model = BiLSTM_CRF(len(word_to_ix), tagset, EMBEDDING_DIM, HIDDEN_DIM, NUM_LAYER, BIDIRECTIONNAL, DROPOUT, START_TAG, STOP_TAG, device ).to(device)
optimizer = optim.SGD(model.parameters(), lr=0.01, weight_decay=1e-4)
model.init_hidden(batch_size, device)
with tqdm(total=len(train_iter)) as progress_bar:
    for batch_info in train_iter:
        batch_input, batch_input_lens, batch_mask, batch_target = batch_info
        loss_train = model.neg_log_likelihood(batch_input, batch_input_lens, batch_mask, batch_target)
        optimizer.zero_grad()
        loss_train.backward()
        optimizer.step()
        progress_bar.update(1)  # update progress
Best Answer
In the PadSequence function (used as collate_fn, which gathers samples and builds batches out of them), you are explicitly casting to the cuda device, namely:
class PadSequence:
    def __call__(self, batch):
        device = torch.device('cuda')
        # Left rest of the code for brevity
        ...
        lengths = torch.LongTensor([len(x) for x in sequences]).to(device)
        ...
        return sequences_padded, lengths, masks, labels_in
You don't need to cast your data when creating the batch; we usually do that right before pushing the examples through the neural network. At a minimum, your device should be:
device = torch.device('cuda' if torch.cuda.is_available() else "cpu")
Or, even better, select the device for you/the user in the part of your code where you set everything up.
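As a sketch of what that can look like here, assuming the rest of the question's code stays unchanged (prepare_sequence, word_to_ix and tag_to_ix are the question's own helpers and lookup tables): build the whole batch on the CPU, keep the lengths on the CPU, and move only what the model consumes onto the GPU inside the training loop.

class PadSequence:
    def __call__(self, batch):
        cpu = torch.device('cpu')
        # Build the whole batch on the CPU; the training loop picks the device.
        sorted_batch = sorted(batch, key=lambda x: len(x[0]), reverse=True)
        sequences = [x[0] for x in sorted_batch]
        sentence_in = [prepare_sequence(x, word_to_ix, cpu) for x in sequences]
        sequences_padded = torch.nn.utils.rnn.pad_sequence(
            sentence_in, padding_value=len(word_to_ix) + 1, batch_first=True)
        # The lengths stay on the CPU -- exactly what pack_padded_sequence expects.
        lengths = torch.LongTensor([len(x) for x in sequences])
        masks = [True if index_word != len(word_to_ix) + 1 else False
                 for sentence in sequences_padded for index_word in sentence]
        labels = [x[1] for x in sorted_batch]
        labels_in = [prepare_sequence(x, tag_to_ix, cpu) for x in labels]
        return sequences_padded, lengths, masks, labels_in

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
for batch_info in train_iter:
    batch_input, batch_input_lens, batch_mask, batch_target = batch_info
    batch_input = batch_input.to(device)                # inputs move to the GPU here
    batch_target = [t.to(device) for t in batch_target]
    # batch_input_lens deliberately stays on the CPU.
    loss_train = model.neg_log_likelihood(batch_input, batch_input_lens,
                                          batch_mask, batch_target)
    optimizer.zero_grad()
    loss_train.backward()
    optimizer.step()

As a side benefit, a collate_fn that returns CPU tensors also plays well with num_workers > 0, since returning CUDA tensors from DataLoader worker processes is discouraged.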
Regarding "python - PyTorch with CUDA throws RuntimeError when using pack_padded_sequence", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/68086528/