This training code is based on the run_glue.py script found here:
import random
import time

import numpy as np
import torch

# (`model`, `optimizer`, `scheduler`, `device`, `epochs`, `train_dataloader`,
# `validation_dataloader`, `format_time` and `flat_accuracy` are defined
# earlier in the script.)

# Set the seed value all over the place to make this reproducible.
seed_val = 42
random.seed(seed_val)
np.random.seed(seed_val)
torch.manual_seed(seed_val)
torch.cuda.manual_seed_all(seed_val)

# Store the average loss after each epoch so we can plot them.
loss_values = []

# For each epoch...
for epoch_i in range(0, epochs):

    # ========================================
    #               Training
    # ========================================

    # Perform one full pass over the training set.
    print("")
    print('======== Epoch {:} / {:} ========'.format(epoch_i + 1, epochs))
    print('Training...')

    # Measure how long the training epoch takes.
    t0 = time.time()

    # Reset the total loss for this epoch.
    total_loss = 0

    # Put the model into training mode. Don't be misled--the call to
    # `train` just changes the *mode*, it doesn't *perform* the training.
    # `dropout` and `batchnorm` layers behave differently during training
    # vs. test (source: https://stackoverflow.com/questions/51433378/what-does-model-train-do-in-pytorch)
    model.train()

    # For each batch of training data...
    for step, batch in enumerate(train_dataloader):

        # Progress update every 100 batches.
        if step % 100 == 0 and not step == 0:
            # Calculate elapsed time in minutes.
            elapsed = format_time(time.time() - t0)
            # Report progress.
            print('  Batch {:>5,} of {:>5,}. Elapsed: {:}.'.format(step, len(train_dataloader), elapsed))

        # Unpack this training batch from our dataloader.
        #
        # As we unpack the batch, we'll also copy each tensor to the GPU using the
        # `to` method.
        #
        # `batch` contains three pytorch tensors:
        #   [0]: input ids
        #   [1]: attention masks
        #   [2]: labels
        b_input_ids = batch[0].to(device)
        b_input_mask = batch[1].to(device)
        b_labels = batch[2].to(device)

        # Always clear any previously calculated gradients before performing a
        # backward pass. PyTorch doesn't do this automatically because
        # accumulating the gradients is "convenient while training RNNs".
        # (source: https://stackoverflow.com/questions/48001598/why-do-we-need-to-call-zero-grad-in-pytorch)
        model.zero_grad()

        # Perform a forward pass (evaluate the model on this training batch).
        # This will return the loss (rather than the model output) because we
        # have provided the `labels`.
        # The documentation for this `model` function is here:
        # https://huggingface.co/transformers/v2.2.0/model_doc/bert.html#transformers.BertForSequenceClassification
        outputs = model(b_input_ids,
                        token_type_ids=None,
                        attention_mask=b_input_mask,
                        labels=b_labels)

        # The call to `model` always returns a tuple, so we need to pull the
        # loss value out of the tuple.
        loss = outputs[0]

        # Accumulate the training loss over all of the batches so that we can
        # calculate the average loss at the end. `loss` is a Tensor containing a
        # single value; the `.item()` function just returns the Python value
        # from the tensor.
        total_loss += loss.item()

        # Perform a backward pass to calculate the gradients.
        loss.backward()

        # Clip the norm of the gradients to 1.0.
        # This is to help prevent the "exploding gradients" problem.
        torch.nn.utils.clip_grad_norm_(model.parameters(), 1.0)

        # Update parameters and take a step using the computed gradient.
        # The optimizer dictates the "update rule"--how the parameters are
        # modified based on their gradients, the learning rate, etc.
        optimizer.step()

        # Update the learning rate.
        scheduler.step()

    # Calculate the average loss over the training data.
    avg_train_loss = total_loss / len(train_dataloader)

    # Store the loss value for plotting the learning curve.
    loss_values.append(avg_train_loss)

    print("")
    print("  Average training loss: {0:.2f}".format(avg_train_loss))
    print("  Training epoch took: {:}".format(format_time(time.time() - t0)))

    # ========================================
    #               Validation
    # ========================================

    # After the completion of each training epoch, measure our performance on
    # our validation set.
    print("")
    print("Running Validation...")
    t0 = time.time()

    # Put the model in evaluation mode--the dropout layers behave differently
    # during evaluation.
    model.eval()

    # Tracking variables
    eval_loss, eval_accuracy = 0, 0
    nb_eval_steps, nb_eval_examples = 0, 0

    # Evaluate data for one epoch
    for batch in validation_dataloader:

        # Add batch to GPU
        batch = tuple(t.to(device) for t in batch)

        # Unpack the inputs from our dataloader
        b_input_ids, b_input_mask, b_labels = batch

        # Telling the model not to compute or store gradients, saving memory and
        # speeding up validation
        with torch.no_grad():
            # Forward pass, calculate logit predictions.
            # This will return the logits rather than the loss because we have
            # not provided labels.
            # token_type_ids is the same as the "segment ids", which
            # differentiates sentence 1 and 2 in 2-sentence tasks.
            # The documentation for this `model` function is here:
            # https://huggingface.co/transformers/v2.2.0/model_doc/bert.html#transformers.BertForSequenceClassification
            outputs = model(b_input_ids,
                            token_type_ids=None,
                            attention_mask=b_input_mask)

        # Get the "logits" output by the model. The "logits" are the output
        # values prior to applying an activation function like the softmax.
        logits = outputs[0]

        # Move logits and labels to CPU
        logits = logits.detach().cpu().numpy()
        label_ids = b_labels.to('cpu').numpy()

        # Calculate the accuracy for this batch of test sentences.
        tmp_eval_accuracy = flat_accuracy(logits, label_ids)

        # Accumulate the total accuracy.
        eval_accuracy += tmp_eval_accuracy

        # Track the number of batches
        nb_eval_steps += 1

    # Report the final accuracy for this validation run.
    print("  Accuracy: {0:.2f}".format(eval_accuracy / nb_eval_steps))
    print("  Validation took: {:}".format(format_time(time.time() - t0)))

print("")
print("Training complete!")
While running text-classification training with this BERT model, I hit the error below.
~/anaconda3/lib/python3.7/site-packages/torch/nn/modules/sparse.py in forward(self, input)
112 return F.embedding(
113 input, self.weight, self.padding_idx, self.max_norm,
--> 114 self.norm_type, self.scale_grad_by_freq, self.sparse)
115
116 def extra_repr(self):
~/anaconda3/lib/python3.7/site-packages/torch/nn/functional.py in embedding(input, weight, padding_idx, max_norm, norm_type, scale_grad_by_freq, sparse)
1722 # remove once script supports set_grad_enabled
1723 _no_grad_embedding_renorm_(weight, input, max_norm, norm_type)
-> 1724 return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse)
1725
1726
IndexError: index out of range in self
How can I fix this?
Best Answer
I think you have mixed up the input dimension declared for torch.nn.Embedding and the inputs you are feeding it. torch.nn.Embedding is a simple lookup table that stores embeddings of a fixed dictionary and size. Any input index that is less than zero, or greater than or equal to the declared input dimension (the dictionary size), raises exactly this error. Compare your inputs against the dimensions you passed to torch.nn.Embedding.
Here is a code snippet that reproduces the problem:
import torch
from torch import nn

input_dim = 10
embedding_dim = 2
embedding = nn.Embedding(input_dim, embedding_dim)

err = True
if err:
    # Any input greater than input_dim - 1 (here input_dim = 10),
    # or any input less than zero, triggers the IndexError.
    input_to_embed = torch.tensor([10])
else:
    input_to_embed = torch.tensor([0, 1, 2, 3, 4, 5, 6, 7, 8, 9])
embed = embedding(input_to_embed)
print(embed)
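With err = True, the lookup of index 10 fails with IndexError: index out of range in self, the same error as in the traceback above; with err = False, it prints a 10x2 tensor of embeddings.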
Hope this solves your problem.
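To apply the same check to the BERT training loop in the question, you can scan the token ids before training. The sketch below is mine, not part of the original answer; it assumes the `model` and `train_dataloader` variables from the question, with a Hugging Face transformers model:

# Minimal diagnostic sketch (an assumption, not from the original answer):
# verify that every token id produced by the dataloader is a valid row
# index into the model's word-embedding table.
embedding_size = model.get_input_embeddings().num_embeddings  # for BERT, config.vocab_size

for step, batch in enumerate(train_dataloader):
    b_input_ids = batch[0]
    min_id, max_id = b_input_ids.min().item(), b_input_ids.max().item()
    if min_id < 0 or max_id >= embedding_size:
        print('Batch {}: token ids span [{}, {}], but the embedding table '
              'only has {} rows.'.format(step, min_id, max_id, embedding_size))

If this reports a mismatch, the usual causes are a tokenizer and model loaded from different checkpoints, or tokens added to the tokenizer without a matching call to model.resize_token_embeddings(len(tokenizer)).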
Regarding python - PyTorch: IndexError: index out of range in self. How to fix?, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/62081155/