
huggingface-transformers - Adding new tokens to BERT/RoBERTa while preserving the tokenization of adjacent tokens


I am trying to add some new tokens to the BERT and RoBERTa tokenizers so that I can fine-tune the models on the new words. The idea is to fine-tune the models on a limited set of sentences containing the new word, and then see what the models predict about it in other, different contexts, as a way of examining the state of the models' knowledge of certain properties of language.

In order to do this, I would like to add the new tokens and essentially treat them like new ordinary words (that the models simply haven't happened to encounter yet). Once added, they should behave exactly like normal words, except that their entries in the embedding matrix will be randomly initialized and then learned during fine-tuning.
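
For concreteness, the setup I have in mind looks roughly like this (a minimal sketch, not tied to the specific examples below; resize_token_embeddings is the standard way to give the new token a randomly initialized embedding row):

from transformers import BertTokenizer, BertForMaskedLM

# sketch of the intended workflow: add the token, resize the embeddings, then fine-tune
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
model = BertForMaskedLM.from_pretrained('bert-base-uncased')

tokenizer.add_tokens(['mynewword'])
model.resize_token_embeddings(len(tokenizer)) # the new row is randomly initialized
# ... fine-tune on a small set of sentences containing 'mynewword',
# then inspect the model's predictions for it in new contexts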

However, I am running into some problems doing this. In particular, in the case of BERT, the tokens surrounding a newly added token do not behave as expected when the tokenizer is initialized with do_basic_tokenize=False (in the case of RoBERTa, changing this setting does not seem to affect the output in the examples here). The problem can be observed in the following example: for BERT, the period following the newly added token is not tokenized as a subword (i.e., it is tokenized as . instead of the expected ##.), and for RoBERTa, the word following the newly added token is treated as though it has no preceding space (i.e., it is tokenized as a instead of Ġa).

from transformers import BertTokenizer, RobertaTokenizer

new_word = 'mynewword'
bert = BertTokenizer.from_pretrained('bert-base-uncased', do_basic_tokenize = False)
bert.tokenize('mynewword') # does not exist yet
# ['my', '##ne', '##w', '##word']
bert.tokenize('testing.')
# ['testing', '##.']

bert.add_tokens(new_word)
bert.tokenize('mynewword') # now it does
# ['mynewword']
bert.tokenize('mynewword.')
# ['mynewword', '.']

roberta = RobertaTokenizer.from_pretrained('roberta-base', do_basic_tokenize = False)
roberta.tokenize('mynewword') # does not exist yet
# ['my', 'new', 'word']
roberta.tokenize('A testing a')
# ['A', 'Ġtesting', 'Ġa']

roberta.add_tokens(new_word)
roberta.tokenize('mynewword') # now it does
# ['mynewword']
roberta.tokenize('A mynewword a')
# ['A', 'mynewword', 'a']

Is there a way for me to add the new tokens while keeping the behavior of the surrounding tokens the same as it would be without the addition? This feels important because the model could end up learning that (for example) the new token can occur before ., while most other tokens can only occur before ##., which seems likely to affect how it generalizes. In addition, I could enable basic tokenization here to work around the BERT issue, but that would not really reflect the full state of the model's knowledge, since it collapses the distinction between the two tokens. And it does not help with the RoBERTa issue, which persists either way.
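
To illustrate what I mean about basic tokenization collapsing the distinction, here is a small check (a sketch; the commented outputs are what I expect with the default settings):

from transformers import BertTokenizer

# with basic tokenization on (the default), punctuation is split off before WordPiece runs,
# so '.' never shows up as the subword '##.' for any word, old or new
bert_basic = BertTokenizer.from_pretrained('bert-base-uncased')
bert_basic.tokenize('testing.')   # ['testing', '.']
bert_basic.add_tokens('mynewword')
bert_basic.tokenize('mynewword.') # ['mynewword', '.']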

Also, ideally I would be able to add the RoBERTa token as Ġmynewword, but I am assuming that as long as it never occurs as the first word in a sentence, this shouldn't matter.

Best answer

After continuing to poke at this, I seem to have found something that might work. It is not necessarily general, but a tokenizer can be loaded from a vocabulary file (plus a merges file for RoBERTa). If you manually edit those files to add the new tokens in the right way, everything seems to work as expected. Here is an example for BERT:

from transformers import BertTokenizer

bert = BertTokenizer.from_pretrained('bert-base-uncased', do_basic_tokenize=False)
bert.tokenize('testing.') # ['testing', '##.']
bert.tokenize('mynewword') # ['my', '##ne', '##w', '##word']

bert_vocab = bert.get_vocab() # get the pretrained tokenizer's vocabulary
bert_vocab.update({'mynewword' : len(bert_vocab)}) # add the new word to the end

with open('vocab.tmp', 'w', encoding = 'utf-8') as tmp_vocab_file:
    tmp_vocab_file.write('\n'.join(bert_vocab))

new_bert = BertTokenizer(name_or_path = 'bert-base-uncased', vocab_file = 'vocab.tmp', do_basic_tokenize=False)
new_bert.model_max_length = 512 # to match this setting on the pretrained tokenizer

new_bert.tokenize('mynewword') # ['mynewword']
new_bert.tokenize('mynewword.') # ['mynewword', '##.']

import os
os.remove('vocab.tmp') # cleanup
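
Note that if you are loading a model to fine-tune with this edited tokenizer, its embedding matrix also needs a row for the new vocabulary entry. This is not part of the tokenizer fix above, just the standard transformers call, shown here as a sketch:

from transformers import BertForMaskedLM

# sketch: grow the model's embedding matrix to cover the manually added vocabulary entry
model = BertForMaskedLM.from_pretrained('bert-base-uncased')
model.resize_token_embeddings(len(new_bert)) # the new row for 'mynewword' is randomly initialized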

RoBERTa is harder, because we also have to add the pairs to merges.txt. I have a way of doing this that works for the new tokens, but unfortunately it can affect the tokenization of words that are subparts of the new tokens, so it is not perfect. If you are using this to add made-up words (as in my use case), you can just choose strings that are unlikely to cause problems (unlike the example 'mynewword' here), but in other cases it may well cause problems. (While this is not a perfect solution, hopefully it will help someone come up with a better one.)

import re
import json
import requests
from transformers import RobertaTokenizer

roberta = RobertaTokenizer.from_pretrained('roberta-base')
roberta.tokenize('testing a') # ['testing', 'Ġa']
roberta.tokenize('mynewword') # ['my', 'new', 'word']

# update the vocabulary with the new token and the 'Ġ' version
roberta_vocab = roberta.get_vocab()
roberta_vocab.update({'mynewword' : len(roberta_vocab)})
roberta_vocab.update({chr(288) + 'mynewword' : len(roberta_vocab)}) # chr(288) = 'Ġ'
with open('vocab.tmp', 'w', encoding = 'utf-8') as tmp_vocab_file:
    json.dump(roberta_vocab, tmp_vocab_file, ensure_ascii=False)

# get and modify the merges file so that the new token will always be tokenized as a single word
url = 'https://huggingface.co/roberta-base/resolve/main/merges.txt'
roberta_merges = requests.get(url).content.decode().split('\n')

# this is a helper function to loop through a list of new tokens and get the byte-pair encodings
# such that the new token will be treated as a single unit always
def get_roberta_merges_for_new_tokens(new_tokens):
    merges = [gen_roberta_pairs(new_token) for new_token in new_tokens]
    merges = [pair for token in merges for pair in token]
    return merges

def gen_roberta_pairs(new_token, highest = True):
    # highest is used to determine whether we are dealing with the Ġ version or not.
    # we add those pairs at the end, which is only if highest = True

    # this is the hard part...
    chrs = [c for c in new_token] # list of characters in the new token, which we will recursively iterate through to find the BPEs

    # the simplest case: add one pair
    if len(chrs) == 2:
        if not highest:
            return tuple([chrs[0], chrs[1]])
        else:
            return [' '.join([chrs[0], chrs[1]])]

    # add the tokenization of the first letter plus the other two letters as an already merged pair
    if len(chrs) == 3:
        if not highest:
            return tuple([chrs[0], ''.join(chrs[1:])])
        else:
            return gen_roberta_pairs(chrs[1:]) + [' '.join([chrs[0], ''.join(chrs[1:])])]

    if len(chrs) % 2 == 0:
        pairs = gen_roberta_pairs(''.join(chrs[:-2]), highest = False)
        pairs += gen_roberta_pairs(''.join(chrs[-2:]), highest = False)
        pairs += tuple([''.join(chrs[:-2]), ''.join(chrs[-2:])])
        if not highest:
            return pairs
    else:
        # for new tokens with odd numbers of characters, we need to add the final two tokens before the
        # third-to-last token
        pairs = gen_roberta_pairs(''.join(chrs[:-3]), highest = False)
        pairs += gen_roberta_pairs(''.join(chrs[-2:]), highest = False)
        pairs += gen_roberta_pairs(''.join(chrs[-3:]), highest = False)
        pairs += tuple([''.join(chrs[:-3]), ''.join(chrs[-3:])])
        if not highest:
            return pairs

    pairs = tuple(zip(pairs[::2], pairs[1::2]))
    pairs = [' '.join(pair) for pair in pairs]

    # pairs with the preceding special token
    g_pairs = []
    for pair in pairs:
        if re.search(r'^' + ''.join(pair.split(' ')), new_token):
            g_pairs.append(chr(288) + pair)

    pairs = g_pairs + pairs
    pairs = [chr(288) + ' ' + new_token[0]] + pairs

    pairs = list(dict.fromkeys(pairs)) # remove any duplicates

    return pairs

# first line of this file is a comment; add the new pairs after it
roberta_merges = roberta_merges[:1] + get_roberta_merges_for_new_tokens(['mynewword']) + roberta_merges[1:]
roberta_merges = list(dict.fromkeys(roberta_merges))
with open('merges.tmp', 'w', encoding = 'utf-8') as tmp_merges_file:
    tmp_merges_file.write('\n'.join(roberta_merges))

new_roberta = RobertaTokenizer(name_or_path='roberta-base', vocab_file='vocab.tmp', merges_file='merges.tmp')

# for some reason, we have to re-add the <mask> token to roberta if we are using it, since
# loading the tokenizer from a file will cause it to be tokenized as separate parts
# the weight matrix is identical, and once re-added, a fill-mask pipeline still identifies
# the mask token correctly (not shown here)
new_roberta.add_tokens(new_roberta.mask_token, special_tokens=True)
new_roberta.model_max_length = 512

new_roberta.tokenize('mynewword') # ['mynewword']
new_roberta.tokenize('mynewword a') # ['mynewword', 'Ġa']
new_roberta.tokenize(' mynewword') # ['Ġmynewword']

# however, this does not guarantee that tokenization of other words will not be affected
roberta.tokenize('mynew') # ['my', 'new']
new_roberta.tokenize('mynew') # ['myne', 'w']

import os
os.remove('vocab.tmp')
os.remove('merges.tmp') # cleanup
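
To get a rough sense of how much other tokenization is affected, one simple check (not from the original answer, just a suggested probe) is to compare the old and new tokenizers on strings built from pieces of the new token:

# hypothetical probe: report any strings whose tokenization changed after the edit
probes = ['mynew', 'newword', 'my new word', 'A mynewword a']
for text in probes:
    old, new = roberta.tokenize(text), new_roberta.tokenize(text)
    if old != new:
        print(f'{text!r}: {old} -> {new}')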

Regarding huggingface-transformers - adding new tokens to BERT/RoBERTa while preserving the tokenization of adjacent tokens, see the original question on Stack Overflow: https://stackoverflow.com/questions/70255025/
