python - Changing a tokenizer's vocabulary entries

I have some text that I want to run NLP on. For that, I downloaded a pretrained tokenizer like so:

import transformers as ts

pr_tokenizer = ts.AutoTokenizer.from_pretrained('distilbert-base-uncased', cache_dir='tmp')

Then I create my own tokenizer from my data, like this:

from tokenizers import Tokenizer
from tokenizers.models import BPE
tokenizer = Tokenizer(BPE(unk_token="[UNK]"))

from tokenizers.trainers import BpeTrainer
trainer = BpeTrainer(special_tokens=["[UNK]", "[CLS]", "[SEP]", "[PAD]", "[MASK]"])

from tokenizers.pre_tokenizers import Whitespace
tokenizer.pre_tokenizer = Whitespace()

tokenizer.train(['transcripts.raw'], trainer)
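
For context, the trained tokenizer's vocabulary can be inspected with get_vocab(), which returns a plain dict mapping token strings to integer ids:

# get_vocab() returns a {token: id} dict for the trained BPE tokenizer
v = tokenizer.get_vocab()
print(len(v))                 # vocabulary size
print(list(v.items())[:5])    # a few (token, id) pairs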

Now comes the part that confuses me... I need to update the entries in the pretrained tokenizer (pr_tokenizer) whose keys are the same as in my tokenizer (tokenizer). I have tried several methods; here is one of them:

new_vocab = pr_tokenizer.vocab
v = tokenizer.get_vocab()

for i in v:
    if i in new_vocab:
        new_vocab[i] = v[i]

So what do I do now? I was thinking of something like:

pr_tokenizer.vocab.update(new_vocab)

or:

pr_tokenizer.vocab = new_vocab

Neither works. Does anyone know a good way of doing this?

Best Answer

To do this, you can simply download the tokenizer's files from GitHub or the HuggingFace website, put them in the same folder as your code, and then edit the vocabulary before loading the tokenizer. (Modifying pr_tokenizer.vocab in place doesn't work because, for a fast tokenizer, vocab is recomputed from the backend on each access rather than being the backing store.)

new_vocab = {}

# Getting the pretrained vocabulary entries: one token per line, the line number is the id
for i, row in enumerate(open('./distilbert-base-uncased/vocab.txt', 'r')):
    new_vocab[row[:-1]] = i

# your vocabulary entries
v = tokenizer.get_vocab()

# replace common entries (your code)
for i in v:
    if i in new_vocab:
        new_vocab[i] = v[i]

# write the merged vocabulary back to vocab.txt so from_pretrained picks it up
with open('./distilbert-base-uncased/vocab.txt', 'w') as f:
    # reversed vocabulary: id -> token
    rev_vocab = {j: i for i, j in new_vocab.items()}
    # write the tokens back in id order; the merge can leave gaps, so skip missing ids
    for i in range(len(rev_vocab)):
        if i not in rev_vocab:
            continue
        f.write(rev_vocab[i] + '\n')

# loading the new tokenizer
pr_tokenizer = ts.AutoTokenizer.from_pretrained('./distilbert-base-uncased')
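
As a quick sanity check (a sketch; 'hello' is just a placeholder for any token present in both vocabularies), the reloaded tokenizer should now report the ids from your trained tokenizer for the shared tokens:

tok = 'hello'  # placeholder token; pick one that exists in both vocabularies
print(pr_tokenizer.convert_tokens_to_ids(tok))   # id read from the edited vocab.txt
print(tokenizer.get_vocab().get(tok))            # id from your trained tokenizer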

Regarding python - changing a tokenizer's vocabulary entries, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/69780823/
