
python - HuggingFace for Japanese tokenizer


I recently tested the following code, based on the source here:
https://github.com/cl-tohoku/bert-japanese/blob/master/masked_lm_example.ipynb

import torch 
from transformers.tokenization_bert_japanese import BertJapaneseTokenizer
from transformers.modeling_bert import BertForMaskedLM

tokenizer = BertJapaneseTokenizer.from_pretrained('cl-tohoku/bert-base-japanese-whole-word-masking')
model = BertForMaskedLM.from_pretrained('cl-tohoku/bert-base-japanese-whole-word-masking')

input_ids = tokenizer.encode(f'''
青葉山で{tokenizer.mask_token}の研究をしています。
''', return_tensors='pt')
When I try to encode the text, I get an error like this:
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-29-f8582275f4db> in <module>
1 input_ids = tokenizer.encode(f'''
2 青葉山で{tokenizer.mask_token}の研究をしています。
----> 3 ''', return_tensors='pt')

~/.pyenv/versions/3.7.0/envs/personal/lib/python3.7/site-packages/transformers/tokenization_utils_base.py in encode(self, text, text_pair, add_special_tokens, padding, truncation, max_length, stride, return_tensors, **kwargs)
1428 stride=stride,
1429 return_tensors=return_tensors,
-> 1430 **kwargs,
1431 )
1432

~/.pyenv/versions/3.7.0/envs/personal/lib/python3.7/site-packages/transformers/tokenization_utils_base.py in encode_plus(self, text, text_pair, add_special_tokens, padding, truncation, max_length, stride, is_pretokenized, pad_to_multiple_of, return_tensors, return_token_type_ids, return_attention_mask, return_overflowing_tokens, return_special_tokens_mask, return_offsets_mapping, return_length, verbose, **kwargs)
1740 return_length=return_length,
1741 verbose=verbose,
-> 1742 **kwargs,
1743 )
1744

~/.pyenv/versions/3.7.0/envs/personal/lib/python3.7/site-packages/transformers/tokenization_utils.py in _encode_plus(self, text, text_pair, add_special_tokens, padding_strategy, truncation_strategy, max_length, stride, is_pretokenized, pad_to_multiple_of, return_tensors, return_token_type_ids, return_attention_mask, return_overflowing_tokens, return_special_tokens_mask, return_offsets_mapping, return_length, verbose, **kwargs)
452 )
453
--> 454 first_ids = get_input_ids(text)
455 second_ids = get_input_ids(text_pair) if text_pair is not None else None
456

~/.pyenv/versions/3.7.0/envs/personal/lib/python3.7/site-packages/transformers/tokenization_utils.py in get_input_ids(text)
423 def get_input_ids(text):
424 if isinstance(text, str):
--> 425 tokens = self.tokenize(text, **kwargs)
426 return self.convert_tokens_to_ids(tokens)
427 elif isinstance(text, (list, tuple)) and len(text) > 0 and isinstance(text[0], str):

~/.pyenv/versions/3.7.0/envs/personal/lib/python3.7/site-packages/transformers/tokenization_utils.py in tokenize(self, text, **kwargs)
362
363 no_split_token = self.unique_no_split_tokens
--> 364 tokenized_text = split_on_tokens(no_split_token, text)
365 return tokenized_text
366

~/.pyenv/versions/3.7.0/envs/personal/lib/python3.7/site-packages/transformers/tokenization_utils.py in split_on_tokens(tok_list, text)
356 (
357 self._tokenize(token) if token not in self.unique_no_split_tokens else [token]
--> 358 for token in tokenized_text
359 )
360 )

~/.pyenv/versions/3.7.0/envs/personal/lib/python3.7/site-packages/transformers/tokenization_utils.py in <genexpr>(.0)
356 (
357 self._tokenize(token) if token not in self.unique_no_split_tokens else [token]
--> 358 for token in tokenized_text
359 )
360 )

~/.pyenv/versions/3.7.0/envs/personal/lib/python3.7/site-packages/transformers/tokenization_bert_japanese.py in _tokenize(self, text)
153 def _tokenize(self, text):
154 if self.do_word_tokenize:
--> 155 tokens = self.word_tokenizer.tokenize(text, never_split=self.all_special_tokens)
156 else:
157 tokens = [text]

~/.pyenv/versions/3.7.0/envs/personal/lib/python3.7/site-packages/transformers/tokenization_bert_japanese.py in tokenize(self, text, never_split, **kwargs)
205 break
206
--> 207 token, _ = line.split("\t")
208 token_start = text.index(token, cursor)
209 token_end = token_start + len(token)

ValueError: too many values to unpack (expected 2)
Has anyone run into this? I have tried many different approaches and consulted many posts, but they all use the same method and offer no explanation. I just want to test several languages; other languages seem to work fine, but Japanese does not, and I don't know why.

Best answer

I am the mecab-python3 maintainer. Transformers depends on the dictionary that was bundled in pre-1.0 releases, which was removed because it was old. I will add it back as an option in a release soon, but in the meantime you can install an older version.
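If you are not sure which version you have, here is a minimal sketch for checking the installed mecab-python3 via the standard pkg_resources metadata API (only the package name comes from this answer; the check itself is just illustrative):

import pkg_resources

# Print the installed mecab-python3 version; per the explanation above,
# releases >= 1.0 no longer bundle the dictionary that transformers expects.
print(pkg_resources.get_distribution("mecab-python3").version)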
The command posted by vivasra doesn't work because it specifies that version for a different package that doesn't exist (note there is no "3" in the package name). You can use this instead:

pip install mecab-python3==0.996.5
If you still have problems, please open an issue on GitHub.
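After pinning mecab-python3 as above, re-running the encoding from the question should succeed. Below is a minimal sketch under that assumption, reusing the model name and sentence from the question with the top-level transformers imports; the mask-prediction part at the end is only illustrative of what the linked notebook demonstrates.

import torch
from transformers import BertJapaneseTokenizer, BertForMaskedLM

tokenizer = BertJapaneseTokenizer.from_pretrained('cl-tohoku/bert-base-japanese-whole-word-masking')
model = BertForMaskedLM.from_pretrained('cl-tohoku/bert-base-japanese-whole-word-masking')

# The sentence from the question; encode() should no longer raise the ValueError.
input_ids = tokenizer.encode(
    f'青葉山で{tokenizer.mask_token}の研究をしています。',
    return_tensors='pt',
)

with torch.no_grad():
    logits = model(input_ids)[0]  # shape: (batch, seq_len, vocab_size)

# Show the top candidate tokens for the [MASK] position.
mask_index = (input_ids[0] == tokenizer.mask_token_id).nonzero()[0].item()
top_ids = logits[0, mask_index].topk(5).indices.tolist()
print(tokenizer.convert_ids_to_tokens(top_ids))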

Regarding python - HuggingFace for Japanese tokenizer, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/62860717/
