
python - Adding a tagger to a blank English spaCy pipeline


I'm having a hard time figuring out how to assemble a spaCy pipeline piece by piece from the built-in models in spaCy v3. I have downloaded the en_core_web_sm model and can load it with nlp = spacy.load("en_core_web_sm"). Processing sample text that way works fine.

What I want now is to build an English pipeline from blank and add components to it piece by piece. I do not want to load the whole en_core_web_sm pipeline and exclude components. To be concrete, let's say I only want the spaCy default tagger in the pipeline. The documentation suggests to me that

import spacy

from spacy.pipeline.tagger import DEFAULT_TAGGER_MODEL
config = {"model": DEFAULT_TAGGER_MODEL}

nlp = spacy.blank("en")
nlp.add_pipe("tagger", config=config)
nlp("This is some sample text.")

should work. However, I get an error related to hashembed:

Traceback (most recent call last):
File "/home/valentin/miniconda3/envs/eval/lib/python3.8/site-packages/spacy/language.py", line 1000, in __call__
doc = proc(doc, **component_cfg.get(name, {}))
File "spacy/pipeline/trainable_pipe.pyx", line 56, in spacy.pipeline.trainable_pipe.TrainablePipe.__call__
File "/home/valentin/miniconda3/envs/eval/lib/python3.8/site-packages/spacy/util.py", line 1507, in raise_error
raise e
File "spacy/pipeline/trainable_pipe.pyx", line 52, in spacy.pipeline.trainable_pipe.TrainablePipe.__call__
File "spacy/pipeline/tagger.pyx", line 111, in spacy.pipeline.tagger.Tagger.predict
File "/home/valentin/miniconda3/envs/eval/lib/python3.8/site-packages/thinc/model.py", line 315, in predict
return self._func(self, X, is_train=False)[0]
File "/home/valentin/miniconda3/envs/eval/lib/python3.8/site-packages/thinc/layers/chain.py", line 54, in forward
Y, inc_layer_grad = layer(X, is_train=is_train)
File "/home/valentin/miniconda3/envs/eval/lib/python3.8/site-packages/thinc/model.py", line 291, in __call__
return self._func(self, X, is_train=is_train)
File "/home/valentin/miniconda3/envs/eval/lib/python3.8/site-packages/thinc/layers/chain.py", line 54, in forward
Y, inc_layer_grad = layer(X, is_train=is_train)
File "/home/valentin/miniconda3/envs/eval/lib/python3.8/site-packages/thinc/model.py", line 291, in __call__
return self._func(self, X, is_train=is_train)
File "/home/valentin/miniconda3/envs/eval/lib/python3.8/site-packages/thinc/layers/chain.py", line 54, in forward
Y, inc_layer_grad = layer(X, is_train=is_train)
File "/home/valentin/miniconda3/envs/eval/lib/python3.8/site-packages/thinc/model.py", line 291, in __call__
return self._func(self, X, is_train=is_train)
File "/home/valentin/miniconda3/envs/eval/lib/python3.8/site-packages/thinc/layers/with_array.py", line 30, in forward
return _ragged_forward(
File "/home/valentin/miniconda3/envs/eval/lib/python3.8/site-packages/thinc/layers/with_array.py", line 90, in _ragged_forward
Y, get_dX = layer(Xr.dataXd, is_train)
File "/home/valentin/miniconda3/envs/eval/lib/python3.8/site-packages/thinc/model.py", line 291, in __call__
return self._func(self, X, is_train=is_train)
File "/home/valentin/miniconda3/envs/eval/lib/python3.8/site-packages/thinc/layers/concatenate.py", line 44, in forward
Ys, callbacks = zip(*[layer(X, is_train=is_train) for layer in model.layers])
File "/home/valentin/miniconda3/envs/eval/lib/python3.8/site-packages/thinc/layers/concatenate.py", line 44, in <listcomp>
Ys, callbacks = zip(*[layer(X, is_train=is_train) for layer in model.layers])
File "/home/valentin/miniconda3/envs/eval/lib/python3.8/site-packages/thinc/model.py", line 291, in __call__
return self._func(self, X, is_train=is_train)
File "/home/valentin/miniconda3/envs/eval/lib/python3.8/site-packages/thinc/layers/chain.py", line 54, in forward
Y, inc_layer_grad = layer(X, is_train=is_train)
File "/home/valentin/miniconda3/envs/eval/lib/python3.8/site-packages/thinc/model.py", line 291, in __call__
return self._func(self, X, is_train=is_train)
File "/home/valentin/miniconda3/envs/eval/lib/python3.8/site-packages/thinc/layers/hashembed.py", line 61, in forward
vectors = cast(Floats2d, model.get_param("E"))
File "/home/valentin/miniconda3/envs/eval/lib/python3.8/site-packages/thinc/model.py", line 216, in get_param
raise KeyError(
KeyError: "Parameter 'E' for model 'hashembed' has not been allocated yet."


The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/home/valentin/miniconda3/envs/eval/lib/python3.8/site-packages/IPython/core/interactiveshell.py", line 3437, in run_code
exec(code_obj, self.user_global_ns, self.user_ns)
File "<ipython-input-2-8e2b4cf9fd33>", line 8, in <module>
nlp("This is some sample text.")
File "/home/valentin/miniconda3/envs/eval/lib/python3.8/site-packages/spacy/language.py", line 1003, in __call__
raise ValueError(Errors.E109.format(name=name)) from e
ValueError: [E109] Component 'tagger' could not be run. Did you forget to call `initialize()`?

which suggests I should run initialize(). Fine. If I then run nlp.initialize(), I end up with this error:

Traceback (most recent call last):
File "/home/valentin/miniconda3/envs/eval/lib/python3.8/site-packages/IPython/core/interactiveshell.py", line 3437, in run_code
exec(code_obj, self.user_global_ns, self.user_ns)
File "<ipython-input-3-eeec225a68df>", line 1, in <module>
nlp.initialize()
File "/home/valentin/miniconda3/envs/eval/lib/python3.8/site-packages/spacy/language.py", line 1273, in initialize
proc.initialize(get_examples, nlp=self, **p_settings)
File "spacy/pipeline/tagger.pyx", line 271, in spacy.pipeline.tagger.Tagger.initialize
File "spacy/pipeline/pipe.pyx", line 104, in spacy.pipeline.pipe.Pipe._require_labels
ValueError: [E143] Labels for component 'tagger' not initialized. This can be fixed by calling add_label, or by providing a representative batch of examples to the component's `initialize` method.

Now I'm a bit at a loss. Which label examples? Where do I get them from? Why doesn't the default model configuration take care of this? Do I somehow have to tell spaCy to use en_core_web_sm? If so, how do I do that without calling spacy.load("en_core_web_sm") and excluding a whole bunch of things? Thanks for any hints!
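
(For what it's worth, I gather the E143 error itself can be silenced by declaring labels by hand before initializing, roughly like the sketch below with a made-up tag set, but that only leaves me with a blank, untrained tagger, which is not what I'm after.)

import spacy

nlp = spacy.blank("en")
tagger = nlp.add_pipe("tagger")

# declare a label set up front (made-up tags, just to satisfy E143)
for tag in ("NN", "VB", "DT"):
    tagger.add_label(tag)

nlp.initialize()
nlp("This is some sample text.")  # runs, but the tagger is untrained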

EDIT: Ideally, I would like to be able to load only part of a pipeline from a modified config file, e.g. nlp = English.from_config(config). I can't even use the config file that ships with en_core_web_sm, because the resulting pipeline also needs to be initialized, and on nlp.initialize() I now get

Traceback (most recent call last):
File "/home/valentin/miniconda3/envs/eval/lib/python3.8/site-packages/IPython/core/interactiveshell.py", line 3437, in run_code
exec(code_obj, self.user_global_ns, self.user_ns)
File "<ipython-input-67-eeec225a68df>", line 1, in <module>
nlp.initialize()
File "/home/valentin/miniconda3/envs/eval/lib/python3.8/site-packages/spacy/language.py", line 1246, in initialize
I = registry.resolve(config["initialize"], schema=ConfigSchemaInit)
File "/home/valentin/miniconda3/envs/eval/lib/python3.8/site-packages/thinc/config.py", line 727, in resolve
resolved, _ = cls._make(
File "/home/valentin/miniconda3/envs/eval/lib/python3.8/site-packages/thinc/config.py", line 776, in _make
filled, _, resolved = cls._fill(
File "/home/valentin/miniconda3/envs/eval/lib/python3.8/site-packages/thinc/config.py", line 848, in _fill
getter_result = getter(*args, **kwargs)
File "/home/valentin/miniconda3/envs/eval/lib/python3.8/site-packages/spacy/language.py", line 98, in load_lookups_data
lookups = load_lookups(lang=lang, tables=tables)
File "/home/valentin/miniconda3/envs/eval/lib/python3.8/site-packages/spacy/lookups.py", line 30, in load_lookups
raise ValueError(Errors.E955.format(table=", ".join(tables), lang=lang))
ValueError: [E955] Can't find table(s) lexeme_norm for language 'en' in spacy-lookups-data. Make sure you have the package installed or provide your own lookup tables if no default lookups are available for your language.

which suggests it can't find the required lookup tables.

Best Answer

nlp.add_pipe("tagger") adds a new blank/uninitialized tagger, not the tagger from en_core_web_sm or any other pretrained pipeline. If you add the tagger this way, you need to initialize and train it before you can use it.

You can add a component from an existing pipeline using the source option:

nlp = spacy.blank("en")
nlp.add_pipe("tagger", source=spacy.load("en_core_web_sm"))
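
As a quick check that the sourced tagger really arrives initialized (rather than blank), you can inspect the resulting pipeline and its copied label set; this is just a sketch, assuming en_core_web_sm is installed:

print(nlp.pipe_names)                     # ['tagger']
print(nlp.get_pipe("tagger").labels[:5])  # a few of the fine-grained tags copied from the source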

Be aware that the tokenization from spacy.blank("en") may differ from the tokenization the tagger in the source pipeline was trained on. In general (and especially once you move beyond spaCy's pretrained pipelines), you should also make sure the tokenizer settings are the same, and loading the pipeline while excluding components is an easy way to do that, as sketched below.
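
A minimal sketch of the exclude approach, assuming the standard en_core_web_sm component names:

import spacy

# keep the tagger and its shared tok2vec, drop the rest of the pipeline
nlp = spacy.load("en_core_web_sm", exclude=["parser", "attribute_ruler", "lemmatizer", "ner"])
print(nlp.pipe_names)  # ['tok2vec', 'tagger']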

Otherwise, in addition to using nlp.add_pipe(source=), you can also copy the tokenizer settings, shown here for a model like scispacy's en_core_sci_sm, which is a good example of a pipeline whose tokenization differs from spacy.blank("en"):

nlp = spacy.blank("en")
source_nlp = spacy.load("en_core_sci_sm")
nlp.tokenizer.from_bytes(source_nlp.tokenizer.to_bytes())
nlp.add_pipe("tagger", source=source_nlp)
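
To see what the copied settings actually change, you can tokenize the same string with both tokenizers; this is only a sketch, and the exact differences depend on en_core_sci_sm's rules:

text = "The EGFR-mutant samples (n=10) were treated with 150mg/kg."
print([t.text for t in nlp.tokenizer(text)])                # tokenization with the copied scispacy settings
print([t.text for t in spacy.blank("en").tokenizer(text)])  # default English tokenization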

Regarding python - Adding a tagger to a blank English spaCy pipeline, a similar question can be found on Stack Overflow: https://stackoverflow.com/questions/68618759/
