
nlp - How to customize spaCy's tokenizer so it doesn't split phrases described by a regular expression


For example, I want the tokenizer to tokenize "New York" as ['New York'] instead of the default ['New', 'York'].

The documentation suggests adding regular expressions when creating a custom tokenizer.

So I did the following:

import re
import spacy
from spacy.tokenizer import Tokenizer

target = re.compile(r'New York')

def custom_tokenizer(nlp):
    dflt_prefix = nlp.Defaults.prefixes
    dflt_suffix = nlp.Defaults.suffixes
    dflt_infix = nlp.Defaults.infixes

    prefix_re = spacy.util.compile_prefix_regex(dflt_prefix).search
    suffix_re = spacy.util.compile_suffix_regex(dflt_suffix).search
    infix_re = spacy.util.compile_infix_regex(dflt_infix).finditer

    return Tokenizer(nlp.vocab, prefix_search=prefix_re,
                     suffix_search=suffix_re,
                     infix_finditer=infix_re,
                     token_match=target.match)

nlp = spacy.load("en_core_web_sm")
nlp.tokenizer = custom_tokenizer(nlp)
doc = nlp(u"New York")
print([t.text for t in doc])

I used the defaults so that normal tokenization behavior continues except where the function target (the argument to the token_match parameter) returns true.

But I still get ['New', 'York']. Any help is appreciated.

Best Answer

The token_match approach above fails because spaCy's tokenizer first splits the text on whitespace and only then applies token_match to each whitespace-delimited substring, so a pattern containing a space, like "New York", can never match. Instead, use the PhraseMatcher component to identify the phrases you want to treat as single tokens. Then use the doc.retokenize context manager to merge the tokens of each matched phrase into a single token. Finally, wrap the whole process in a custom pipeline component and add that component to your language model.

import spacy
from spacy.lang.en import English
from spacy.matcher import PhraseMatcher
from spacy.tokens import Doc

class MatchRetokenizeComponent:
    def __init__(self, nlp, terms):
        self.terms = terms
        self.matcher = PhraseMatcher(nlp.vocab)
        # Build one pattern Doc per phrase (spaCy v2 signature: add(key, on_match, *docs))
        patterns = [nlp.make_doc(text) for text in terms]
        self.matcher.add("TerminologyList", None, *patterns)
        # Expose the matcher as a custom Doc extension; you should probably set force=False
        Doc.set_extension("phrase_matches", getter=self.matcher, force=True)

    def __call__(self, doc):
        matches = self.matcher(doc)
        with doc.retokenize() as retokenizer:
            for match_id, start, end in matches:
                # Merge each matched span into a single token
                retokenizer.merge(doc[start:end], attrs={"LEMMA": str(doc[start:end])})
        return doc

terms = ["Barack Obama", "Angela Merkel", "Washington, D.C."]

nlp = English()
retokenizer = MatchRetokenizeComponent(nlp, terms)
nlp.add_pipe(retokenizer, name='merge_phrases', last=True)

doc = nlp("German Chancellor Angela Merkel and US President Barack Obama "
"converse in the Oval Office inside the White House in Washington, D.C.")

[tok for tok in doc]

#[German,
# Chancellor,
# Angela Merkel,
# and,
# US,
# President,
# Barack Obama,
# converse,
# in,
# the,
# Oval,
# Office,
# inside,
# the,
# White,
# House,
# in,
# Washington, D.C.]

Edit: merging overlapping spans produced by the PhraseMatcher will actually throw an error. If that is a problem for you, you're better off using the newer EntityRuler, which tries to keep the longest contiguous match. Using entities like this also lets us simplify our custom pipeline component a bit:

class EntityRetokenizeComponent:
    def __init__(self, nlp):
        pass

    def __call__(self, doc):
        with doc.retokenize() as retokenizer:
            for ent in doc.ents:
                # Merge each recognized entity span into a single token
                retokenizer.merge(doc[ent.start:ent.end], attrs={"LEMMA": str(doc[ent.start:ent.end])})
        return doc


from spacy.pipeline import EntityRuler  # EntityRuler lives here in spaCy v2

nlp = English()

ruler = EntityRuler(nlp)

# I don't care about the entity label, so I'm just going to call everything an "ORG"
ruler.add_patterns([{"label": "ORG", "pattern": term} for term in terms])
nlp.add_pipe(ruler)

retokenizer = EntityRetokenizeComponent(nlp)
nlp.add_pipe(retokenizer, name='merge_phrases')
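
To check the pipeline end to end, here is a minimal usage sketch (the demo sentence is my own, and the expected output assumes the terms list defined earlier):

doc = nlp("Angela Merkel spoke with Barack Obama in Washington, D.C.")
print([tok.text for tok in doc])

# Expected output, roughly:
# ['Angela Merkel', 'spoke', 'with', 'Barack Obama', 'in', 'Washington, D.C.']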

Regarding "nlp - How to customize spaCy's tokenizer so it doesn't split phrases described by a regular expression", a similar question was found on Stack Overflow: https://stackoverflow.com/questions/55984036/
