
python spaCy sentence splitter

Reposted | Author: 行者123 | Updated: 2023-12-02 02:38:07

I want to use spaCy to extract sentences from text.

from spacy.lang.en import English

nlp = English()  # just the language with no model
sentencizer = nlp.create_pipe("sentencizer")  # rule-based sentence splitter (spaCy 2.x API)
nlp.add_pipe(sentencizer)

doc = nlp("This is a sentence. This is another sentence.")
for sent in doc.sents:
    print(sent.text)

Is it possible to make the sentencizer more reliable by adding bypass rules, e.g. never splitting a sentence after an abbreviation such as "no."?

Assume, of course, that I have a set of very technical, domain-specific abbreviations. How would you proceed?

Best Answer

You can write a custom pipeline component that changes the default behavior using a rule-based approach to sentence splitting. For example:

import spacy

text = "The formula is no. 45. This num. represents the chemical properties."

nlp = spacy.load("en_core_web_sm")
doc = nlp(text)
print("Before:", [sent.text for sent in doc.sents])

def set_custom_boundaries(doc):
    # Abbreviations after which a following period must not end a sentence
    pattern_a = ['no', 'num']
    for token in doc[:-1]:
        if token.text in pattern_a and doc[token.i + 1].text == '.':
            if token.i + 2 < len(doc):
                # Prevent a sentence boundary right after "no." / "num."
                doc[token.i + 2].is_sent_start = False
    return doc

# Run the custom component before the parser assigns boundaries (spaCy 2.x API)
nlp.add_pipe(set_custom_boundaries, before="parser")
doc = nlp(text)
print("After:", [sent.text for sent in doc.sents])

This gives you the desired sentence splits:

Before: ['The formula is no.', '45.', 'This num.', 'represents the chemical properties.']
After: ['The formula is no. 45.', 'This num. represents the chemical properties.']
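Note that `nlp.add_pipe(set_custom_boundaries, before="parser")` is the spaCy 2.x API. In spaCy 3.x, a custom component must first be registered with the `@Language.component` decorator and is then added by name. A minimal sketch of the same idea, with the component name `"custom_boundaries"` chosen here for illustration, and a model-free `spacy.blank("en")` pipeline plus the rule-based `sentencizer` standing in for `en_core_web_sm`:

```python
import spacy
from spacy.language import Language

# "custom_boundaries" is a name chosen for this sketch, not part of spaCy
@Language.component("custom_boundaries")
def custom_boundaries(doc):
    pattern_a = ['no', 'num']
    for token in doc[:-1]:
        if token.text in pattern_a and doc[token.i + 1].text == '.':
            if token.i + 2 < len(doc):
                # Keep "no. 45" / "num. represents" inside one sentence
                doc[token.i + 2].is_sent_start = False
    return doc

# Model-free pipeline: custom rules first, then the rule-based sentencizer,
# which leaves already-set is_sent_start values untouched
nlp = spacy.blank("en")
nlp.add_pipe("custom_boundaries")
nlp.add_pipe("sentencizer")

text = "The formula is no. 45. This num. represents the chemical properties."
print([sent.text for sent in nlp(text).sents])
```

Because the custom component runs before the sentencizer, its `is_sent_start = False` annotations take precedence over the sentencizer's default punctuation rules.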

Regarding the python spaCy sentence splitter, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/64029623/
