
python - Tokenizing TEI-like text


I'm trying to use spaCy to tokenize a text document in which named entities are wrapped in XML tags, e.g. TEI-like: <personName>Harry</personName> goes to <orgName>Hogwarts</orgName>.

import spacy

nlp = spacy.load('en')
txt = '<personName>Harry</personName> goes to <orgName>Hogwarts</orgName>. <personName>Sally</personName> lives in <locationName>London</locationName>.'
doc = nlp(txt)
sents = list(doc.sents)
for i, s in enumerate(doc.sents):
print("{}: {}".format(i, s))

However, the XML tags cause spurious sentence splits:

0: <personName>
1: Harry</personName> goes to <orgName>
2: Hogwarts</orgName>.
3: <personName>
4: Sally</personName> lives in <
5: locationName>
6: London</locationName>.

How can I get just two sentences? I know spaCy supports a custom tokenizer, but since the rest of the text is standard, I'd like to keep using the built-in one, or extend it to recognize the XML annotations.
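(A quick baseline, not from the original post: stripping the tags before parsing does yield the two sentences, but it discards the entity boundaries, which is what the answer below works to preserve.)

import re
import spacy

nlp = spacy.load('en')
txt = '<personName>Harry</personName> goes to <orgName>Hogwarts</orgName>. <personName>Sally</personName> lives in <locationName>London</locationName>.'

# Drop the XML tags entirely; sentence splitting works, entity info is lost.
plain = re.sub(r'</?[a-zA-Z_]+>', '', txt)
for i, s in enumerate(nlp(plain).sents):
    print("{}: {}".format(i, s))
# 0: Harry goes to Hogwarts.
# 1: Sally lives in London.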

Best Answer

I've managed to do it by counting the tokens and keeping track of which annotations each token carries. It's a bit convoluted, but it gets the job done.

Preparation:

import re
import spacy

pattern = re.compile('</?[a-zA-Z_]+>')
pattern_start = re.compile('<[a-zA-Z_]+>')
pattern_end = re.compile('</[a-zA-Z_]+>')


# xml matches the pattern above
def annotate(xml):
    if xml[1] == '/':
        return (xml[2:-1] + '-end')
    else:
        return (xml[1:-1] + '-start')
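A quick sanity check of annotate (illustrative, not in the original answer):

annotate('<personName>')    # 'personName-start'
annotate('</personName>')   # 'personName-end'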


nlp = spacy.load('en')
txt = '<personName>Harry Potter</personName> goes to \
<orgName>Hogwarts</orgName>. <personName>Sally</personName> \
lives in #<locationName>London</locationName>.'
words = txt.split()
stripped_words = []
# A mapping between token index and its annotations
annotations = {}
all_tokens = []
# A mapping between stripped_words index and whether it's preceded by a space
no_space = {}

Now let's go over the words and check for annotations. We split each word into three parts: prefix, tag, and suffix. E.g. for #<locationName>London</locationName>. these would be #, London, and ., respectively.
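Concretely, the two splits behave like this (an illustrative aside using the patterns defined above):

re.split(pattern_start, '#<locationName>London</locationName>.')
# ['#', 'London</locationName>.']
re.split(pattern_end, 'London</locationName>.')
# ['London', '.']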

for i, w in enumerate(words):
    matches = re.findall(pattern, w)
    w_annotations = []
    if len(matches) > 0:
        for m in matches:
            w_annotations.append(annotate(m))
        splitted_start = re.split(pattern_start, w)
        # TODO: we assume no word contains more than one annotation
        if len(splitted_start) > 1:
            prefix, rest = splitted_start
            if len(prefix) > 0:
                tokens = list(nlp(prefix))
                all_tokens.extend(tokens)
                # The prefix needs a space before it, but the tag itself doesn't
                no_space[len(stripped_words) + 1] = True
                stripped_words.append(prefix)
        else:
            rest = splitted_start[0]
        splitted_end = re.split(pattern_end, rest)
        tag = splitted_end[0]
        stripped_words.append(tag)
        tokens = list(nlp(tag))
        n_tokens = len(all_tokens)
        for j, t in enumerate(tokens):
            annotations[n_tokens + j] = w_annotations
        all_tokens.extend(tokens)
        if len(splitted_end) > 1:
            suffix = splitted_end[1]
            if len(suffix) > 0:
                tokens = list(nlp(suffix))
                all_tokens.extend(tokens)
                no_space[len(stripped_words)] = True
                stripped_words.append(suffix)
    else:
        stripped_words.append(w)
        tokens = list(nlp(w))
        all_tokens.extend(tokens)

Finally, let's print the sentences with their annotations:

stripped_txt = stripped_words[0]
for i, w in enumerate(stripped_words[1:]):
    if (i + 1) in no_space:
        stripped_txt += w
    else:
        stripped_txt += ' ' + w

doc = nlp(stripped_txt)
n_tokens = 0
for i, s in enumerate(doc.sents):
    print("sentence{}: {}".format(i, s))
    for j, t in enumerate(list(s)):
        if n_tokens in annotations:
            anons = annotations[n_tokens]
        else:
            anons = []
        print("\t token{}: {}, annotations: {}".format(n_tokens, t, anons))
        n_tokens += 1

Result:

sentence0: Harry Potter goes to Hogwarts.
    token0: Harry, annotations: ['personName-start']
    token1: Potter, annotations: ['personName-end']
    token2: goes, annotations: []
    token3: to, annotations: []
    token4: Hogwarts, annotations: ['orgName-start', 'orgName-end']
    token5: ., annotations: []
sentence1: Sally lives in #London.
    token6: Sally, annotations: ['personName-start', 'personName-end']
    token7: lives, annotations: []
    token8: in, annotations: []
    token9: #, annotations: []
    token10: London, annotations: ['locationName-start', 'locationName-end']
    token11: ., annotations: []

Full code: https://gist.github.com/dimidd/1aba8b57643d5936f42670f0c5f344e4
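If you want the annotations as spaCy entities rather than a printout, a possible follow-up is sketched below. spans_from_annotations is a hypothetical helper, not part of the answer above, and it assumes spaCy 2.x's Span API:

from spacy.tokens import Span

# Hypothetical helper: fold the token-index -> ['personName-start', ...]
# mapping produced above back into spaCy entity spans.
def spans_from_annotations(doc, annotations):
    spans = []
    open_starts = {}  # label -> token index where the entity opened
    for i in range(len(doc)):
        for a in annotations.get(i, []):
            label, _, kind = a.rpartition('-')
            if kind == 'start':
                open_starts[label] = i
            elif kind == 'end':
                start = open_starts.pop(label, i)
                spans.append(Span(doc, start, i + 1, label=label))
    return spans

doc.ents = spans_from_annotations(doc, annotations)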

Regarding python - Tokenizing TEI-like text, a similar question can be found on Stack Overflow: https://stackoverflow.com/questions/49733653/
