
python - How does the spaCy lemmatizer work?

Repost · Author: 太空狗 · Updated: 2023-10-29 17:56:40

For lemmatization, spaCy has lists of words: adjectives, adverbs, verbs... and also lists for exception cases: adverbs_irreg... For the regular ones there is a set of rules.

Let's take the word "wider" as an example.

Since it is an adjective, the rule for lemmatization should be taken from this list:

    ADJECTIVE_RULES = [
        ["er", ""],
        ["est", ""],
        ["er", "e"],
        ["est", "e"]
    ]
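Applied mechanically, these suffix rules produce several candidate forms from the same word. A minimal sketch (not spaCy's actual code) of what rule application alone yields for "wider":

```python
# Sketch: apply each suffix rule to a word and collect every candidate.
ADJECTIVE_RULES = [
    ["er", ""],
    ["est", ""],
    ["er", "e"],
    ["est", "e"],
]

def apply_rules(string, rules):
    """Return every candidate form produced by the suffix rules."""
    candidates = []
    for old, new in rules:
        if string.endswith(old):
            # strip the old suffix, append the new one
            candidates.append(string[:len(string) - len(old)] + new)
    return candidates

print(apply_rules("wider", ADJECTIVE_RULES))  # ['wid', 'wide']
```

Note that the rules alone cannot choose between "wid" and "wide"; that choice is what the question below is about.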

As I understand it, the process would be like this:

1) Get the POS tag of the word to know whether it is a noun, a verb...
2) If the word is in the list of irregular cases, it is replaced directly; if not, one of the rules is applied.

Now, how is it decided to use "er" -> "e" instead of "er" -> "" to get "wide" and not "wid"?

Here it can be tested.

Best Answer

Let's start with the class definition: https://github.com/explosion/spaCy/blob/develop/spacy/lemmatizer.py

The class

It starts off with initializing 3 variables:

    class Lemmatizer(object):
        @classmethod
        def load(cls, path, index=None, exc=None, rules=None):
            return cls(index or {}, exc or {}, rules or {})

        def __init__(self, index, exceptions, rules):
            self.index = index
            self.exc = exceptions
            self.rules = rules
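To make the roles of the three tables concrete, here is a sketch wiring them together with toy data (the entries below are illustrative; the real tables are the WordNet-derived index/exception/rule files discussed next):

```python
# Sketch of the Lemmatizer's three tables, populated with toy data.
class Lemmatizer(object):
    def __init__(self, index, exceptions, rules):
        self.index = index      # known valid lemmas, keyed by POS
        self.exc = exceptions   # irregular form -> lemma(s), keyed by POS
        self.rules = rules      # suffix rewrite rules, keyed by POS

toy = Lemmatizer(
    index={"adj": {"wide", "bad"}},
    exceptions={"adj": {"worse": ["bad"]}},  # irregular comparative
    rules={"adj": [["er", ""], ["est", ""], ["er", "e"], ["est", "e"]]},
)
print(toy.exc["adj"]["worse"])  # ['bad']
```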

Now, looking at self.exc for English, we see that it points to https://github.com/explosion/spaCy/tree/develop/spacy/lang/en/lemmatizer/init.py, where it loads the files from the directory https://github.com/explosion/spaCy/tree/master/spacy/en/lemmatizer

Why doesn't spaCy just read the files?

Most probably because declaring the strings in code is faster than streaming strings through I/O.

Where do these indices, exceptions and rules come from?

Looking closely, they all seem to come from the original Princeton WordNet: https://wordnet.princeton.edu/man/wndb.5WN.html

Rules

Looking even closer, the rules in https://github.com/explosion/spaCy/tree/develop/spacy/lang/en/lemmatizer/_lemma_rules.py are similar to the _morphy rules from nltk: https://github.com/nltk/nltk/blob/develop/nltk/corpus/reader/wordnet.py#L1749

And these rules originally come from the Morphy software: https://wordnet.princeton.edu/man/morphy.7WN.html

Additionally, spacy includes some punctuation rules that are not from Princeton Morphy:

    PUNCT_RULES = [
        ["“", "\""],
        ["”", "\""],
        ["\u2018", "'"],
        ["\u2019", "'"]
    ]
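These punctuation rules simply rewrite curly quotes to their ASCII equivalents. A minimal sketch of that normalization (not spaCy's actual code path):

```python
# Sketch: the punctuation rules map typographic quotes to ASCII ones.
PUNCT_RULES = [
    ["“", "\""],
    ["”", "\""],
    ["\u2018", "'"],   # left single curly quote
    ["\u2019", "'"],   # right single curly quote
]

def normalize_punct(token):
    """Return the ASCII equivalent of a curly-quote token, else the token."""
    for old, new in PUNCT_RULES:
        if token == old:
            return new
    return token

print(normalize_punct("“"))  # "
print(normalize_punct("a"))  # a
```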

Exceptions

As for the exceptions, they are stored in the *_irreg.py files in spaCy, and they too look like they come from the Princeton WordNet.

It becomes evident if we look at some mirror of the original WordNet .exc (exclusion) files (e.g. https://github.com/extjwnl/extjwnl-data-wn21/blob/master/src/main/resources/net/sf/extjwnl/data/wordnet/wn21/adj.exc), and if you download the wordnet package from nltk, we see that it is the same list:
alvas@ubi:~/nltk_data/corpora/wordnet$ ls
adj.exc cntlist.rev data.noun index.adv index.verb noun.exc
adv.exc data.adj data.verb index.noun lexnames README
citation.bib data.adv index.adj index.sense LICENSE verb.exc
alvas@ubi:~/nltk_data/corpora/wordnet$ wc -l adj.exc
1490 adj.exc
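The format of these .exc files is one entry per line: the inflected form followed by one or more lemmas, whitespace-separated. A sketch of how such a file maps to the exception dict that spaCy stores in its *_irreg.py files (the sample lines are real adj.exc-style entries):

```python
# Sketch: parse WordNet .exc lines ("inflected lemma [lemma ...]")
# into an exception dict of the kind spaCy keeps in *_irreg.py.
def parse_exc(lines):
    exc = {}
    for line in lines:
        parts = line.split()
        if parts:
            exc[parts[0]] = parts[1:]  # inflected form -> list of lemmas
    return exc

sample = ["better good", "best good", "worse bad"]
print(parse_exc(sample))
# {'better': ['good'], 'best': ['good'], 'worse': ['bad']}
```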

Index

If we look at the spacy lemmatizer index, we see that it also comes from WordNet, e.g. https://github.com/explosion/spaCy/tree/develop/spacy/lang/en/lemmatizer/_adjectives.py and the re-distributed copy of wordnet in nltk:
alvas@ubi:~/nltk_data/corpora/wordnet$ head -n40 data.adj 

1 This software and database is being provided to you, the LICENSEE, by
2 Princeton University under the following license. By obtaining, using
3 and/or copying this software and database, you agree that you have
4 read, understood, and will comply with these terms and conditions.:
5
6 Permission to use, copy, modify and distribute this software and
7 database and its documentation for any purpose and without fee or
8 royalty is hereby granted, provided that you agree to comply with
9 the following copyright notice and statements, including the disclaimer,
10 and that the same appear on ALL copies of the software, database and
11 documentation, including modifications that you make for internal
12 use or for distribution.
13
14 WordNet 3.0 Copyright 2006 by Princeton University. All rights reserved.
15
16 THIS SOFTWARE AND DATABASE IS PROVIDED "AS IS" AND PRINCETON
17 UNIVERSITY MAKES NO REPRESENTATIONS OR WARRANTIES, EXPRESS OR
18 IMPLIED. BY WAY OF EXAMPLE, BUT NOT LIMITATION, PRINCETON
19 UNIVERSITY MAKES NO REPRESENTATIONS OR WARRANTIES OF MERCHANT-
20 ABILITY OR FITNESS FOR ANY PARTICULAR PURPOSE OR THAT THE USE
21 OF THE LICENSED SOFTWARE, DATABASE OR DOCUMENTATION WILL NOT
22 INFRINGE ANY THIRD PARTY PATENTS, COPYRIGHTS, TRADEMARKS OR
23 OTHER RIGHTS.
24
25 The name of Princeton University or Princeton may not be used in
26 advertising or publicity pertaining to distribution of the software
27 and/or database. Title to copyright in this software, database and
28 any associated documentation shall at all times remain with
29 Princeton University and LICENSEE agrees to preserve same.
00001740 00 a 01 able 0 005 = 05200169 n 0000 = 05616246 n 0000 + 05616246 n 0101 + 05200169 n 0101 ! 00002098 a 0101 | (usually followed by `to') having the necessary means or skill or know-how or authority to do something; "able to swim"; "she was able to program her computer"; "we were at last able to buy a car"; "able to get a grant for the project"
00002098 00 a 01 unable 0 002 = 05200169 n 0000 ! 00001740 a 0101 | (usually followed by `to') not having the necessary means or skill or know-how; "unable to get to town without a car"; "unable to obtain funds"
00002312 00 a 02 abaxial 0 dorsal 4 002 ;c 06037666 n 0000 ! 00002527 a 0101 | facing away from the axis of an organ or organism; "the abaxial surface of a leaf is the underside or side facing away from the stem"
00002527 00 a 02 adaxial 0 ventral 4 002 ;c 06037666 n 0000 ! 00002312 a 0101 | nearest to or facing toward the axis of an organ or organism; "the upper side of a leaf is known as the adaxial surface"
00002730 00 a 01 acroscopic 0 002 ;c 06066555 n 0000 ! 00002843 a 0101 | facing or on the side toward the apex
00002843 00 a 01 basiscopic 0 002 ;c 06066555 n 0000 ! 00002730 a 0101 | facing or on the side toward the base
00002956 00 a 02 abducent 0 abducting 0 002 ;c 06080522 n 0000 ! 00003131 a 0101 | especially of muscles; drawing away from the midline of the body or from an adjacent part
00003131 00 a 03 adducent 0 adductive 0 adducting 0 003 ;c 06080522 n 0000 + 01449236 v 0201 ! 00002956 a 0101 | especially of muscles; bringing together or drawing toward the midline of the body or toward an adjacent part
00003356 00 a 01 nascent 0 005 + 07320302 n 0103 ! 00003939 a 0101 & 00003553 a 0000 & 00003700 a 0000 & 00003829 a 0000 | being born or beginning; "the nascent chicks"; "a nascent insurgency"
00003553 00 s 02 emergent 0 emerging 0 003 & 00003356 a 0000 + 02625016 v 0102 + 00050693 n 0101 | coming into existence; "an emergent republic"
00003700 00 s 01 dissilient 0 002 & 00003356 a 0000 + 07434782 n 0101 | bursting open with force, as do some ripe seed vessels

On the basis that the dictionaries, exceptions and rules that the spacy lemmatizer uses are largely from Princeton WordNet and their Morphy software, we can move on to see how spacy actually applies the rules using the index and exceptions.

We go back to https://github.com/explosion/spaCy/blob/develop/spacy/lemmatizer.py

The main action comes from the function rather than the Lemmatizer class:

    def lemmatize(string, index, exceptions, rules):
        string = string.lower()
        forms = []
        # TODO: Is this correct? See discussion in Issue #435.
        #if string in index:
        #    forms.append(string)
        forms.extend(exceptions.get(string, []))
        oov_forms = []
        for old, new in rules:
            if string.endswith(old):
                form = string[:len(string) - len(old)] + new
                if not form:
                    pass
                elif form in index or not form.isalpha():
                    forms.append(form)
                else:
                    oov_forms.append(form)
        if not forms:
            forms.extend(oov_forms)
        if not forms:
            forms.append(string)
        return set(forms)
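This function also answers the original question about "wider": both "er" -> "" and "er" -> "e" are applied, but "wid" is not in the index, so it only lands in oov_forms, while "wide" is in the index and becomes the returned lemma. Running the function with a toy index (the index contents below are illustrative):

```python
# The lemmatize() function from above, run on "wider" with a toy index.
def lemmatize(string, index, exceptions, rules):
    string = string.lower()
    forms = []
    forms.extend(exceptions.get(string, []))
    oov_forms = []
    for old, new in rules:
        if string.endswith(old):
            form = string[:len(string) - len(old)] + new
            if not form:
                pass
            elif form in index or not form.isalpha():
                forms.append(form)   # "wide" is in the index -> kept
            else:
                oov_forms.append(form)  # "wid" is not -> OOV fallback only
    if not forms:
        forms.extend(oov_forms)
    if not forms:
        forms.append(string)
    return set(forms)

index = {"wide", "nice", "small"}  # toy index of known adjective lemmas
rules = [["er", ""], ["est", ""], ["er", "e"], ["est", "e"]]
print(lemmatize("wider", index, {}, rules))  # {'wide'}
```

So the decision between "wid" and "wide" is made by the index, not by rule ordering.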

Why is the lemmatize method outside of the Lemmatizer class?

I'm not exactly sure, but perhaps it's to ensure that the lemmatization function can be called outside of a class instance. But given that @staticmethod and @classmethod exist, there may be other considerations as to why the function and the class have been decoupled.

Morphy vs spaCy

Comparing the spacy lemmatize() function against the morphy() function in nltk (originally from http://blog.osteele.com/2004/04/pywordnet-20/, created more than a decade ago), morphy(), the main processes in Oliver Steele's Python port of the WordNet morphy are:
  • Check the exception lists
  • Apply the rules once to the input to get y1, y2, y3, etc.
  • Return all that are in the database (and check the original too)
  • If there are no matches, keep applying the rules until we find a match
  • Return an empty list if we can't find anything
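The Morphy steps above can be sketched as follows; this is a simplification (not nltk's actual code), with illustrative substitution rules and a toy database:

```python
# Sketch of the Morphy control flow: check exceptions, then apply the
# suffix rules repeatedly until some form is found in the database.
SUBSTITUTIONS = [("s", ""), ("es", ""), ("er", ""), ("er", "e")]

def morphy_sketch(word, database, exceptions):
    if word in exceptions:
        return exceptions[word]
    forms = [word]          # check the original word too
    seen = set(forms)
    while forms:
        matches = [f for f in forms if f in database]
        if matches:
            return matches  # return everything found in the database
        next_forms = []
        for f in forms:     # keep applying the rules to each candidate
            for old, new in SUBSTITUTIONS:
                if f.endswith(old):
                    g = f[:len(f) - len(old)] + new
                    if g not in seen:
                        seen.add(g)
                        next_forms.append(g)
        forms = next_forms
    return []               # nothing found -> empty list

print(morphy_sketch("wider", {"wide"}, {}))  # ['wide']
```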

  • As for spacy, it might still be under development, given the TODO at https://github.com/explosion/spaCy/blob/develop/spacy/lemmatizer.py#L76

    But the general steps seem to be:
  • Look for the exceptions; get the lemma(s) from the exception list if the word is in it.
  • Apply the rules
  • Save the forms that are in the index lists
  • If there is no lemma from steps 1-3, then just keep track of the out-of-vocabulary (OOV) words and append the original string to the lemma forms
  • Return the lemma forms

  • In terms of OOV handling, spacy returns the original string if no lemmatized form is found; in this respect, the nltk implementation of morphy does the same, e.g.
    >>> from nltk.stem import WordNetLemmatizer
    >>> wnl = WordNetLemmatizer()
    >>> wnl.lemmatize('alvations')
    'alvations'

Checking for infinitives before lemmatization

Another point of difference could be how morphy and spacy decide what POS to assign to the word. In this respect, spacy puts some linguistic rules in the Lemmatizer() to decide whether a word is the base form and skips the lemmatization entirely if the word is already in the infinitive form (is_base_form()). This saves quite a lot if lemmatization is to be done for all words in a corpus, and quite a chunk of them are infinitives (already the lemma form).
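A simplified sketch of the is_base_form() idea (the real spaCy check inspects the universal POS plus morphological features; the feature names and rules below are illustrative, not spaCy's exact logic):

```python
# Sketch: skip lemmatization when the word is already in its base form,
# judged from POS plus morphological features (illustrative rules only).
def is_base_form_sketch(univ_pos, morphology):
    if univ_pos == "verb" and morphology.get("VerbForm") == "inf":
        return True   # infinitive verbs are already the lemma
    if univ_pos == "noun" and morphology.get("Number") == "sing":
        return True   # singular nouns are already the lemma
    return False

print(is_base_form_sketch("verb", {"VerbForm": "inf"}))   # True
print(is_base_form_sketch("verb", {"VerbForm": "part"}))  # False
```

When this check returns True, the rule/index machinery above never runs, which is where the speed-up comes from.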

But that is possible in spacy because it allows the lemmatizer to access the POS that is closely tied to some morphological rules. While for morphy, although it's possible to figure out some morphology using the fine-grained PTB POS tags, it still takes some effort to sort them out to know which forms are infinitives.

In general, 3 primary signals of morphological features need to be teased out of the POS tags:
  • person
  • number
  • gender

Update

After the original answer (May 12 2017), spaCy did make changes to its lemmatizer. I think the purpose was to make lemmatization faster without look-ups and rule processing.

So they pre-lemmatize words and leave them in a lookup hash-table to make the retrieval O(1) for the words they have pre-lemmatized: https://github.com/explosion/spaCy/blob/master/spacy/lang/en/lemmatizer/lookup.py
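The lookup-table approach reduces lemmatization to a dict access, falling back to the original string for OOV words. A minimal sketch (the table entries here are illustrative, not spaCy's actual data):

```python
# Sketch: a pre-lemmatized lookup table gives O(1) retrieval,
# with the original string as the OOV fallback.
LOOKUP = {"wider": "wide", "widest": "wide", "mice": "mouse"}

def lookup_lemmatize(word):
    return LOOKUP.get(word, word)

print(lookup_lemmatize("wider"))      # wide
print(lookup_lemmatize("alvations"))  # alvations (OOV -> unchanged)
```

The trade-off is memory for speed: every inflected form has to be stored explicitly instead of being derived by rules.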

Also, in an effort to unify the lemmatizers across languages, the lemmatizer is now located at https://github.com/explosion/spaCy/blob/develop/spacy/lemmatizer.py#L92

But the fundamental lemmatization steps discussed above are still relevant to the current spacy version (4d2d7d586608ddc0bcb2857fb3c2d0d4c151ebfc).

Epilogue

I guess now that we know it works with linguistic rules and all, another question is, "are there any non rule-based approaches to lemmatization?"

But before answering that, "What exactly is a lemma?" might be an even better question to ask.

Regarding python - How does the spaCy lemmatizer work?, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/43795249/
