python - How to stem Shakespeare/KJV with nltk.stem.snowball

I want to stem Early Modern English text:

sb.stem("loveth")
>>> "lov"
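For reference, the stock stemmer leaves such archaic inflections untouched, since "-eth" matches none of its suffix lists (a quick check, assuming NLTK is installed):

```python
from nltk.stem import snowball

stemmer = snowball.EnglishStemmer()
# "-eth" is not in any default suffix list, so the word passes through unchanged.
print(stemmer.stem("loveth"))  # loveth
# The modern "-ed" form is handled by step 1b as usual.
print(stemmer.stem("loved"))   # love
```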

Apparently, all I need to do is make a small tweak to the Snowball stemmer:

And to put the endings into the English stemmer, the list

ed edly ing ingly

of Step 1b should be extended to

ed edly ing ingly est eth

In terms of the Snowball script, the endings "est" and "eth" must be added without disturbing the handling of the ending "ing".

Great, so I just need to change the variables, and perhaps add a special rule to handle "thee"/"thou"/"you" and "shalt"/"shall". The NLTK documentation lists the variables as:

class nltk.stem.snowball.EnglishStemmer(ignore_stopwords=False)

Bases: nltk.stem.snowball._StandardStemmer

The English Snowball stemmer.

Variables:

__vowels – The English vowels.

__double_consonants – The English double consonants.

__li_ending – Letters that may directly appear before a word final ‘li’.

__step0_suffixes – Suffixes to be deleted in step 0 of the algorithm.

__step1a_suffixes – Suffixes to be deleted in step 1a of the algorithm.

__step1b_suffixes – Suffixes to be deleted in step 1b of the algorithm. (Here we go)

__step2_suffixes – Suffixes to be deleted in step 2 of the algorithm.

__step3_suffixes – Suffixes to be deleted in step 3 of the algorithm.

__step4_suffixes – Suffixes to be deleted in step 4 of the algorithm.

__step5_suffixes – Suffixes to be deleted in step 5 of the algorithm.

__special_words – A dictionary containing words which have to be stemmed specially. (I can stick my "thee"/"thou" and "shalt" issues here)

Now, the dumb question: how do I actually change those variables? Everywhere I look for them, I keep getting "object has no attribute"...

Best answer

Try:

>>> from nltk.stem import snowball
>>> stemmer = snowball.EnglishStemmer()
>>> stemmer.stem('thee')
u'thee'
>>> dir(stemmer)
['_EnglishStemmer__double_consonants', '_EnglishStemmer__li_ending', '_EnglishStemmer__special_words', '_EnglishStemmer__step0_suffixes', '_EnglishStemmer__step1a_suffixes', '_EnglishStemmer__step1b_suffixes', '_EnglishStemmer__step2_suffixes', '_EnglishStemmer__step3_suffixes', '_EnglishStemmer__step4_suffixes', '_EnglishStemmer__step5_suffixes', '_EnglishStemmer__vowels', '__class__', '__delattr__', '__dict__', '__doc__', '__format__', '__getattribute__', '__hash__', '__init__', '__module__', '__new__', '__reduce__', '__reduce_ex__', '__repr__', '__setattr__', '__sizeof__', '__str__', '__subclasshook__', '__unicode__', '__weakref__', '_r1r2_standard', '_rv_standard', 'stem', 'stopwords', 'unicode_repr']
>>> stemmer._EnglishStemmer__special_words
{u'exceeds': u'exceed', u'inning': u'inning', u'exceed': u'exceed', u'exceeding': u'exceed', u'succeeds': u'succeed', u'succeeded': u'succeed', u'skis': u'ski', u'gently': u'gentl', u'singly': u'singl', u'cannings': u'canning', u'early': u'earli', u'earring': u'earring', u'bias': u'bias', u'tying': u'tie', u'exceeded': u'exceed', u'news': u'news', u'herring': u'herring', u'proceeds': u'proceed', u'succeeding': u'succeed', u'innings': u'inning', u'proceeded': u'proceed', u'proceed': u'proceed', u'dying': u'die', u'outing': u'outing', u'sky': u'sky', u'andes': u'andes', u'idly': u'idl', u'outings': u'outing', u'ugly': u'ugli', u'only': u'onli', u'proceeding': u'proceed', u'lying': u'lie', u'howe': u'howe', u'atlas': u'atlas', u'earrings': u'earring', u'cosmos': u'cosmos', u'canning': u'canning', u'succeed': u'succeed', u'herrings': u'herring', u'skies': u'sky'}
>>> stemmer._EnglishStemmer__special_words['thee'] = 'thou'
>>> stemmer.stem('thee')
'thou'
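The "object has no attribute" errors from the question come from Python's name mangling: double-underscore attributes are stored under `_EnglishStemmer__...`. With that in hand, the question's remaining word pairs can be added the same way — a minimal sketch; the target stems ("thou", "shall") are my illustrative choices, and note that this dict is a class attribute shared by every `EnglishStemmer` instance:

```python
from nltk.stem import snowball

stemmer = snowball.EnglishStemmer()
# The mappings below are illustrative, not part of NLTK's defaults.
# Caution: __special_words is a class-level dict, so this mutation is
# visible to all EnglishStemmer instances.
for word, target in {"thee": "thou", "thy": "thou", "shalt": "shall"}.items():
    stemmer._EnglishStemmer__special_words[word] = target

print(stemmer.stem("shalt"))  # shall
print(stemmer.stem("thee"))   # thou
```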

And:

>>> stemmer._EnglishStemmer__step0_suffixes
(u"'s'", u"'s", u"'")
>>> stemmer._EnglishStemmer__step1a_suffixes
(u'sses', u'ied', u'ies', u'us', u'ss', u's')
>>> stemmer._EnglishStemmer__step1b_suffixes
(u'eedly', u'ingly', u'edly', u'eed', u'ing', u'ed')
>>> stemmer._EnglishStemmer__step2_suffixes
(u'ization', u'ational', u'fulness', u'ousness', u'iveness', u'tional', u'biliti', u'lessli', u'entli', u'ation', u'alism', u'aliti', u'ousli', u'iviti', u'fulli', u'enci', u'anci', u'abli', u'izer', u'ator', u'alli', u'bli', u'ogi', u'li')
>>> stemmer._EnglishStemmer__step3_suffixes
(u'ational', u'tional', u'alize', u'icate', u'iciti', u'ative', u'ical', u'ness', u'ful')
>>> stemmer._EnglishStemmer__step4_suffixes
(u'ement', u'ance', u'ence', u'able', u'ible', u'ment', u'ant', u'ent', u'ism', u'ate', u'iti', u'ous', u'ive', u'ize', u'ion', u'al', u'er', u'ic')
>>> stemmer._EnglishStemmer__step5_suffixes
(u'e', u'l')

Note that the step suffixes are tuples and therefore immutable, so you cannot append to them the way you can with the special-words dict; you have to copy them into a list, append to that, and then overwrite the attribute, e.g.:

>>> from nltk.stem import snowball
>>> stemmer = snowball.EnglishStemmer()
>>> step1b = stemmer._EnglishStemmer__step1b_suffixes
>>> stemmer._EnglishStemmer__step1b_suffixes = list(step1b) + ['eth']
>>> stemmer._EnglishStemmer__step1b_suffixes
[u'eedly', u'ingly', u'edly', u'eed', u'ing', u'ed', 'eth']
>>> stemmer.stem('loveth')
u'love'
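Putting both tweaks together, one tidy option is a small subclass that shadows the name-mangled class attributes with per-instance copies, leaving the stock `EnglishStemmer` untouched. This is a sketch, not an official NLTK API; the class name and the extra word mappings are made up for illustration:

```python
from nltk.stem import snowball

class EarlyModernEnglishStemmer(snowball.EnglishStemmer):
    """Sketch: EnglishStemmer extended with '-est'/'-eth' and a few special words."""

    def __init__(self, ignore_stopwords=False):
        super().__init__(ignore_stopwords)
        # Instance attributes shadow the name-mangled class attributes,
        # so other EnglishStemmer instances keep the default behavior.
        self._EnglishStemmer__step1b_suffixes = tuple(
            self._EnglishStemmer__step1b_suffixes
        ) + ("est", "eth")
        self._EnglishStemmer__special_words = dict(
            self._EnglishStemmer__special_words,
            thee="thou", thy="thou", shalt="shall",
        )

stemmer = EarlyModernEnglishStemmer()
print(stemmer.stem("loveth"))  # love
print(stemmer.stem("shalt"))   # shall
```

Because the copies are per-instance, a plain `snowball.EnglishStemmer()` created alongside this one still stems "loveth" to "loveth".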

Original question ("python - How to stem Shakespeare/KJV with nltk.stem.snowball") on Stack Overflow: https://stackoverflow.com/questions/35690892/
