
python - NLTK WordNetLemmatizer : Not Lemmatizing as Expected


I'm trying to use NLTK's WordNetLemmatizer to lemmatize every word in a sentence. I have a lot of sentences, but I'm only working with the first one to make sure I'm doing this correctly. Here is what I have:

train_sentences[0]

"Explanation Why edits made username Hardcore Metallica Fan reverted? They vandalisms, closure GAs I voted New York Dolls FAC. And please remove template talk page since I'm retired now.89.205.38.27"

So now I try to lemmatize each word as follows:

from nltk.stem import WordNetLemmatizer

lemmatizer = WordNetLemmatizer()
new_sent = [lemmatizer.lemmatize(word) for word in train_sentences[0].split()]
print(new_sent)

This is what I get back:

['Explanation', 'Why', 'edits', 'made', 'username', 'Hardcore', 'Metallica', 'Fan', 'reverted?', 'They', 'vandalisms,', 'closure', 'GAs', 'I', 'voted', 'New', 'York', 'Dolls', 'FAC.', 'And', 'please', 'remove', 'template', 'talk', 'page', 'since', "I'm", 'retired', 'now.89.205.38.27']

A couple of questions:

1) Why isn't "edits" transformed into "edit"? Admittedly, if I do lemmatizer.lemmatize("edits") I also get edits back, but that was surprising.

2) Why isn't "vandalisms" transformed into "vandalism"? This one is very surprising, since if I do lemmatizer.lemmatize("vandalisms"), I do get vandalism back...

Any clarification/guidance would be great!

Best Answer

TL;DR

First tokenize the sentence, then feed each word's POS tag to the lemmatizer as an additional argument.

from nltk import pos_tag, word_tokenize
from nltk.stem import WordNetLemmatizer

wnl = WordNetLemmatizer()

def penn2morphy(penntag):
    """Converts a Penn Treebank tag to a WordNet POS tag."""
    morphy_tag = {'NN': 'n', 'JJ': 'a',
                  'VB': 'v', 'RB': 'r'}
    try:
        return morphy_tag[penntag[:2]]
    except KeyError:
        return 'n'  # default to noun for any other tag

def lemmatize_sent(text):
    # Input is a string; returns a list of lowercased lemmas.
    return [wnl.lemmatize(word.lower(), pos=penn2morphy(tag))
            for word, tag in pos_tag(word_tokenize(text))]

lemmatize_sent('He is walking to school')
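For reference, pos_tag marks 'is' and 'walking' as verb forms (VBZ, VBG), so the call above should come back roughly as follows (exact output can vary slightly with the tagger model):

['he', 'be', 'walk', 'to', 'school']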

For a detailed walkthrough of how and why the POS tag is needed, see https://www.kaggle.com/alvations/basic-nlp-with-nltk
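As for why the original attempt left words like "vandalisms," untouched: str.split() keeps the punctuation glued to the token, and lemmatize() defaults to pos='n' and only looks up the exact string it is given, so "vandalisms," never matches a WordNet entry. A minimal sketch of the difference (assuming the popular NLTK data, including punkt and wordnet, is already downloaded):

from nltk import word_tokenize
from nltk.stem import WordNetLemmatizer

wnl = WordNetLemmatizer()
sent = "They vandalisms, closure GAs I voted New York Dolls FAC."

# Whitespace split keeps the trailing comma, so 'vandalisms,' comes back unchanged.
print([wnl.lemmatize(w) for w in sent.split()])

# word_tokenize splits off the punctuation, so the noun rule applies: 'vandalisms' -> 'vandalism'.
print([wnl.lemmatize(w) for w in word_tokenize(sent)])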


Alternatively, you can use the pywsd tokenizer + lemmatizer, which is a wrapper around NLTK's WordNetLemmatizer:

Install:

pip install -U nltk
python -m nltk.downloader popular
pip install -U pywsd

Code:

>>> from pywsd.utils import lemmatize_sentence
Warming up PyWSD (takes ~10 secs)... took 9.307677984237671 secs.

>>> text = "Mary leaves the room"
>>> lemmatize_sentence(text)
['mary', 'leave', 'the', 'room']

>>> text = 'Dew drops fall from the leaves'
>>> lemmatize_sentence(text)
['dew', 'drop', 'fall', 'from', 'the', 'leaf']

(Note to moderators: I can't flag this question as a duplicate of nltk: How to lemmatize taking surrounding words into context? because no answer was accepted there, but it is a duplicate.)

Regarding python - NLTK WordNetLemmatizer : Not Lemmatizing as Expected, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/50992974/
