
python - NLTK word tokenization behaviour for double quotes is confusing


>>> import nltk
>>> nltk.__version__
'3.0.4'
>>> nltk.word_tokenize('"')
['``']
>>> nltk.word_tokenize('""')
['``', '``']
>>> nltk.word_tokenize('"A"')
['``', 'A', "''"]

Notice how it changes " into double `` and ''.

What is happening here? Why is it changing the character? Is there a fix? Later on I will need to search for every token in a string, so this matters.

This is Python 2.7.6, if it makes any difference.

Best Answer

In short:

nltk.word_tokenize changes starting double quotes from " -> `` and ending double quotes from " -> ''.


In long:

First, nltk.word_tokenize tokenizes the way the Penn TreeBank does; it comes from nltk.tokenize.treebank, see https://github.com/nltk/nltk/blob/develop/nltk/tokenize/__init__.py#L91 and https://github.com/nltk/nltk/blob/develop/nltk/tokenize/treebank.py#L23 :

class TreebankWordTokenizer(TokenizerI):
"""
The Treebank tokenizer uses regular expressions to tokenize text as in Penn Treebank.
This is the method that is invoked by ``word_tokenize()``. It assumes that the
text has already been segmented into sentences, e.g. using ``sent_tokenize()``.
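As a sanity check (assuming NLTK 3.0.x, where word_tokenize() simply delegates to TreebankWordTokenizer), calling the tokenizer directly reproduces the behaviour from the question:

>>> from nltk.tokenize import TreebankWordTokenizer
>>> TreebankWordTokenizer().tokenize('"A"')
['``', 'A', "''"]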

Then come the regex substitutions for contractions at https://github.com/nltk/nltk/blob/develop/nltk/tokenize/treebank.py#L48, which came from "Robert MacIntyre's tokenizer", i.e. https://www.cis.upenn.edu/~treebank/tokenizer.sed

The contraction patterns split words like "gonna", "wanna", etc.:

>>> from nltk import word_tokenize
>>> word_tokenize("I wanna go home")
['I', 'wan', 'na', 'go', 'home']
>>> word_tokenize("I gonna go home")
['I', 'gon', 'na', 'go', 'home']
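For illustration, here is roughly what one such contraction pattern does (a sketch modelled on the CONTRACTIONS2 list in the linked treebank.py; the exact pattern shown is an assumption based on that source):

>>> import re
>>> gonna = re.compile(r"(?i)\b(gon)(na)\b")   # cf. CONTRACTIONS2 in treebank.py
>>> gonna.sub(r" \1 \2 ", "I gonna go home").split()
['I', 'gon', 'na', 'go', 'home']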

After that, we reach the punctuation part you asked about; see https://github.com/nltk/nltk/blob/develop/nltk/tokenize/treebank.py#L63 :

def tokenize(self, text):
    #starting quotes
    text = re.sub(r'^\"', r'``', text)
    text = re.sub(r'(``)', r' \1 ', text)
    text = re.sub(r'([ (\[{<])"', r'\1 `` ', text)

Aha, the starting quote is changed from " -> ``:

>>> import re
>>> text = '"A"'
>>> re.sub(r'^\"', r'``', text)
'``A"'
>>> re.sub(r'(``)', r' \1 ', re.sub(r'^\"', r'``', text))
' `` A"'
>>> re.sub(r'([ (\[{<])"', r'\1 `` ', re.sub(r'(``)', r' \1 ', re.sub(r'^\"', r'``', text)))
' `` A"'
>>> text_after_startquote_changes = re.sub(r'([ (\[{<])"', r'\1 `` ', re.sub(r'(``)', r' \1 ', re.sub(r'^\"', r'``', text)))
>>> text_after_startquote_changes
' `` A"'

Then we see https://github.com/nltk/nltk/blob/develop/nltk/tokenize/treebank.py#L85 dealing with the ending quotes:

    #ending quotes
    text = re.sub(r'"', " '' ", text)
    text = re.sub(r'(\S)(\'\')', r'\1 \2 ', text)

Applying the regexes:

>>> re.sub(r'"', " '' ", text_after_startquote_changes)
" `` A '' "
>>> re.sub(r'(\S)(\'\')', r'\1 \2 ', re.sub(r'"', " '' ", text_after_startquote_changes))
" `` A '' "

So if you want to search for double-quote tokens in the list returned by nltk.word_tokenize, simply search for `` and '' instead of ".
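For example, a minimal sketch of finding the converted quote tokens, or of normalizing them back to plain " (assuming the default tokenizer behaviour shown above):

>>> tokens = nltk.word_tokenize('He said "hello" to me')
>>> tokens
['He', 'said', '``', 'hello', "''", 'to', 'me']
>>> [i for i, tok in enumerate(tokens) if tok in ('``', "''")]
[2, 4]
>>> ['"' if tok in ('``', "''") else tok for tok in tokens]
['He', 'said', '"', 'hello', '"', 'to', 'me']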

Regarding "python - NLTK word tokenization behaviour for double quotes is confusing", the original question can be found on Stack Overflow: https://stackoverflow.com/questions/32185072/
