python - CountVectorizer ignores 'I'

Reposted. Author: 太空狗. Updated: 2023-10-29 20:18:23

Why does CountVectorizer in sklearn ignore the pronoun "I"?

from sklearn.feature_extraction.text import CountVectorizer

ngram_vectorizer = CountVectorizer(analyzer="word", ngram_range=(2, 2), min_df=1)
ngram_vectorizer.fit_transform(['HE GAVE IT TO I'])
# returns a 1x3 sparse matrix of dtype int64
ngram_vectorizer.get_feature_names()
# ['gave it', 'he gave', 'it to']  -- the bigram 'to i' is missing

Best answer

The default tokenizer only keeps tokens that are at least two characters long.

You can change this behavior by passing an appropriate token_pattern to your CountVectorizer.

The default pattern is (see the signature in the docs):

'token_pattern': u'(?u)\\b\\w\\w+\\b'
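To see what that pattern does on its own, here is a minimal sketch using Python's re module; the example text is lowercased first, as CountVectorizer does by default:

import re

text = 'HE GAVE IT TO I'.lower()

# default pattern: only tokens with two or more word characters match, so 'i' is dropped
print(re.findall(r'(?u)\b\w\w+\b', text))  # ['he', 'gave', 'it', 'to']

# relaxed pattern used in the answer below: single-character tokens are kept
print(re.findall(r'(?u)\b\w+\b', text))    # ['he', 'gave', 'it', 'to', 'i']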

You can get a CountVectorizer that does not discard one-letter words by changing this default, for example:

from sklearn.feature_extraction.text import CountVectorizer

ngram_vectorizer = CountVectorizer(analyzer="word", ngram_range=(2, 2),
                                   token_pattern=u"(?u)\\b\\w+\\b", min_df=1)
ngram_vectorizer.fit_transform(['HE GAVE IT TO I'])
print(ngram_vectorizer.get_feature_names())

This gives:

['gave it', 'he gave', 'it to', 'to i']
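A side note not part of the original answer: on recent scikit-learn releases (1.2 and later), get_feature_names() has been removed, so the equivalent call is:

print(ngram_vectorizer.get_feature_names_out())
# ['gave it' 'he gave' 'it to' 'to i']  (returned as a numpy array)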

Regarding python - CountVectorizer ignores 'I', we found a similar question on Stack Overflow: https://stackoverflow.com/questions/33260505/
