
python - Naive Bayes classifier error

Reposted · Author: 太空狗 · Updated: 2023-10-29 21:53:24

Hey, I'm trying to classify some text with a Naive Bayes classifier, using NLTK. Whenever I test the classifier with the classify() method, it returns the correct classification for the first item, and then the same classification for every other line of text I classify. Here is my code:

from nltk.corpus import movie_reviews
from nltk.tokenize import word_tokenize
import nltk
import random
import nltk.data

documents = [(list(movie_reviews.words(fileid)), category)
             for category in movie_reviews.categories()
             for fileid in movie_reviews.fileids(category)]
random.shuffle(documents)

all_words = nltk.FreqDist(w.lower() for w in movie_reviews.words())
word_features = all_words.keys()[:2000]

def bag_of_words(words):
    return dict([word, True] for word in words)

def document_features(document):
    document_words = set(document)
    features = {}
    for word in word_features:
        features['contains(%s)' % word] = (word in document_words)
    return features

featuresets = [(document_features(d), c) for (d, c) in documents]
train_set, test_set = featuresets[100:], featuresets[:100]
classifier = nltk.NaiveBayesClassifier.train(train_set)

text1 = "i love this city"
text2 = "i hate this city"

feats1 = bag_of_words(word_tokenize(text1))
feats2 = bag_of_words(word_tokenize(text2))

print classifier.classify(feats1)
print classifier.classify(feats2)

This code prints pos twice, and if I swap the last two lines it prints neg twice. Can anyone help?

Best Answer

Change

features['contains(%s)' % word] = (word in document_words)

to

features[word] = (word in document)

Otherwise the classifier only knows "words" of the form "contains(...)", and therefore knows nothing about the words in "i love this city".
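The mismatch can be seen on a toy training set (labels and feature words invented for illustration): NLTK's NaiveBayesClassifier silently discards feature names it never saw during training, so a test dict keyed by bare words carries no information for a classifier trained on 'contains(word)' keys, and every prediction collapses to the label prior:

```python
import nltk

# Tiny invented training set: 'pos' is the majority label.
train = [({'love': True}, 'pos'),
         ({'love': True}, 'pos'),
         ({'hate': True}, 'neg')]
clf = nltk.NaiveBayesClassifier.train(train)

# Keys that match the training features are informative:
print(clf.classify({'love': True}))            # pos
print(clf.classify({'hate': True}))            # neg

# An unseen key like 'contains(love)' is silently dropped, so the
# classifier falls back to the prior and always predicts 'pos':
print(clf.classify({'contains(love)': True}))  # pos
```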
import nltk.tokenize as tokenize
import nltk
import random
random.seed(3)

def bag_of_words(words):
    return dict([word, True] for word in words)

def document_features(document):
    features = {}
    for word in word_features:
        features[word] = (word in document)
        # features['contains(%s)' % word] = (word in document_words)
    return features

movie_reviews = nltk.corpus.movie_reviews

documents = [(set(movie_reviews.words(fileid)), category)
             for category in movie_reviews.categories()
             for fileid in movie_reviews.fileids(category)]
random.shuffle(documents)

all_words = nltk.FreqDist(w.lower() for w in movie_reviews.words())
word_features = all_words.keys()[:2000]

train_set = [(document_features(d), c) for (d, c) in documents[:200]]

classifier = nltk.NaiveBayesClassifier.train(train_set)

classifier.show_most_informative_features()
for word in ('love', 'hate'):
    # No hope in passing the tests if word is not in word_features
    assert word in word_features
    print('probability {w!r} is positive: {p:.2%}'.format(
        w = word, p = classifier.prob_classify({word : True}).prob('pos')))

tests = ["i love this city",
         "i hate this city"]

for test in tests:
    words = tokenize.word_tokenize(test)
    feats = bag_of_words(words)
    print('{s} => {c}'.format(s = test, c = classifier.classify(feats)))

This yields:

Most Informative Features
worst = True neg : pos = 15.5 : 1.0
ridiculous = True neg : pos = 11.5 : 1.0
batman = True neg : pos = 7.6 : 1.0
drive = True neg : pos = 7.6 : 1.0
blame = True neg : pos = 7.6 : 1.0
terrible = True neg : pos = 6.9 : 1.0
rarely = True pos : neg = 6.4 : 1.0
cliches = True neg : pos = 6.0 : 1.0
$ = True pos : neg = 5.9 : 1.0
perfectly = True pos : neg = 5.5 : 1.0
probability 'love' is positive: 61.52%
probability 'hate' is positive: 36.71%
i love this city => pos
i hate this city => neg
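One caveat for anyone running this code today: `all_words.keys()[:2000]` only works on Python 2, where `FreqDist.keys()` returned a frequency-sorted list. On Python 3, `keys()` is a view that cannot be sliced, so an equivalent replacement uses `FreqDist.most_common` (sketched here on a toy word list rather than the movie_reviews corpus):

```python
from nltk import FreqDist

words = "the movie was the worst movie ever the worst".split()
all_words = FreqDist(words)

# Python 3 replacement for all_words.keys()[:2000]: most_common(n)
# returns the n highest-frequency (word, count) pairs.
word_features = [w for w, _ in all_words.most_common(3)]
print(word_features)  # 'the' (3 occurrences) comes first
```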

On "python - Naive Bayes classifier error", a similar question can be found on Stack Overflow: https://stackoverflow.com/questions/13504424/
