
python-3.x - How do I use a trained NB classifier in sklearn to predict the label of an email?


I have created a Gaussian Naive Bayes classifier on an email (spam/not spam) dataset and was able to run it successfully. I vectorized the data, split it into training and test sets, and then calculated the accuracy along with all the metrics available for sklearn's Gaussian Naive Bayes classifier.

Now I want to be able to use this classifier to predict the "labels" of new emails, i.e. whether or not they are spam. For example, say I have an email; I want to feed it to my classifier and get a prediction of whether it is spam or not. How can I achieve this? Please help.

Code for the classifier file:

#!/usr/bin/python

import sys
from time import time
import logging

# Display progress logs on stdout
logging.basicConfig(level = logging.DEBUG, format = '%(asctime)s %(message)s')

sys.path.append("../DatasetProcessing/")
from vectorize_split_dataset import preprocess

### features_train and features_test are the features for the training
### and testing datasets, respectively
### labels_train and labels_test are the corresponding item labels
features_train, features_test, labels_train, labels_test = preprocess()

#########################################################
from sklearn.naive_bayes import GaussianNB
clf = GaussianNB()
t0 = time()
clf.fit(features_train, labels_train)
print("training time:", round(time() - t0, 3), "s")
pred = clf.predict(features_test)
print(clf.score(features_test, labels_test))

## Printing Metrics for Training and Testing
print("No. of Testing Features:" + str(len(features_test)))
print("No. of Testing Features Label:" + str(len(labels_test)))
print("No. of Training Features:" + str(len(features_train)))
print("No. of Training Features Label:" + str(len(labels_train)))
print("No. of Predicted Features:" + str(len(pred)))

## Calculating Classifier Performance
from sklearn.metrics import classification_report
y_true = labels_test
y_pred = pred
labels = ['0', '1']
target_names = ['class 0', 'class 1']
print(classification_report(y_true, y_pred, target_names = target_names, labels = labels))

# How to predict label of a new text
new_text = "You won a lottery at UK lottery commission. Reply to claim it"

Vectorization code:

#!/usr/bin/python

import os
import pickle
import numpy
numpy.random.seed(42)

path = os.path.dirname(os.path.abspath(__file__))

### The words (features) and label_data (labels), already largely processed.
### These files should have been created beforehand.
feature_data_file = path + "/createdDataset/dataSet.pkl"
label_data_file = path + "/createdDataset/dataLabel.pkl"

feature_data = pickle.load(open(feature_data_file, "rb"))
label_data = pickle.load(open(label_data_file, "rb"))

### test_size is the percentage of events assigned to the test set
### (the remainder go into training)
### feature matrices changed to dense representations for compatibility
### with classifier functions in versions 0.15.2 and earlier
from sklearn.model_selection import train_test_split
features_train, features_test, labels_train, labels_test = train_test_split(feature_data, label_data, test_size = 0.1, random_state = 42)

from sklearn.feature_extraction.text import TfidfVectorizer
vectorizer = TfidfVectorizer(sublinear_tf = True, max_df = 0.5, stop_words = 'english')
features_train = vectorizer.fit_transform(features_train)
features_test = vectorizer.transform(features_test)#.toarray()

## feature selection to reduce dimensionality
from sklearn.feature_selection import SelectPercentile, f_classif
selector = SelectPercentile(f_classif, percentile = 5)
selector.fit(features_train, labels_train)
features_train_transformed_reduced = selector.transform(features_train).toarray()
features_test_transformed_reduced = selector.transform(features_test).toarray()

features_train = features_train_transformed_reduced
features_test = features_test_transformed_reduced

def preprocess():
    return features_train, features_test, labels_train, labels_test

Dataset generation code:

#!/usr/bin/python

import os
import pickle
import re
import sys

# sys.path.append("../tools/")


""
"
Starter code to process the texts of accuate and inaccurate category to extract
the features and get the documents ready for classification.

The list of all the texts from accurate category are in the accurate_files list
likewise for texts of inaccurate category are in (inaccurate_files)

The data is stored in lists and packed away in pickle files at the end.
"
""


accurate_files = open("./rawDatasetLocation/accurateFiles.txt", "r")
inaccurate_files = open("./rawDatasetLocation/inaccurateFiles.txt", "r")

label_data = []
feature_data = []

### temp_counter is a way to speed up development -- there are thousands of
### lines of accurate and inaccurate text, so running over all of them can
### take a long time
### temp_counter helps you only look at the first 200 lines in the list so
### you can iterate your modifications quicker
temp_counter = 0


for name, from_text in [("accurate", accurate_files), ("inaccurate", inaccurate_files)]:
    for path in from_text:
        ### only look at the first 200 texts when developing;
        ### once everything is working, remove this guard to run over the full dataset
        temp_counter += 1
        if temp_counter < 200:
            path = os.path.join('..', path[: -1])
            print(path)
            text = open(path, "r")
            line = text.readline()
            while line:
                ### use a function such as parseOutText to extract and stem the
                ### text from the opened file, e.g. stem_text = parseOutText(text);
                ### here the raw line is used directly
                stem_text = line.strip()
                print(stem_text)
                ### use str.replace() to remove any unwanted words, e.g.
                ### stem_text = stem_text.replace("germani", "")
                ### append the text to feature_data
                feature_data.append(stem_text)
                ### append "0" to label_data for accurate texts, "1" for inaccurate ones
                if name == "accurate":
                    label_data.append("0")
                elif name == "inaccurate":
                    label_data.append("1")
                line = text.readline()
            text.close()

print("texts processed")
accurate_files.close()
inaccurate_files.close()

pickle.dump(feature_data, open("./createdDataset/dataSet.pkl", "wb"))
pickle.dump(label_data, open("./createdDataset/dataLabel.pkl", "wb"))

I would also like to know whether I can train the classifier incrementally, i.e. retrain the existing model with newer data so that the model improves over time?

I would be really glad if someone could help me out with this. I am really stuck at this point.

Best Answer

You are already using your model to predict the labels of emails in your test set. This is what pred = clf.predict(features_test) does. If you want to see those labels, do print(pred).

But perhaps you are asking how to predict the labels of emails that you receive in the future and that are not currently in your test set? If so, you can think of each new email as a new test set. As with the previous test set, you will need to run several key processing steps on the data; a code sketch follows the four steps below:

1) The first thing you need to do is generate features for your new email data. The feature generation step is not included in the code above, but it will need to happen.

2) You are using a Tfidf vectorizer, which converts a collection of documents to a matrix of Tfidf features based on term frequency and inverse document frequency. You need to put your new email test feature data through the vectorizer that was fitted on your training data.

3) Then your new email test feature data will need to have its dimensionality reduced using the same selector that was fitted on your training data.

4) Finally, run predict on your new test data. Use print(pred) if you want to look at the new labels.
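
A minimal sketch of those four steps, assuming the fitted vectorizer, selector, and clf objects from the code above are available in one place (in the question's current layout they live in separate files, so preprocess() would need to be extended to also return the fitted vectorizer and selector):

# Hypothetical new, unseen emails to classify
new_emails = [
    "You won a lottery at UK lottery commission. Reply to claim it",
    "Hi team, attached are the meeting notes from yesterday"
]

# Steps 1-2: transform the raw texts with the vectorizer fitted on the
# training data (transform, NOT fit_transform -- refitting would change
# the feature space the classifier was trained on)
new_features = vectorizer.transform(new_emails)

# Step 3: reduce dimensionality with the selector fitted on the training data
new_features = selector.transform(new_features).toarray()

# Step 4: predict the labels ("0" = accurate/ham, "1" = inaccurate/spam
# in this dataset's encoding)
pred = clf.predict(new_features)
print(pred)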

To answer your last question about iteratively retraining your model: yes, you can definitely do that. Just pick a frequency, produce a script that grows your dataset with the incoming data, then rerun all the steps from there: from preprocessing to Tfidf vectorization, to dimensionality reduction, to fitting, and prediction.
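
As a side note on incremental training: sklearn's GaussianNB also supports partial_fit, which updates an existing model with new batches of data instead of refitting from scratch. The sketch below is only an illustration of that alternative, not part of the original code; it assumes a fixed feature space across batches, so it swaps the refitted TfidfVectorizer/SelectPercentile pipeline for a stateless HashingVectorizer:

from sklearn.feature_extraction.text import HashingVectorizer
from sklearn.naive_bayes import GaussianNB

# A stateless vectorizer maps the same token to the same column every time,
# so feature matrices from later batches stay compatible with earlier ones.
vectorizer = HashingVectorizer(stop_words = 'english', n_features = 2 ** 12)
clf = GaussianNB()

def train_on_batch(texts, labels, first_batch = False):
    X = vectorizer.transform(texts).toarray()  # GaussianNB needs dense input
    if first_batch:
        # every possible class must be declared on the first partial_fit call
        clf.partial_fit(X, labels, classes = ["0", "1"])
    else:
        clf.partial_fit(X, labels)

# initial batch, then periodic updates as new labelled email arrives
train_on_batch(["You won a lottery. Reply to claim it", "meeting notes attached"], ["1", "0"], first_batch = True)
train_on_batch(["claim your free prize now"], ["1"])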

For python-3.x - How do I use a trained NB classifier in sklearn to predict the label of an email?, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/37068786/
