
python - Classification with one file used entirely for training and another entirely for testing

Reposted. Author: 行者123. Updated: 2023-11-30 09:25:53

I am trying to do classification where one file is entirely training data and another is entirely test data. Is this possible? I tried:

import numpy as np
import pandas as pd
from sklearn import metrics
from sklearn import cross_validation
from sklearn.pipeline import Pipeline
from sklearn.metrics import confusion_matrix
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.model_selection import train_test_split
from sklearn.metrics import precision_recall_fscore_support as score
from sklearn.feature_extraction.text import TfidfVectorizer, HashingVectorizer, CountVectorizer, TfidfTransformer
from sklearn.metrics import precision_score, recall_score, confusion_matrix, classification_report, accuracy_score, f1_score

# CSV file with the training data
df = pd.read_csv('data_train.csv', sep = ',')

# CSV file with the test data
df_test = pd.read_csv('data_test.csv', sep = ',')

# Randomising the rows in each file
df = df.reindex(np.random.permutation(df.index))
df_test = df_test.reindex(np.random.permutation(df_test.index))

vect = CountVectorizer()

X = vect.fit_transform(df['data_train'])
y = df['label']

X_T = vect.fit_transform(df_test['data_test'])
y_t = df_test['label']

X_train, y_train = train_test_split(X, y, test_size = 0, random_state = 100)
X_test, y_test = train_test_split(X_T, y_t, test_size = 0, random_state = 100)

tf_transformer = TfidfTransformer(use_idf=False).fit(X)
X_train_tf = tf_transformer.transform(X)
X_train_tf.shape

tfidf_transformer = TfidfTransformer()
X_train_tfidf = tfidf_transformer.fit_transform(X)
X_train_tfidf.shape

tf_transformer = TfidfTransformer(use_idf=False).fit(X_T)
X_train_tf_teste = tf_transformer.transform(X_T)
X_train_tf_teste.shape

tfidf_transformer = TfidfTransformer()
X_train_tfidf_teste = tfidf_transformer.fit_transform(X_T)
X_train_tfidf_teste.shape

# Logistic regression
clf = LogisticRegression().fit(X_train, y_train)

y_pred = clf.predict(X_test)

print("confusion matrix")
print(confusion_matrix(y_test, y_pred, labels = y))

print("F-score")
print(f1_score(y_test, y_pred, average=None))
print(precision_score(y_test, y_pred, average=None))
print(recall_score(y_test, y_pred, average=None))

print("cross validation")

scores = cross_validation.cross_val_score(clf, X, y, cv = 10)
print(scores)
print("Accuracy: {} +/- {}".format(scores.mean(), scores.std() * 2))

I set test_size to zero because I don't want any partitioning within these files. I also applied Count and TFIDF to both the training and test files.

The error in my output:

Traceback (most recent call last):

File "classif.py", line 34, in X_train, y_train = train_test_split(X, y, test_size = 0, random_state = 100)

ValueError: too many values to unpack (expected 2)

Best Answer

@Alexis has clearly pointed out and solved the error you get from train_test_split. I would also suggest, again, not using train_test_split here, since it does nothing beyond the shuffling you have already done.
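For reference, a minimal sketch (with toy arrays, not the asker's data) of why the unpacking fails: train_test_split returns four arrays, one train/test pair per input, so assigning the result to only two names raises the ValueError seen above.

```python
import numpy as np
from sklearn.model_selection import train_test_split

X = np.arange(20).reshape(10, 2)
y = np.arange(10)

# Unpacking into two names fails, because FOUR arrays come back:
#   X_train, y_train = train_test_split(X, y)   # ValueError: too many values to unpack
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=100)

print(X_train.shape, X_test.shape)  # 7 rows train, 3 rows test
```

Note that a nonzero test_size is used here; when the data are already split across two files, the call can simply be dropped and X, y used directly.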

But I want to highlight another important point: if you keep the training and test files separate, do not fit the vectorizer separately on each. Doing so creates different columns for the training and test files. Example:

cv = CountVectorizer()
train=['Hi this is stack overflow']
cv.fit(train)
cv.get_feature_names()

Output: ['hi', 'is', 'overflow', 'stack', 'this']

test=['Hi that is not stack overflow']
cv.fit(test)
cv.get_feature_names()

Output: ['hi', 'is', 'not', 'overflow', 'stack', 'that']

So fitting them separately leads to a column mismatch. You should therefore either combine the training and test files first and fit_transform the vectorizer on both together, or, if the test data are not available in advance, transform the test data with the vectorizer fitted on the training data only, which will ignore words that do not appear in the training data.

Regarding python - classification with one file used entirely for training and another entirely for testing, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/51949736/
