
python-3.x - Test and training datasets have different numbers of features


I am trying to train an SVM model on some training and test data. If I combine the test and training data, the program runs fine, but if I keep them separate and test the model's accuracy, it says

Traceback (most recent call last):
  File "/home/PycharmProjects/analysis.py", line 160, in <module>
    main()
  File "/home/PycharmProjects/analysis.py", line 156, in main
    learn_model(tf_idf_train,target,tf_idf_test)
  File "/home/PycharmProjects/analysis.py", line 113, in learn_model
    predicted = classifier.predict(data_test)
  File "/home/.local/lib/python3.4/site-packages/sklearn/svm/base.py", line 573, in predict
    y = super(BaseSVC, self).predict(X)
  File "/home/.local/lib/python3.4/site-packages/sklearn/svm/base.py", line 310, in predict
    X = self._validate_for_predict(X)
  File "/home/.local/lib/python3.4/site-packages/sklearn/svm/base.py", line 479, in _validate_for_predict
    (n_features, self.shape_fit_[1]))
ValueError: X.shape[1] = 19137 should be equal to 4888, the number of features at training time

The test set here is larger than the training set, so naturally the test set ends up with more features than the training set, which is why the reported values do not match.
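For illustration, here is a minimal sketch (with made-up toy strings, not my actual CSV data) of how fitting a separate CountVectorizer on each set produces matrices with different numbers of columns:

from sklearn.feature_extraction.text import CountVectorizer

# Toy corpora standing in for the real CSV columns (hypothetical data).
train_texts = ["good movie", "bad plot"]
test_texts = ["good acting", "terrible plot", "boring and long"]

# Each fit_transform learns its own vocabulary, so the matrices end up
# with a different number of columns -- exactly the mismatch in the error.
X_train = CountVectorizer().fit_transform(train_texts)
X_test = CountVectorizer().fit_transform(test_texts)

print(X_train.shape[1], X_test.shape[1])  # 4 7 -- the widths disagree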

Here is my code:

def load_train_file():
    with open('~1k comments.csv',encoding='ISO-8859-1',) as csv_file:
        reader = csv.reader(csv_file,delimiter=",",quotechar='"')
        reader.__next__()
        data = []
        target = []
        for row in reader:
            if row[0] and row[1]:
                data.append(row[0])
                target.append(row[1])

        return data,target


def load_file():
    with open('comments.csv',encoding='ISO-8859-1',) as csv_file:
        reader = csv.reader(csv_file,delimiter=",",quotechar='"')
        reader.__next__()
        data = []
        target = []
        for row in reader:
            if row[0] and row[1]:
                data.append(row[0])
                target.append(row[1])
        print(len(data))

        return data

# preprocess creates the term frequency matrix for the review data set
def preprocess():
    dataTrain,targetTrain = load_train_file()
    testData = load_file()
    count_vectorizer = CountVectorizer(binary='true')
    dataTrain = count_vectorizer.fit_transform(dataTrain)
    tfidf_train_data = TfidfTransformer(use_idf=True).fit_transform(dataTrain)

    count_vectorizer = CountVectorizer()
    testData = count_vectorizer.fit_transform(testData)
    tfidf_test_data = TfidfTransformer(use_idf=True).fit_transform(testData)

    return tfidf_train_data,tfidf_test_data

def learn_model(data,target,testData):
    data_train,data_test,target_train,target_test = cross_validation.train_test_split(data,target,test_size=0.001,random_state=43)
    e = np.zeros(testData.shape[0])
    data_train1, data_test, target_train1, target_test = cross_validation.train_test_split(testData, e,test_size=.9,random_state=43)
    classifier = SVC(gamma=.01, C=100.)
    classifier.fit(data_train, target_train)
    predicted = classifier.predict(data_test)
    for x in range(0,50):
        print(testData[x]+str(predicted[x]))

def evaluate_model(target_true,target_predicted):
    print(classification_report(target_true,target_predicted))
    print("The accuracy score is {:.2%}".format(accuracy_score(target_true,target_predicted)))

def main():
    data,target = load_train_file()
    datatest = load_file()

    tf_idf_train,tf_idf_test = preprocess()
    # print(tf_idf_train.shape())
    # print(tf_idf_test.shape())

    learn_model(tf_idf_train,target,tf_idf_test)
    # learn_model(data,target,datatest)


main()

How can I fix this?

Best Answer

The training and test parts must use the same vectorizer and transformer; moreover, the vectorizer should not be fitted on the test data. So instead of

count_vectorizer = CountVectorizer(binary='true')
dataTrain = count_vectorizer.fit_transform(dataTrain)
tfidf_train_data = TfidfTransformer(use_idf=True).fit_transform(dataTrain)

count_vectorizer = CountVectorizer()
testData = count_vectorizer.fit_transform(testData)
tfidf_test_data = TfidfTransformer(use_idf=True).fit_transform(testData)

use something like this:

count_vectorizer = CountVectorizer(binary=True)
tfidf_transformer = TfidfTransformer(use_idf=True)
dataTrain = count_vectorizer.fit_transform(dataTrain)
tfidf_train_data = tfidf_transformer.fit_transform(dataTrain)

testData = count_vectorizer.transform(testData)
tfidf_test_data = tfidf_transformer.transform(testData)

You can also make this nicer by using a Pipeline:

from sklearn.pipeline import make_pipeline
pipe = make_pipeline(
    CountVectorizer(binary=True),
    TfidfTransformer(use_idf=True),
)
tfidf_train_data = pipe.fit_transform(dataTrain)
tfidf_test_data = pipe.transform(testData)
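If you like, the classifier itself can go into the pipeline as well. A sketch along these lines (reusing the SVC parameters from the question, where dataTrain, target and testData are the raw lists returned by load_train_file and load_file) lets you fit and predict directly on raw text, with all the vectorizer bookkeeping handled internally:

from sklearn.pipeline import make_pipeline
from sklearn.feature_extraction.text import CountVectorizer, TfidfTransformer
from sklearn.svm import SVC

text_clf = make_pipeline(
    CountVectorizer(binary=True),
    TfidfTransformer(use_idf=True),
    SVC(gamma=.01, C=100.),
)
text_clf.fit(dataTrain, target)          # raw training strings plus labels
predicted = text_clf.predict(testData)   # raw test strings; no shape mismatch possible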

Or even use TfidfVectorizer, which combines CountVectorizer and TfidfTransformer in a single vectorizer object:

from sklearn.feature_extraction.text import TfidfVectorizer
vec = TfidfVectorizer(binary=True, use_idf=True)
tfidf_train_data = vec.fit_transform(dataTrain)
tfidf_test_data = vec.transform(testData)
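Whichever variant you choose, a quick sanity check like the sketch below confirms that the train and test matrices now have the same width, so classifier.predict will no longer raise the ValueError above:

# Both matrices come from the same fitted vocabulary, so the column counts match.
assert tfidf_train_data.shape[1] == tfidf_test_data.shape[1]
print(tfidf_train_data.shape, tfidf_test_data.shape)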

Regarding "python-3.x - Test and training datasets have different numbers of features", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/40731271/
