
algorithm - Scikit-learn algorithms perform extremely poorly

Reposted. Author: 塔克拉玛干. Updated: 2023-11-03 05:34:05

I'm new to scikit-learn and I'm banging my head against the wall. I've used both real-world and test data, and the scikit algorithms never perform above chance level at predicting anything. I've tried knn, decision trees, svc, and naive bayes.

Basically, I made a test dataset consisting of a column of 0s and 1s, where all the 0s have feature values between 0 and .5 and all the 1s have feature values between .5 and 1. This should be very easy to learn and give close to 100% accuracy. However, none of the algorithms performed above chance level; accuracy ranged from 45 to 55%. I've tried tuning a whole bunch of parameters for every algorithm, but nothing helped. I think something is fundamentally wrong with my implementation.
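For reference, a dataset like the one described can be generated in a few lines of NumPy (the seed, variable names, and class balance here are my own assumptions; the question actually loads its data from Test.xlsx):

```python
import numpy as np

rng = np.random.default_rng(0)  # fixed seed, purely for reproducibility

n = 399  # same number of rows as in the question
labels = rng.integers(0, 2, size=n).astype(float)  # column of 0s and 1s

# Class 0 gets a feature in [0, 0.5), class 1 a feature in [0.5, 1)
features = np.where(labels == 0,
                    rng.uniform(0.0, 0.5, size=n),
                    rng.uniform(0.5, 1.0, size=n)).reshape(n, 1)
```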

Please help me out. Here is my code:

from sklearn.cross_validation import train_test_split  # moved to sklearn.model_selection in scikit-learn >= 0.18
from sklearn import preprocessing
from sklearn.preprocessing import OneHotEncoder
from sklearn.metrics import accuracy_score
import sklearn
import pandas
import numpy as np


df=pandas.read_excel('Test.xlsx')



# Make data into np arrays
y = np.array(df[1])
y=y.astype(float)
y=y.reshape(399)

x = np.array(df[2])
x=x.astype(float)
x=x.reshape(399, 1)



# Creating training and test data

labels_train, labels_test = train_test_split(y)
features_train, features_test = train_test_split(x)

#####################################################################
# PERCEPTRON
#####################################################################

from sklearn import linear_model

perceptron=linear_model.Perceptron()

perceptron.fit(features_train, labels_train)

perc_pred=perceptron.predict(features_test)

print sklearn.metrics.accuracy_score(labels_test, perc_pred, normalize=True, sample_weight=None)
print 'perceptron'

#####################################################################
# KNN classifier
#####################################################################
from sklearn.neighbors import KNeighborsClassifier
knn = KNeighborsClassifier()
knn.fit(features_train, labels_train)


knn_pred = knn.predict(features_test)


# Accuracy

print sklearn.metrics.accuracy_score(labels_test, knn_pred, normalize=True, sample_weight=None)
print 'knn'


#####################################################################
## SVC
#####################################################################

from sklearn.svm import SVC
from sklearn import svm
svm2 = SVC(kernel="linear")


svm2 = svm.SVC()
svm2.fit(features_train, labels_train)



svc_pred = svm2.predict(features_test)

print sklearn.metrics.accuracy_score(labels_test, svc_pred, normalize=True,
sample_weight=None)

#####################################################################
# Decision tree
#####################################################################
from sklearn import tree
clf = tree.DecisionTreeClassifier()
clf = clf.fit(features_train, labels_train)

tree_pred=clf.predict(features_test)

# Accuracy

print sklearn.metrics.accuracy_score(labels_test, tree_pred, normalize=True,
sample_weight=None)
print 'tree'

#####################################################################
# Naive bayes
#####################################################################


from sklearn.naive_bayes import GaussianNB
clf = GaussianNB()
clf.fit(features_train, labels_train)

bayes_pred = clf.predict(features_test)



print sklearn.metrics.accuracy_score(labels_test, bayes_pred,
normalize=True, sample_weight=None)

Best Answer

It seems you are using train_test_split the wrong way.

labels_train, labels_test = train_test_split(y)      #WRONG
features_train, features_test = train_test_split(x) #WRONG

The splits of the labels and of the data will not necessarily be the same. A simple way to split the data manually:

randomvec=np.random.rand(len(data))  
randomvec=randomvec>0.5

train_data=data[randomvec]
train_label=labels[randomvec]
test_data=data[np.logical_not(randomvec)]
test_label=labels[np.logical_not(randomvec)]
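A minimal usage sketch of that mask-based split (the toy arrays and the seed are my own; note this yields a roughly 50/50 split that varies from run to run unless you seed the generator):

```python
import numpy as np

np.random.seed(0)  # seeding is my addition, for reproducibility

data = np.arange(10, dtype=float).reshape(10, 1)
labels = (data.ravel() >= 5).astype(float)

randomvec = np.random.rand(len(data)) > 0.5  # True -> train row, False -> test row

train_data = data[randomvec]
train_label = labels[randomvec]
test_data = data[np.logical_not(randomvec)]
test_label = labels[np.logical_not(randomvec)]

# Each row keeps its own label because the same mask indexes both arrays
print(np.array_equal(train_label, (train_data.ravel() >= 5).astype(float)))  # True
```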

Or, to use the scikit method correctly:

x_train, x_test, y_train, y_test = train_test_split(x, y, test_size=0.5, random_state=42)
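A self-contained sketch (my own illustration, using the newer sklearn.model_selection import path) of why the two separate calls fail: each call applies its own shuffle, so the returned labels no longer line up with the returned features, and a perfectly separable problem collapses to chance, while a single call keeps each (feature, label) pair together:

```python
import numpy as np
from sklearn.model_selection import train_test_split

# Perfectly separable toy data: label is 1 exactly when the feature is >= 0.5
x = np.linspace(0.0, 1.0, 400).reshape(-1, 1)
y = (x.ravel() >= 0.5).astype(float)

# WRONG: two independent calls shuffle x and y in different orders
labels_train, labels_test = train_test_split(y, random_state=1)
features_train, features_test = train_test_split(x, random_state=2)
misaligned = np.mean((features_train.ravel() >= 0.5) == labels_train)

# RIGHT: one call applies the same shuffle to both arrays
x_train, x_test, y_train, y_test = train_test_split(
    x, y, test_size=0.5, random_state=42)
aligned = np.mean((x_train.ravel() >= 0.5) == y_train)

print(misaligned)  # hovers around 0.5 -- pure chance
print(aligned)     # exactly 1.0 -- pairing preserved
```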

Regarding algorithm - Scikit-learn algorithms perform extremely poorly, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/33317601/
