
python - ScikitLearn ML models give cv_results.mean() = 0 and cv_results.std() = 0

Reposted · Author: 行者123 · Updated: 2023-11-30 09:32:18

I have a dataset of my own cellular data usage (https://github.com/ivonnics/Machine-Learning/blob/master/CJD2.csv) that records date, time, and data volume. From the 'Date' feature I derived the day of the week (Monday through Sunday), and from the 'Time' feature I derived four (4) time ranges (early morning, morning, afternoon, and evening). Using these 11 "new" features I am trying to find a relationship between weekday, time range, and the amount of data used. I adapted a program by Jason Brownlee (@TeachTheMachine); you can download my modified version from my GitHub: https://github.com/ivonnics/Machine-Learning/blob/master/Data%20Analytical%20Github.py. For every model I evaluate, the results come back with mean and standard deviation equal to zero (0). I don't understand why... Any help or suggestions? The program:

# -*- coding: utf-8 -*-
"""
Created on Sat Nov 10 15:18:54 2018
@author: ivonnics
"""

import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from sklearn import model_selection
from sklearn import preprocessing
from sklearn.metrics import classification_report
from sklearn.metrics import confusion_matrix
from sklearn.metrics import accuracy_score
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC
from pandas.plotting import scatter_matrix

url = "https://github.com/ivonnics/Machine-Learning/blob/master/CJD2.csv"
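# GitHub renders the CSV as an HTML table, which is why pd.read_html works on
# this URL; pd.read_csv on the raw-file URL would be the more direct route.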
dataset = pd.read_html(url)
Tabla=dataset[0]
dataset=Tabla[['Date', 'Time', 'Volume']]

dataset1=[pd.to_datetime(hour, format="%I:%M:%S %p", errors="coerce") for hour in dataset['Time']]

print('-----------------------------------------------------------')
#print('TESTANDO')
dataset2=pd.Series(dataset1).dt.hour
#print(dataset2)
dataset3={'Hour': dataset2}
#print(dataset3)
dataset4=pd.DataFrame(dataset3, columns = ['Hour'])
#print(dataset4.head(20))

print(dataset.head(20))
print('-----------------------------------------------------------')
print(dataset.shape)
print('-----------------------------------------------------------')
print(dataset.describe())
print('-----------------------------------------------------------')


print(dataset.nunique())
print('-----------------------------------------------------------')

print('-----------------------------------------------------------')


df_new1= pd.concat([dataset, dataset4], axis=1)

print('-----------------------------------------------------------')
print(df_new1[(df_new1['Hour'] == 5)])
print('-----------------------------------------------------------')

dataset5=[pd.to_datetime(weekday, format="%m/%d/%Y", errors="coerce") for weekday in dataset['Date']]


dataset6=pd.Series(dataset5).dt.weekday_name
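# Note: .dt.weekday_name still worked in 2018-era pandas but has since been
# removed; on current pandas use .dt.day_name() instead.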
dataset7={'Weekday': dataset6}

dataset8=pd.DataFrame(dataset7, columns = ['Weekday'])

df_new2= pd.concat([df_new1, dataset8], axis=1)

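# One-hot time-of-day flags (column names are Spanish): Madrugada = early
# morning, Mañana = morning, Tarde = afternoon, Noche = evening; the weekday
# flags (Lunes = Monday ... Domingo = Sunday) follow below.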
df_new2['Madrugada'] = np.where((df_new2['Hour']>=0) & (df_new2['Hour']<6), 1, 0)
df_new2['Mañana'] = np.where((df_new2['Hour']>=6) & (df_new2['Hour']<12), 1, 0)
df_new2['Tarde'] = np.where((df_new2['Hour']>=12) & (df_new2['Hour']<18), 1, 0)
df_new2['Noche'] = np.where((df_new2['Hour']>=18) & (df_new2['Hour']<24), 1, 0)
df_new2['Lunes'] = np.where((df_new2['Weekday']=='Monday'), 1, 0)
df_new2['Martes'] = np.where((df_new2['Weekday']=='Tuesday'), 1, 0)
df_new2['Miércoles'] = np.where((df_new2['Weekday']=='Wednesday'), 1, 0)
df_new2['Jueves'] = np.where((df_new2['Weekday']=='Thursday'), 1, 0)
df_new2['Viernes'] = np.where((df_new2['Weekday']=='Friday'), 1, 0)
df_new2['Sábado'] = np.where((df_new2['Weekday']=='Saturday'), 1, 0)
df_new2['Domingo'] = np.where((df_new2['Weekday']=='Sunday'), 1, 0)


print(df_new2.shape)
print(df_new2.head(20))


df_new3=df_new2[['Lunes', 'Martes', 'Miércoles', 'Jueves', 'Viernes', 'Sábado', 'Domingo', 'Madrugada', 'Mañana', 'Tarde', 'Noche', 'Volume']]

#Analysis
print(df_new3.shape)
print(df_new3.head(20))
print(dataset.describe())
print(df_new2.groupby('Weekday').size())
print(df_new3.groupby('Madrugada').size())
print(df_new3.groupby('Mañana').size())
print(df_new3.groupby('Tarde').size())
print(df_new3.groupby('Noche').size())
print(df_new3.groupby('Volume').size())
# box and whisker plots
df_new3.plot(kind='box', subplots=True, layout=(4,3), sharex=False, sharey=False)
plt.show()
# histograms
df_new3.hist()
plt.show()
# scatter plot matrix
scatter_matrix(df_new3)
plt.show()

# Split-out validation dataset
array = df_new3.values
X = array[:,0:11]
#print(X)
Y = array[:,11]


#print(Y)
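# LabelEncoder assigns every distinct Volume value its own integer class label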
lab_enc = preprocessing.LabelEncoder()
encoded = lab_enc.fit_transform(Y)
Y=encoded
#print(Y)
print('')

validation_size = 0.20
seed = 7
X_train, X_validation, Y_train, Y_validation = model_selection.train_test_split(X, Y, test_size=validation_size, random_state=seed)


num_folds = 10
num_instances = len(X_train)
seed = 7
scoring = 'accuracy'


models = []

models.append(('LR', LogisticRegression())) # works!!!
models.append(('KNN', KNeighborsClassifier())) # works!!!
models.append(('CART', DecisionTreeClassifier())) # works!!!
models.append(('NB', GaussianNB())) # works!!!
models.append(('SVM', SVC())) # works!!!
# evaluate each model in turn
results = []
names = []

for name, model in models:
    kfold = model_selection.KFold(n_splits=10, random_state=seed)
    cv_results = model_selection.cross_val_score(model, X_train, Y_train, cv=kfold, scoring=scoring)
    results.append(cv_results)
    names.append(name)
    msg = "%s: %f (%f)" % (name, cv_results.mean(), cv_results.std())
    print(msg)


# Compare Algorithms
fig = plt.figure()
fig.suptitle('Algorithm Comparison')
ax = fig.add_subplot(111)
plt.boxplot(results)
ax.set_xticklabels(names)
plt.show()

These are the results I get after evaluating the models:

LR: 0.000000 (0.000000)
KNN: 0.000000 (0.000000)
CART: 0.000000 (0.000000)
NB: 0.000000 (0.000000)
SVM: 0.000000 (0.000000)

Thanks in advance for your help... José

Best Answer

There is a problem with your setup: you have 621 samples but 593 unique labels. That is why, after the KFold split, a deterministic estimator cannot have learned anything useful for the validation samples (actually, you can get KNN and CART to score above zero with StratifiedKFold and n_splits=2, but that does not matter now). Check it:

print(len(Y))
print(len(np.unique(Y)))

Output:

621
593
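
To see the mechanism in isolation, here is a minimal, self-contained sketch (mine, not part of the original answer) that reproduces the effect with toy data in which every sample carries its own unique label:

import numpy as np
from sklearn.model_selection import KFold, cross_val_score
from sklearn.neighbors import KNeighborsClassifier

# Toy data: 10 samples, every sample has its own unique label,
# mimicking the 593-unique-labels-in-621-samples situation above.
X_toy = np.arange(10).reshape(-1, 1)
y_toy = np.arange(10)

# Whichever fold a sample lands in, its label never occurs in the
# training part of that fold, so every prediction is wrong.
scores = cross_val_score(KNeighborsClassifier(n_neighbors=1),
                         X_toy, y_toy, cv=KFold(n_splits=5))
print(scores.mean(), scores.std())  # -> 0.0 0.0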

To test this, try a trick (really just a crude kind of data augmentation) before the train_test_split:

X = 5 * list(X)
Y = 5 * list(Y)

With five copies of every sample, each label now also appears in the training folds, and your results immediately get much better:

LR: 0.015700 (0.000403)
KNN: 0.028583 (0.000403)
CART: 0.018519 (0.001610)
NB: 0.018519 (0.001610)
SVM: 0.010870 (0.000403)

So, in your original case, at every validation step the estimator faces a sample, estimates label probabilities (or a label) for it, and then receives a validation (test) label that differs from every label it has learned. As a result it returns 0.00 accuracy.

To understand this better, suppose we have:

0100000000 256
0100000000 675
0100000000 912

in the training part after the train/test split; the estimator learns these. Because the number of unique labels is so large, the validation set will contain samples like:

0100000000 112
0100000000 745
0100000000 312

The estimator then tries to predict the correct label for the feature vector:

0100000000

and, depending on the estimator and its options, its prediction will look something like:

{256: 0.333, 675: 0.333, 912: 0.333}

So the validation (test) accuracy is:

0100000000 112 at this label: 0.00
0100000000 745 at this label: 0.00
0100000000 312 at this label: 0.00

I hope it is clear now.
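
As a practical follow-up (my suggestion, not part of the original answer): since Volume is effectively a continuous quantity, either treat this as a regression problem or bin Volume into a few usage classes before encoding, so that every class has enough members to learn from. A sketch of the binning option, assuming the df_new3 frame from the question:

import pandas as pd

# Hypothetical remedy: collapse the 593 distinct Volume values into
# 4 quantile-based classes (0 = lowest usage ... 3 = highest usage).
y_classes = pd.qcut(df_new3['Volume'], q=4, labels=False, duplicates='drop')
print(pd.Series(y_classes).value_counts())  # each class covers roughly a quarter of the samples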

The original question, "python - ScikitLearn ML models' cv_results.mean() = 0 and cv_results.std() = 0", is on Stack Overflow: https://stackoverflow.com/questions/53381853/
