
machine-learning - KNN with categorical values fails to predict correctly


I am trying to build a model that, given an item, predicts which store it belongs to.

I have a dataset of about 250 records that are supposed to be items from different online stores.

Each record consists of: category, sub-category, price, store identifier (the y variable).

I have tried several numbers of neighbors and tried Manhattan distance, but unfortunately could not get better results; accuracy is ~0.55. A random forest yields an accuracy of ~0.7.

My intuition says a model should be able to predict this. What am I missing?

Here is the data: https://pastebin.com/nUsSbkp4

from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import confusion_matrix
from sklearn.cross_validation import train_test_split  # sklearn.model_selection on sklearn >= 0.20
from sklearn.preprocessing import LabelEncoder, OneHotEncoder
from sklearn.neighbors import KNeighborsClassifier
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd

dataset = pd.read_csv('data.csv')
X = dataset.iloc[:, :-1].values
y = dataset.iloc[:, 3].values

labelencoder_X_0 = LabelEncoder()
X[:, 0] = labelencoder_X_0.fit_transform(X[:, 0])

labelencoder_X_1 = LabelEncoder()
X[:, 1] = labelencoder_X_1.fit_transform(X[:, 1])

onehotencoder_0 = OneHotEncoder(categorical_features = [0])
X = onehotencoder_0.fit_transform(X).toarray()

onehotencoder_1 = OneHotEncoder(categorical_features = [1])
X = onehotencoder_1.fit_transform(X).toarray()

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.25, random_state = 0)
# classifier = RandomForestClassifier(n_estimators=25, criterion='entropy', random_state = 0)
classifier = KNeighborsClassifier(n_neighbors=3, metric='minkowski', p=2)
classifier.fit(X_train, y_train)

y_pred = classifier.predict(X_test)
cm = confusion_matrix(y_test, y_pred)

accuracy = classifier.score(X_test, y_test)
print(accuracy)

Best answer

KNN can produce good predictions with categorical predictors; I have had success with it before. But there are a few things to watch out for, such as scaling the numeric feature and the choice of distance metric (both come up below).

Apart from that, you have an actual bug in your one-hot encoding:

After calling the first one-hot encoder, you get an array of shape (275, 21):

onehotencoder_0 = OneHotEncoder(categorical_features = [0])
X = onehotencoder_0.fit_transform(X).toarray()
print(X.shape)
print(X[:5,:])

Out:
(275, 21)
[[ 0. 0. 0. 0. 0. 0. 0. 0. 1. 0. 0. 0.
0. 0. 0. 0. 0. 0. 0. 52. 33.99]
[ 0. 0. 0. 0. 0. 0. 0. 0. 1. 0. 0. 0.
0. 0. 0. 0. 0. 0. 0. 52. 33.97]
[ 0. 0. 0. 0. 0. 0. 0. 0. 1. 0. 0. 0.
0. 0. 0. 0. 0. 0. 0. 36. 27.97]
[ 0. 0. 0. 0. 0. 0. 0. 0. 1. 0. 0. 0.
0. 0. 0. 0. 0. 0. 0. 37. 13.97]
[ 0. 0. 0. 0. 0. 0. 0. 0. 1. 0. 0. 0.
0. 0. 0. 0. 0. 0. 0. 20. 9.97]]

Then you call the one-hot encoder on column 1 of this array, but after the first encoding that column is one of the new dummy columns and holds only two values (zero and one), so the result is:

onehotencoder_1 = OneHotEncoder(categorical_features = [1])
X = onehotencoder_1.fit_transform(X).toarray()
print(X.shape)
print(X[:5,:])

Out:
(275, 22)
[[ 1. 0. 0. 0. 0. 0. 0. 0. 0. 1. 0. 0.
0. 0. 0. 0. 0. 0. 0. 0. 52. 33.99]
[ 1. 0. 0. 0. 0. 0. 0. 0. 0. 1. 0. 0.
0. 0. 0. 0. 0. 0. 0. 0. 52. 33.97]
[ 1. 0. 0. 0. 0. 0. 0. 0. 0. 1. 0. 0.
0. 0. 0. 0. 0. 0. 0. 0. 36. 27.97]
[ 1. 0. 0. 0. 0. 0. 0. 0. 0. 1. 0. 0.
0. 0. 0. 0. 0. 0. 0. 0. 37. 13.97]
[ 1. 0. 0. 0. 0. 0. 0. 0. 0. 1. 0. 0.
0. 0. 0. 0. 0. 0. 0. 0. 20. 9.97]]
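
One direct fix, sketched here against the same pre-0.20 scikit-learn API the question uses, is to encode both label-encoded columns in a single call, so the column indices still refer to the original layout:

# encode both categorical columns at once: categorical_features accepts a
# list of column indices, so neither index is shifted by a prior encoding
onehotencoder = OneHotEncoder(categorical_features=[0, 1])
X = onehotencoder.fit_transform(X).toarray()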

So, you can either fix the indexing (e.g. as in the sketch above), or simply use a pipeline that avoids the problem altogether and also adds scaling for the numeric variable, like this:

import pandas as pd
from sklearn.pipeline import Pipeline, FeatureUnion, make_pipeline
from sklearn.preprocessing import LabelEncoder, OneHotEncoder, StandardScaler
from sklearn.base import BaseEstimator, TransformerMixin
from sklearn.neighbors import KNeighborsClassifier
from sklearn.cross_validation import train_test_split  # sklearn.model_selection on >= 0.20
from sklearn.metrics import confusion_matrix

class Columns(BaseEstimator, TransformerMixin):
    # simple transformer that selects a subset of DataFrame columns by name
    def __init__(self, names=None):
        self.names = names

    def fit(self, X, y=None, **fit_params):
        return self

    def transform(self, X):
        return X.loc[:, self.names]

dataset = pd.read_csv('data.csv', header=None)
dataset.columns = ["cat1", "cat2", "num1", "target"]

X = dataset.iloc[:, :-1]
y = dataset.iloc[:, 3]

labelencoder_X_0 = LabelEncoder()
X.iloc[:, 0] = labelencoder_X_0.fit_transform(X.iloc[:, 0])

labelencoder_X_1 = LabelEncoder()
X.iloc[:, 1] = labelencoder_X_1.fit_transform(X.iloc[:, 1])

numeric = ["num1"]
categorical = ["cat1", "cat2"]

pipe = Pipeline([
    ("features", FeatureUnion([
        ('numeric', make_pipeline(Columns(names=numeric), StandardScaler())),
        ('categorical', make_pipeline(Columns(names=categorical), OneHotEncoder(sparse=False)))
    ])),
])

X = pipe.fit_transform(X)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.25, random_state = 0)
# classifier = RandomForestClassifier(n_estimators=25, criterion='entropy', random_state = 0)
classifier = KNeighborsClassifier(n_neighbors=3, metric='minkowski', p=2)
classifier.fit(X_train, y_train)

y_pred = classifier.predict(X_test)
cm = confusion_matrix(y_test, y_pred)

accuracy = classifier.score(X_test, y_test)
print(accuracy)

Out:
0.7101449275362319

As you can see, this at least reaches the random forest's accuracy!
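
As a side note (a suggestion beyond the original answer): with only ~275 records, a single 25% hold-out is a noisy estimate, so cross-validation gives a more stable accuracy figure:

from sklearn.cross_validation import cross_val_score  # sklearn.model_selection on >= 0.20

# mean accuracy over 5 folds instead of one train/test split
scores = cross_val_score(KNeighborsClassifier(n_neighbors=3), X, y, cv=5)
print(scores.mean(), scores.std())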

So the next thing you could try is the Gower distance. There is an ongoing discussion about adding it to sklearn here, so you can check out the code posted in the linked IPython notebook and give it a try.
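
For illustration only, here is a minimal hand-rolled sketch of a Gower-style distance (simple matching on the two categorical columns plus a range-normalized absolute difference on the price), plugged into KNeighborsClassifier via metric='precomputed'. It works on the label-encoded matrix rather than the one-hot features; the helper name gower_distances and the equal feature weighting are assumptions, not code from the linked discussion:

import numpy as np
from sklearn.preprocessing import LabelEncoder
from sklearn.neighbors import KNeighborsClassifier

def gower_distances(A, B, cat_idx, num_idx, num_ranges):
    # Gower-style distance: 0/1 mismatch for categorical columns,
    # range-normalized absolute difference for numeric columns,
    # averaged over all features
    D = np.zeros((A.shape[0], B.shape[0]))
    for j in cat_idx:
        D += (A[:, j][:, None] != B[:, j][None, :]).astype(float)
    for j, r in zip(num_idx, num_ranges):
        D += np.abs(A[:, j][:, None] - B[:, j][None, :]) / r
    return D / (len(cat_idx) + len(num_idx))

# label-encoded categories plus the raw price, columns [cat1, cat2, num1]
X_le = dataset[["cat1", "cat2", "num1"]].copy()
X_le["cat1"] = LabelEncoder().fit_transform(X_le["cat1"])
X_le["cat2"] = LabelEncoder().fit_transform(X_le["cat2"])
X_le = X_le.values.astype(float)

X_train, X_test, y_train, y_test = train_test_split(X_le, y, test_size=0.25, random_state=0)
price_range = [X_train[:, 2].max() - X_train[:, 2].min()]
D_train = gower_distances(X_train, X_train, cat_idx=[0, 1], num_idx=[2], num_ranges=price_range)
D_test = gower_distances(X_test, X_train, cat_idx=[0, 1], num_idx=[2], num_ranges=price_range)

classifier = KNeighborsClassifier(n_neighbors=3, metric='precomputed')
classifier.fit(D_train, y_train)
print(classifier.score(D_test, y_test))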

Regarding machine-learning - KNN with categorical values fails to predict correctly, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/51938604/
