
python-3.x - Getting NaN in X_train and X_test after train/test splitting the data


Hello programmers around the world. I am having trouble feeding data into a machine learning model.

I read a CSV file into Python using pandas and then split it into training and testing data. After that I scale the result with StandardScaler, and by the time I get to the feeding step there are, for some reason, NaNs in my training data. P.S.: I am fairly sure this is not because I have missing data, but rather because I may have infinite values.

Here is the code I have...

# Importing and organizing required packages and libraries
import pandas as pd;
import numpy as np;
from sklearn.model_selection import train_test_split;
from sklearn.metrics import confusion_matrix, classification_report;
from sklearn.preprocessing import StandardScaler;
from sklearn.ensemble import RandomForestClassifier;
from sklearn.neural_network import MLPClassifier;

#Reading in all of the CSV files created from preprocessing.py
dataframe2 = pd.read_csv('dataframe2.csv');
dataframe3 = pd.read_csv('dataframe3.csv');
dataframe4 = pd.read_csv('dataframe4.csv');
dataframe5 = pd.read_csv('dataframe5.csv');

#Function used for creating class labels
def labelCreation(dataframe):
    labels = [];
    index = dataframe['LoC'].index.values;
    for i in range(len(index)):
        if str(dataframe.iloc[i]['Unnamed: 0']) == str(dataframe.iloc[i]['Replacing_line_number']):
            labels.append('1');
        else:
            labels.append('0');
    return labels;

#Picking features for training
def features(dataframe):
    X = dataframe[['Similar_Chars','Similar_Tokens','Edit_Distance','LoC_SemiColon','Replacement_Line_SemiColon','LoC_Open_Bracket_Char',
                   'Replacement_Line_Open_Bracket_Char','LoC_Close_Bracket_Char','Replacement_Line_Close_Bracket_Char']];
    return X;

#Training and splitting the data
X_train, X_test, Y_train, Y_test = train_test_split(features(dataframe = dataframe2), labelCreation(dataframe = dataframe2), test_size=0.2);
#X_train, X_test, Y_train, Y_test = train_test_split(features(dataframe = dataframe3), labelCreation(dataframe = dataframe3), test_size=0.2);
#X_train, X_test, Y_train, Y_test = train_test_split(features(dataframe = dataframe4), labelCreation(dataframe = dataframe4), test_size=0.2);
#X_train, X_test, Y_train, Y_test = train_test_split(features(dataframe = dataframe5), labelCreation(dataframe = dataframe5), test_size=0.2);

#Scaling is added in order to get an optimized result
sc = StandardScaler();
X_train = sc.fit_transform(X_train);
X_test = sc.transform(X_test);

#Feeding the data into a random forest classifier model
rfc = RandomForestClassifier(n_estimators = 200);
rfc.fit(X_train, Y_train);
pred_rfc = rfc.predict(X_test);

#Let's see how well the model performed
print(classification_report(Y_test, pred_rfc));
print(confusion_matrix(Y_test, pred_rfc));

#Feeding the data into a neural network model
mlpc=MLPClassifier(hidden_layer_sizes=(11,11,11), max_iter=500);
mlpc.fit(X_train, Y_train);
pred_mlpc = mlpc.predict(X_test);

#Let's see how well the model performed
print(classification_report(Y_test, pred_mlpc));
print(confusion_matrix(Y_test, pred_mlpc));

When I run all of the code above and then enter X_train[:10], I get this:

array([[-0.49869515, -0.39609005, -1.2919533 , -0.96747226,  0.74307391,
1.02449721, 0.59744363, 1.06693051, 0.58006304],
[-0.49869515, -0.39609005, 1.22954406, 1.03362137, 0.74307391,
-0.97608856, 0.59744363, -0.93726817, 0.58006304],
[ nan, nan, nan, nan, nan,
nan, nan, nan, nan],
[-0.49869515, -0.39609005, -0.67191297, -0.96747226, -1.34576115,
-0.97608856, 0.59744363, -0.93726817, 0.58006304],
[ nan, nan, nan, nan, nan,
nan, nan, nan, nan],
[ 0.09153914, -0.39609005, -0.75458501, 1.03362137, 0.74307391,
-0.97608856, 0.59744363, -0.93726817, 0.58006304],
[-0.49869515, -0.39609005, -0.50656888, -0.96747226, 0.74307391,
-0.97608856, 0.59744363, -0.93726817, 0.58006304],
[-0.49869515, -0.39609005, -0.79592103, -0.96747226, 0.74307391,
1.02449721, -1.67379807, 1.06693051, -1.72395057],
[ 0.68177344, 2.20020466, 0.48549566, -0.96747226, -1.34576115,
1.02449721, -1.67379807, 1.06693051, -1.72395057],
[-0.20357801, -0.39609005, -0.58924092, 1.03362137, 0.74307391,
1.02449721, 0.59744363, 1.06693051, 0.58006304]])

Also, when I run X_test[:10], I get a similar result:

array([[ 3.04271061,  1.33477309, -2.11867374,  1.03362137,  0.74307391,
1.02449721, 0.59744363, 1.06693051, 0.58006304],
[-0.49869515, 0.46934152, -0.13454468, -0.96747226, -1.34576115,
1.02449721, 0.59744363, -0.93726817, 0.58006304],
[ 0.09153914, -0.39609005, -0.75458501, 1.03362137, 0.74307391,
1.02449721, 0.59744363, 1.06693051, 0.58006304],
[-0.20357801, -0.39609005, 1.43622417, 1.03362137, -1.34576115,
1.02449721, 0.59744363, 1.06693051, 0.58006304],
[ nan, nan, nan, nan, nan,
nan, nan, nan, nan],
[-0.49869515, -0.39609005, -1.45729739, -0.96747226, -1.34576115,
-0.97608856, 0.59744363, -0.93726817, 0.58006304],
[ 1.27200773, 2.20020466, -0.25855274, 1.03362137, 0.74307391,
1.02449721, 0.59744363, 1.06693051, 0.58006304],
[-0.20357801, -0.39609005, -1.12660921, 1.03362137, -1.34576115,
-0.97608856, 0.59744363, -0.93726817, 0.58006304],
[ nan, nan, nan, nan, nan,
nan, nan, nan, nan],
[-0.49869515, -0.39609005, -0.96126512, -0.96747226, -1.34576115,
-0.97608856, 0.59744363, -0.93726817, 0.58006304]])

The point is, I do not know why these NaNs are there; my only guess is that I may have infinite values, because I made sure I do not have any missing values.
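For reference, both possibilities are quick to check on the feature columns before splitting. A minimal sketch, assuming the dataframe2 and features() defined above and that all feature columns are numeric:

X_check = features(dataframe = dataframe2)
print(X_check.isna().sum())      # number of NaN entries per feature column
print(np.isinf(X_check).sum())   # number of infinite entries per feature column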

Hopefully that gives enough background on my problem. If anyone could lend a hand, it would be greatly appreciated.

Best answer

I ran into a similar problem myself after reading a dataframe from a csv file. In my case the information written to the csv file contained NaNs, which caused the same issue. One option is to search the csv file for NaNs and see whether that is where your problem lies. Either way, if you still want to pass the data through the neural network without errors, you can remove those rows from the dataset. I loaded mine with numpy:

dataset = np.loadtxt("./CSV Files/Dataset.csv", delimiter=",")
dataset = dataset[~np.any(np.isnan(dataset), axis=1)]   # keep only the rows that contain no NaN

The second line finds every row of the original array that contains a NaN and removes it, so the data can be fed through the neural network. My dataset is a 2D array, so if a row contains any NaN element the entire row is dropped. One caveat: if you keep your ground truths in a separate file and they correspond to the NaN rows, you need to remove those as well. All you have to do is take the row indices from the dataset and delete the elements at those indices from the ground-truth list:

nanIndex = np.argwhere(np.isnan(dataset))        # (row, column) position of every NaN
nanIndex = np.delete(nanIndex, 1, 1)             # drop the column index, keep only the row index
nanIndex = np.unique(nanIndex)                   # one entry per affected row
truthValues = np.delete(truthValues, nanIndex)   # remove the matching ground-truth labels

where truthValues is your list of ground-truth labels (again, this is for the 2D-dataset case; it is different if the dataset is only 1D). What this code does is build a 2D array of the positions in the dataset where NaNs occur, which I then reduce to just the unique row indices. As an example, nanIndex starts out like this: (line 1)

[[153   0]
 [153   1]
 [153   2]
 [154   0]
 [154   1]]

which is converted to: (line 2)

[[153]
 [153]
 [153]
 [154]
 [154]]

and finally becomes: (line 3)

[[153]
 [154]]

Those row positions are then removed from the ground-truth array by line 4.
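To see all four lines end to end, here is a tiny self-contained example; the 5x2 dataset and the five labels are made up purely for illustration:

import numpy as np

dataset = np.array([[1.0, 2.0],
                    [np.nan, 3.0],
                    [4.0, 5.0],
                    [np.nan, np.nan],
                    [6.0, 7.0]])
truthValues = np.array([0, 1, 0, 1, 0])

nanIndex = np.argwhere(np.isnan(dataset))        # [[1 0] [3 0] [3 1]]
nanIndex = np.delete(nanIndex, 1, 1)             # [[1] [3] [3]]
nanIndex = np.unique(nanIndex)                   # [1 3]
truthValues = np.delete(truthValues, nanIndex)   # labels for rows 0, 2 and 4 remain
dataset = dataset[~np.any(np.isnan(dataset), axis=1)]   # rows 0, 2 and 4 remain

print(dataset)       # [[1. 2.] [4. 5.] [6. 7.]]
print(truthValues)   # [0 0 0]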

I hope this helps you with your problem. I know it does not give you a definite answer as to why the NaNs are in your dataframe, but it should help you get past not being able to feed the data through your neural network. This is probably not the most efficient way to remove NaNs from a 2D array, but it works, so if anyone has a better way, please feel free to let me know!
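As a side note, since the question loads its data with pandas rather than np.loadtxt, the same row-dropping can be done on the dataframe itself before labelCreation() and features() are called, which keeps the labels and the feature rows aligned. This is only a sketch of an alternative, and treating infinities as NaN is an assumption based on the question's own guess:

import numpy as np

dataframe2 = dataframe2.replace([np.inf, -np.inf], np.nan)   # treat infinite values as missing
dataframe2 = dataframe2.dropna()                             # drop every row that contains a NaN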

Regarding python-3.x - Getting NaN in X_train and X_test after train/test splitting the data, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/56827802/
