python - scikit-learn logistic regression model with TfidfVectorizer

I am trying to create a logistic regression model with scikit-learn using the code below. I use 9 columns as features (X) and one column as the label (Y). When trying to fit I get the error "ValueError: Found input variables with inconsistent numbers of samples: [9, 560000]", even though X and Y had the same length beforehand; if I use x.transpose() I get a different error, "AttributeError: 'int' object has no attribute 'lower'". I assume this has to do with the TfidfVectorizer, which I use because 3 of the columns contain single words and don't work otherwise. Is this the correct way to do it, or should I convert the word columns separately and then use train_test_split? If not, why am I getting these errors and how can I fix them? Here is a sample of the csv.

df = pd.read_csv("UNSW-NB15_1.csv",header=None, names=cols, encoding = "UTF-8",low_memory=False) 

df.to_csv('netraf.csv')
csv = 'netraf.csv'
my_df = pd.read_csv(csv)

x_features = my_df.columns[1:10]
x_data = my_df[x_features]
Y = my_df["Label"]

x_train, x_validation, y_train, y_validation = model_selection.train_test_split(
    x_data, Y, test_size=0.2, random_state=7)

tfidf_vectorizer = TfidfVectorizer()
lr = LogisticRegression()
tfidf_lr_pipe = Pipeline([('tfidf', tfidf_vectorizer), ('lr', lr)])

tfidf_lr_pipe.fit(x_train, y_train)

Best answer

What you are trying to do is unusual, because TfidfVectorizer is designed to extract numerical features from text: iterating over a DataFrame hands it the 9 column names as "documents", which is most likely why fitting reports 9 samples against 560,000 labels, and after transposing it ends up calling .lower() on raw integers. That said, if you don't really care and just want the code to run, one way is to convert the numerical data to strings and configure TfidfVectorizer to accept pre-tokenized data:

import pandas as pd
from sklearn import model_selection
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline

cols = ['srcip','sport','dstip','dsport','proto','service','smeansz','dmeansz','attack_cat','Label']
df = pd.read_csv("UNSW-NB15_1.csv",header=None, names=cols, encoding = "UTF-8",low_memory=False)

df.to_csv('netraf.csv')
csv = 'netraf.csv'
my_df = pd.read_csv(csv)

# replace nan with empty string like we don't care
# (do this before the string conversion, otherwise NaN has already become the literal string 'nan')
for col in my_df.columns[my_df.isna().any()].tolist():
    my_df[col] = my_df[col].fillna('')

# convert all columns to string like we don't care
for col in my_df.columns:
    my_df[col] = my_df[col].astype(str)

x_features = my_df.columns[1:10]
x_data = my_df[x_features]
Y = my_df["Label"]

x_train, x_validation, y_train, y_validation = model_selection.train_test_split(
    x_data.values, Y.values, test_size=0.2, random_state=7)

# configure TfidfVectorizer to accept tokenized data
# reference http://www.davidsbatista.net/blog/2018/02/28/TfidfVectorizer/
tfidf_vectorizer = TfidfVectorizer(
    analyzer='word',
    tokenizer=lambda x: x,
    preprocessor=lambda x: x,
    token_pattern=None)

lr = LogisticRegression()
tfidf_lr_pipe = Pipeline([('tfidf', tfidf_vectorizer), ('lr', lr)])
tfidf_lr_pipe.fit(x_train, y_train)
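
To check that the pipeline actually learned something, a minimal sketch is to score it on the held-out split; accuracy_score is just one possible metric here:

from sklearn.metrics import accuracy_score

# score the fitted pipeline on the validation split
y_pred = tfidf_lr_pipe.predict(x_validation)
print("validation accuracy:", accuracy_score(y_validation, y_pred))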

That being said, I would recommend a different approach to feature-engineering this dataset. For example, you could try to encode your nominal data (e.g. IPs, ports) into numerical values.
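
A minimal sketch of that idea, assuming the netraf.csv produced above and a hand-picked split of the columns into nominal and numeric (the column choice is only an illustration): one-hot encode the nominal columns with a ColumnTransformer and pass the numeric ones through unchanged.

import pandas as pd
from sklearn import model_selection
from sklearn.compose import ColumnTransformer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder

my_df = pd.read_csv('netraf.csv')

# assumed split of the features: IPs, ports, protocol and service are nominal,
# the mean packet sizes are already numeric
nominal_cols = ['srcip', 'sport', 'dstip', 'dsport', 'proto', 'service']
numeric_cols = ['smeansz', 'dmeansz']

preprocess = ColumnTransformer(
    [('onehot', OneHotEncoder(handle_unknown='ignore'), nominal_cols)],
    remainder='passthrough')  # numeric columns are passed through as-is

pipe = Pipeline([('prep', preprocess), ('lr', LogisticRegression(max_iter=1000))])

x_train, x_validation, y_train, y_validation = model_selection.train_test_split(
    my_df[nominal_cols + numeric_cols], my_df['Label'], test_size=0.2, random_state=7)

pipe.fit(x_train, y_train)

handle_unknown='ignore' keeps IPs or ports that only appear in the validation split from breaking prediction, which matters for high-cardinality fields like srcip.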

Regarding python - scikit-learn logistic regression model with TfidfVectorizer, a similar question can be found on Stack Overflow: https://stackoverflow.com/questions/52125784/
