
apache-spark - maxCategories in VectorIndexer does not work as expected when using RandomForestClassifier in pyspark.ml


Background: I'm doing a simple binary classification with RandomForestClassifier from pyspark.ml. Before feeding the data to training, I use VectorIndexer to decide whether features are numerical or categorical by supplying the maxCategories parameter.

Problem: even though I used VectorIndexer with maxCategories set to 30, I still got an error during the training pipeline:

An error occurred while calling o15371.fit.
: java.lang.IllegalArgumentException: requirement failed: DecisionTree requires maxBins (= 32) to be at least as large as the number of values in each categorical feature, but categorical feature 0 has 10765 values. Considering remove this and other categorical features with a large number of values, or add more training examples.
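
The requirement is mechanical: DecisionTree needs maxBins to be at least as large as the number of distinct values of every categorical feature, and feature 0 here has 10765 of them. The cardinality of the suspect columns can be checked directly; a minimal sketch, using my_data and col_idx as defined in the code below:

for c in col_idx:
    # any count above maxBins (default 32) will trigger this error
    # if the column ends up marked as categorical
    print(c, my_data.select(c).distinct().count())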

My code is simple. col_idx is a list of column-name strings I generate that will be passed to StringIndexer; col_all is a list of column-name strings that will be passed to both StringIndexer and OneHotEncoderEstimator; col_num holds the numeric column names.

from pyspark.ml.feature import OneHotEncoderEstimator, StringIndexer, VectorAssembler, IndexToString, VectorIndexer
from pyspark.ml import Pipeline
from pyspark.ml.classification import RandomForestClassifier

my_data.cache()

# string indexers and one-hot encoder
stIndexers = [StringIndexer(inputCol = Col, outputCol = Col + 'Index').setHandleInvalid('keep') for Col in col_idx]
encoder = OneHotEncoderEstimator(inputCols = [Col + 'Index' for Col in col_all], outputCols = [Col + 'ClassVec' for Col in col_all]).setHandleInvalid('keep')

# vector assembler
col_into_assembler = [cols + 'Index' for cols in col_idx] + [cols + 'ClassVec' for cols in col_all] + col_num
assembler = VectorAssembler(inputCols = col_into_assembler, outputCol = "features")

# featureIndexer, labelIndexer, rf classifier and labelConverter
featureIndexer = VectorIndexer(inputCol = "features", outputCol = "indexedFeatures", maxCategories = 30)
# features with at most maxCategories distinct values => categorical features,
# features with more distinct values => numerical / continuous features
# (a smaller maxCategories means fewer categorical features, a larger one more)
labelIndexer = StringIndexer(inputCol = "label", outputCol = "indexedLabel").fit(my_data)
rf = RandomForestClassifier(featuresCol = "indexedFeatures", labelCol = "indexedLabel")
labelConverter = IndexToString(inputCol = "prediction", outputCol = "predictedLabel", labels=labelIndexer.labels)

# chain all the estimators and transformers stages into a Pipeline estimator
rfPipeline = Pipeline(stages = stIndexers + [encoder, assembler, featureIndexer, labelIndexer, rf, labelConverter])

# split data, cache them
training, test = my_data.randomSplit([0.7, 0.3], seed = 100)
training.cache()
test.cache()

# fit the pipeline on the training dataset to get a PipelineModel holding the transformers and fitted models
ModelRF = rfPipeline.fit(training)

# make predictions
predictions = ModelRF.transform(test)
predictions.printSchema()
predictions.show(5)

So my question is: why are there still high-cardinality categorical features in my data even though I set maxCategories to 30 in the VectorIndexer? I can set maxBins on the rf classifier to a higher value, as in the sketch below, but I'm just curious: why doesn't VectorIndexer work as expected (well, as I expected), turning features with at most maxCategories distinct values into categorical features and the rest into numerical / continuous ones?
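
For completeness, raising maxBins does make the error go away; a minimal sketch (11000 is an arbitrary value picked only because it exceeds the 10765 values reported in the error):

# maxBins must be >= the cardinality of the largest categorical feature
rf = RandomForestClassifier(featuresCol = "indexedFeatures", labelCol = "indexedLabel", maxBins = 11000)

Note that very large maxBins values increase training time and memory, so this is a workaround rather than a fix.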

Best Answer

It looks like, contrary to the documentation, which lists

Preserve metadata in transform; if a feature's metadata is already present, do not recompute.

among the TODOs, metadata is already preserved. Since StringIndexer attaches nominal metadata to each column it produces, VectorIndexer trusts that metadata instead of recomputing it, so those features stay categorical with their full set of values regardless of maxCategories:

from pyspark.sql.functions import col
from pyspark.ml import Pipeline
from pyspark.ml.feature import *

df = spark.range(10)

stages = [
    StringIndexer(inputCol="id", outputCol="idx"),
    VectorAssembler(inputCols=["idx"], outputCol="features"),
    VectorIndexer(inputCol="features", outputCol="features_indexed", maxCategories=5)
]
Pipeline(stages=stages).fit(df).transform(df).schema["features"].metadata
# {'ml_attr': {'attrs': {'nominal': [{'vals': ['8', '4', '9', '5', '6',
#                                              '1', '0', '2', '7', '3'],
#                                     'idx': 0,
#                                     'name': 'idx'}]},
#              'num_attrs': 1}}

Pipeline(stages=stages).fit(df).transform(df).schema["features_indexed"].metadata

# {'ml_attr': {'attrs': {'nominal': [{'ord': False,
#                                     'vals': ['0.0', '1.0', '2.0', '3.0', '4.0',
#                                              '5.0', '6.0', '7.0', '8.0', '9.0'],
#                                     'idx': 0,
#                                     'name': 'idx'}]},
#              'num_attrs': 1}}

In normal cases this is the desired behavior. You shouldn't use indexed categorical features as continuous variables: StringIndexer assigns indices by label frequency, so the numeric order of the indices carries no meaning.

But if you still want to circumvent this behavior, you'll have to reset the metadata, for example:

pipeline1 = Pipeline(stages=stages[:1])  # StringIndexer only
pipeline2 = Pipeline(stages=stages[1:])  # VectorAssembler + VectorIndexer

# re-alias the indexed column with empty metadata to drop its nominal attributes
dft1 = pipeline1.fit(df).transform(df).withColumn("idx", col("idx").alias("idx", metadata={}))
dft2 = pipeline2.fit(dft1).transform(dft1)


dft2.schema["features_indexed"].metadata

# {'ml_attr': {'attrs': {'numeric': [{'idx': 0, 'name': 'idx'}]},
#              'num_attrs': 1}}
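
Applied to the pipeline from the question, the same trick would look roughly like this (a sketch under the question's definitions of stIndexers, encoder, assembler, featureIndexer, labelIndexer, rf and labelConverter, not a tested, definitive implementation):

from pyspark.sql.functions import col

# 1) run only the string indexers, which attach the nominal metadata
indexPipeline = Pipeline(stages = stIndexers).fit(training)
indexed = indexPipeline.transform(training)

# 2) strip the metadata from the indexed columns so that VectorIndexer
#    re-evaluates them against maxCategories instead of trusting StringIndexer
for name in [c + 'Index' for c in col_idx]:
    indexed = indexed.withColumn(name, col(name).alias(name, metadata={}))

# 3) run the remaining stages on the cleaned DataFrame
restPipeline = Pipeline(stages = [encoder, assembler, featureIndexer, labelIndexer, rf, labelConverter])
ModelRF = restPipeline.fit(indexed)

The test set would need the same two-step treatment (indexPipeline.transform followed by the metadata reset) before calling ModelRF.transform on it.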

For apache-spark - maxCategories in VectorIndexer does not work as expected when using RandomForestClassifier in pyspark.ml, see the similar question on Stack Overflow: https://stackoverflow.com/questions/50467666/
