
scala - CrossValidator does not support VectorUDT as a label in spark-ml


I am running into a problem with ml.CrossValidator in Scala Spark when using a one-hot encoder.

Here is my code:

import org.apache.spark.ml.Pipeline
import org.apache.spark.ml.classification.LogisticRegression
import org.apache.spark.ml.feature.{CountVectorizer, OneHotEncoder, StringIndexer, Tokenizer}

val tokenizer = new Tokenizer()
  .setInputCol("subjects")
  .setOutputCol("subject")

// CountVectorizer / TF
val countVectorizer = new CountVectorizer()
  .setInputCol("subject")
  .setOutputCol("features")

// convert strings into numerical values
val labelIndexer = new StringIndexer()
  .setInputCol("labelss")
  .setOutputCol("labelsss")

// convert the numerical values into a one-hot encoded vector
val labelEncoder = new OneHotEncoder()
  .setInputCol("labelsss")
  .setOutputCol("label")

val logisticRegression = new LogisticRegression()

val pipeline = new Pipeline()
  .setStages(Array(tokenizer, countVectorizer, labelIndexer, labelEncoder, logisticRegression))

It then gives me the following error:

cv: org.apache.spark.ml.tuning.CrossValidator = cv_8cc1ae985e39
java.lang.IllegalArgumentException: requirement failed: Column label must be of type NumericType but was actually of type org.apache.spark.ml.linalg.VectorUDT@3bfc3ba7.

I do not know how to fix this.

I need the one-hot encoder because my labels are categorical.

Thank you for helping me :)

Best Answer

There is actually no need to use OneHotEncoder/OneHotEncoderEstimator on the label (the target variable), and you should not do so: it produces a vector column (of type org.apache.spark.ml.linalg.VectorUDT), which is exactly what the error is complaining about.

A StringIndexer is sufficient to mark your label as categorical.
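
In practice that means dropping labelEncoder from the question's pipeline and letting the StringIndexer write directly to label. Here is a minimal sketch, not the original poster's code, reusing the column names from the question; the label column then comes out as a plain numeric column with nominal metadata, which is what LogisticRegression (and hence CrossValidator) expects:

import org.apache.spark.ml.Pipeline
import org.apache.spark.ml.classification.LogisticRegression
import org.apache.spark.ml.feature.{CountVectorizer, StringIndexer, Tokenizer}

val tokenizer = new Tokenizer()
  .setInputCol("subjects")
  .setOutputCol("subject")

val countVectorizer = new CountVectorizer()
  .setInputCol("subject")
  .setOutputCol("features")

// StringIndexer alone produces a numeric label column that carries
// nominal (categorical) metadata - no OneHotEncoder needed for the target.
val labelIndexer = new StringIndexer()
  .setInputCol("labelss")
  .setOutputCol("label")

val logisticRegression = new LogisticRegression()

val pipeline = new Pipeline()
  .setStages(Array(tokenizer, countVectorizer, labelIndexer, logisticRegression))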

Let's verify the StringIndexer claim with a small example:

val df = Seq((0, "a"),(1, "b"),(2, "c"),(3, "a"),(4, "a"),(5, "c")).toDF("category", "text")
// df: org.apache.spark.sql.DataFrame = [category: int, text: string]

val indexer = new StringIndexer().setInputCol("category").setOutputCol("categoryIndex").fit(df)
// indexer: org.apache.spark.ml.feature.StringIndexerModel = strIdx_cf691c087e1d

val indexed = indexer.transform(df)
// indexed: org.apache.spark.sql.DataFrame = [category: int, text: string ... 1 more field]

indexed.schema.map(_.metadata).foreach(println)
// {}
// {}
// {"ml_attr":{"vals":["4","5","1","0","2","3"],"type":"nominal","name":"categoryIndex"}}

As you can see, StringIndexer actually attaches metadata to that column (categoryIndex) and flags it as nominal, a.k.a. categorical.

You can also see that the column's attributes contain the list of categories.
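
If you prefer to read those attributes programmatically instead of printing the raw metadata, a small sketch using Spark's ml.attribute API, applied to the indexed DataFrame above, might look like this:

import org.apache.spark.ml.attribute.{Attribute, NominalAttribute}

// Rebuild the ML attribute from the column's schema metadata.
val attr = Attribute.fromStructField(indexed.schema("categoryIndex"))

attr match {
  case nominal: NominalAttribute =>
    // StringIndexer stores the original values, ordered by index.
    println(nominal.values.map(_.mkString(", ")).getOrElse("no values stored"))
  case _ =>
    println("categoryIndex carries no nominal metadata")
}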

There is more on this in my other answer, How to handle categorical features with spark-ml?

Regarding preparing data and metadata with spark-ml, I strongly recommend reading the following entry:

https://github.com/awesome-spark/spark-gotchas/blob/5ad4c399ffd2821875f608be8aff9f1338478444/06_data_preparation.md

Disclaimer: I am a co-author of the linked entry.

Note (excerpt from the documentation):

Because this existing OneHotEncoder is a stateless transformer, it is not usable on new data where the number of categories may differ from the training data. In order to fix this, a new OneHotEncoderEstimator was created that produces an OneHotEncoderModel when fitting. For more detail, please see SPARK-13030.

OneHotEncoder has been deprecated in 2.3.0 and will be removed in 3.0.0. Please use OneHotEncoderEstimator instead.
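
If you do need one-hot encoding for a categorical feature column (as opposed to the label), a hedged sketch using OneHotEncoderEstimator on Spark 2.3+, reusing the indexed DataFrame from the example above, could be:

import org.apache.spark.ml.feature.OneHotEncoderEstimator

// Fitting bakes the number of categories into the resulting model,
// so it behaves consistently on new data.
val encoder = new OneHotEncoderEstimator()
  .setInputCols(Array("categoryIndex"))
  .setOutputCols(Array("categoryVec"))

val encoded = encoder.fit(indexed).transform(indexed)
encoded.select("categoryIndex", "categoryVec").show(false)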

Regarding scala - CrossValidator does not support VectorUDT as a label in spark-ml, the original question can be found on Stack Overflow: https://stackoverflow.com/questions/50598738/
