
apache-spark - Spark exception: Values to assemble cannot be null

Reposted · Author: 行者123 · Updated: 2023-12-04 11:13:21

I want to use StandardScaler to standardize my features.

Here is my code:

import org.apache.spark.ml.feature.{StandardScaler, VectorAssembler}

val Array(trainingData, testData) = dataset.randomSplit(Array(0.7, 0.3))
val vectorAssembler = new VectorAssembler().setInputCols(inputCols).setOutputCol("features").transform(trainingData)
val stdscaler = new StandardScaler().setInputCol("features").setOutputCol("scaledFeatures").setWithStd(true).setWithMean(false).fit(vectorAssembler)

But when I try to fit the StandardScaler it throws an exception:
[Stage 151:==>                                                    (9 + 2) / 200]16/12/28 20:13:57 WARN scheduler.TaskSetManager: Lost task 31.0 in stage 151.0 (TID 8922, slave1.hadoop.ml): org.apache.spark.SparkException: Values to assemble cannot be null.
at org.apache.spark.ml.feature.VectorAssembler$$anonfun$assemble$1.apply(VectorAssembler.scala:159)
at org.apache.spark.ml.feature.VectorAssembler$$anonfun$assemble$1.apply(VectorAssembler.scala:142)
at scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33)
at scala.collection.mutable.WrappedArray.foreach(WrappedArray.scala:35)
at org.apache.spark.ml.feature.VectorAssembler$.assemble(VectorAssembler.scala:142)
at org.apache.spark.ml.feature.VectorAssembler$$anonfun$3.apply(VectorAssembler.scala:98)
at org.apache.spark.ml.feature.VectorAssembler$$anonfun$3.apply(VectorAssembler.scala:97)
at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIterator.processNext(Unknown Source)
at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
at org.apache.spark.sql.execution.WholeStageCodegenExec$$anonfun$8$$anon$1.hasNext(WholeStageCodegenExec.scala:370)
at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:408)
at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:408)
at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:408)
at scala.collection.Iterator$class.foreach(Iterator.scala:893)
at scala.collection.AbstractIterator.foreach(Iterator.scala:1336)
at scala.collection.TraversableOnce$class.foldLeft(TraversableOnce.scala:157)
at scala.collection.AbstractIterator.foldLeft(Iterator.scala:1336)
at scala.collection.TraversableOnce$class.aggregate(TraversableOnce.scala:214)
at scala.collection.AbstractIterator.aggregate(Iterator.scala:1336)
at org.apache.spark.rdd.RDD$$anonfun$treeAggregate$1$$anonfun$24.apply(RDD.scala:1093)
at org.apache.spark.rdd.RDD$$anonfun$treeAggregate$1$$anonfun$24.apply(RDD.scala:1093)
at org.apache.spark.rdd.RDD$$anonfun$treeAggregate$1$$anonfun$25.apply(RDD.scala:1094)
at org.apache.spark.rdd.RDD$$anonfun$treeAggregate$1$$anonfun$25.apply(RDD.scala:1094)
at org.apache.spark.rdd.RDD$$anonfun$mapPartitions$1$$anonfun$apply$23.apply(RDD.scala:766)
at org.apache.spark.rdd.RDD$$anonfun$mapPartitions$1$$anonfun$apply$23.apply(RDD.scala:766)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:319)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:283)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:319)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:283)
at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:79)
at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:47)
at org.apache.spark.scheduler.Task.run(Task.scala:85)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:274)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Is there anything wrong with VectorAssembler?

I checked a few rows of the VectorAssembler output and they looked fine:

vectorAssembler.take(5)
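
take(5) only inspects the first few rows, and the failing task suggests the bad rows are buried deep in the data. A quick way to check whether any input column actually contains nulls is to count them per column; a minimal sketch, assuming the inputCols and trainingData from the code above:

import org.apache.spark.sql.functions.{col, count, when}

// Count the nulls in each input column; any non-zero count
// explains the "Values to assemble cannot be null" failure.
trainingData
  .select(inputCols.map(c => count(when(col(c).isNull, c)).alias(c)): _*)
  .show()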

Best Answer

Spark >= 2.4

Since Spark 2.4, VectorAssembler extends HasHandleInvalid. This means you can skip nulls:

assembler.setHandleInvalid("skip").transform(df).show

+---+---+---------+
| x1| x2| features|
+---+---+---------+
|3.0|4.0|[3.0,4.0]|
+---+---+---------+

keep them (note that ML algorithms are unlikely to handle this correctly):

assembler.setHandleInvalid("keep").transform(df).show

+----+----+---------+
| x1| x2| features|
+----+----+---------+
| 1.0|null|[1.0,NaN]|
|null| 2.0|[NaN,2.0]|
| 3.0| 4.0|[3.0,4.0]|
+----+----+---------+

or fail with error (the default).
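
Applied to the code from the question, the fix is a one-line change. A sketch, assuming Spark >= 2.4 and the trainingData and inputCols from the question:

import org.apache.spark.ml.feature.{StandardScaler, VectorAssembler}

// "skip" silently drops every row that has a null in any input column.
val assembled = new VectorAssembler()
  .setInputCols(inputCols)
  .setOutputCol("features")
  .setHandleInvalid("skip")
  .transform(trainingData)

val scalerModel = new StandardScaler()
  .setInputCol("features")
  .setOutputCol("scaledFeatures")
  .setWithStd(true)
  .setWithMean(false)
  .fit(assembled)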

Spark < 2.4

There is nothing wrong with VectorAssembler. A Spark Vector simply cannot contain null values.

import org.apache.spark.ml.feature.VectorAssembler
import spark.implicits._ // for toDF; already in scope in spark-shell

val df = Seq(
(Some(1.0), None), (None, Some(2.0)), (Some(3.0), Some(4.0))
).toDF("x1", "x2")

val assembler = new VectorAssembler()
.setInputCols(df.columns).setOutputCol("features")

assembler.transform(df).show(3)

org.apache.spark.SparkException: Failed to execute user defined function($anonfun$3: (struct<x1:double,x2:double>) => vector)
...
Caused by: org.apache.spark.SparkException: Values to assemble cannot be null.

Nulls are meaningless to ML algorithms and cannot be represented by scala.Double.

You have to either drop them:

assembler.transform(df.na.drop).show(2)

+---+---+---------+
| x1| x2| features|
+---+---+---------+
|3.0|4.0|[3.0,4.0]|
+---+---+---------+

or fill/impute the missing values (see also Replace missing values with mean - Spark Dataframe):

// For example with averages
val replacements: Map[String,Any] = Map("x1" -> 2.0, "x2" -> 3.0)
assembler.transform(df.na.fill(replacements)).show(3)

+---+---+---------+
| x1| x2| features|
+---+---+---------+
|1.0|3.0|[1.0,3.0]|
|2.0|2.0|[2.0,2.0]|
|3.0|4.0|[3.0,4.0]|
+---+---+---------+
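
Rather than hard-coding the fill values, the per-column means can be computed automatically. A minimal sketch using org.apache.spark.ml.feature.Imputer (available since Spark 2.2), reusing the df from above; the *_imputed output column names are illustrative:

import org.apache.spark.ml.feature.{Imputer, VectorAssembler}

// Replace each column's missing values with that column's mean
// (setStrategy("median") is also supported).
val imputer = new Imputer()
  .setInputCols(Array("x1", "x2"))
  .setOutputCols(Array("x1_imputed", "x2_imputed"))
  .setStrategy("mean")

val imputed = imputer.fit(df).transform(df)

new VectorAssembler()
  .setInputCols(Array("x1_imputed", "x2_imputed"))
  .setOutputCol("features")
  .transform(imputed)
  .show(3)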

A similar question about apache-spark - Spark exception: Values to assemble cannot be null can be found on Stack Overflow: https://stackoverflow.com/questions/41362295/
