
scala - How to create a Spark Dataset or DataFrame from a case class containing an enumeration


I have been trying to create a Spark Dataset from a case class that contains an enumeration, but I cannot get it to work. I am using Spark version 1.6.0. The exception says that no encoder can be found for my enumeration. Is it not possible to have enumerations in the data in Spark?

Code:

import org.apache.spark.sql.SQLContext
import org.apache.spark.{SparkConf, SparkContext}

object MyEnum extends Enumeration {
  type MyEnum = Value
  val Hello, World = Value
}

case class MyData(field: String, other: MyEnum.Value)

object EnumTest {

  def main(args: Array[String]): Unit = {
    val sparkConf = new SparkConf().setAppName("test").setMaster("local[*]")
    val sc = new SparkContext(sparkConf)
    val sqlCtx = new SQLContext(sc)

    import sqlCtx.implicits._

    // Fails here: toDS() needs an implicit Encoder[MyData], and Spark 1.6
    // cannot derive one for the scala.Enumeration.Value field.
    val df = sc.parallelize(Array(MyData("hello", MyEnum.World))).toDS()

    println(s"df: ${df.collect().mkString(",")}")
  }

}

Error:

Exception in thread "main" java.lang.UnsupportedOperationException: No Encoder found for com.company.MyEnum.Value
- field (class: "scala.Enumeration.Value", name: "other")
- root class: "com.company.MyData"
at org.apache.spark.sql.catalyst.ScalaReflection$.org$apache$spark$sql$catalyst$ScalaReflection$$extractorFor(ScalaReflection.scala:597)
at org.apache.spark.sql.catalyst.ScalaReflection$$anonfun$org$apache$spark$sql$catalyst$ScalaReflection$$extractorFor$1.apply(ScalaReflection.scala:509)
at org.apache.spark.sql.catalyst.ScalaReflection$$anonfun$org$apache$spark$sql$catalyst$ScalaReflection$$extractorFor$1.apply(ScalaReflection.scala:502)
at scala.collection.TraversableLike$$anonfun$flatMap$1.apply(TraversableLike.scala:251)
at scala.collection.TraversableLike$$anonfun$flatMap$1.apply(TraversableLike.scala:251)
at scala.collection.immutable.List.foreach(List.scala:318)
at scala.collection.TraversableLike$class.flatMap(TraversableLike.scala:251)
at scala.collection.AbstractTraversable.flatMap(Traversable.scala:105)
at org.apache.spark.sql.catalyst.ScalaReflection$.org$apache$spark$sql$catalyst$ScalaReflection$$extractorFor(ScalaReflection.scala:502)
at org.apache.spark.sql.catalyst.ScalaReflection$.extractorsFor(ScalaReflection.scala:394)
at org.apache.spark.sql.catalyst.encoders.ExpressionEncoder$.apply(ExpressionEncoder.scala:54)
at org.apache.spark.sql.SQLImplicits.newProductEncoder(SQLImplicits.scala:41)
at com.company.EnumTest$.main(EnumTest.scala:22)
at com.company.EnumTest.main(EnumTest.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at com.intellij.rt.execution.application.AppMain.main(AppMain.java:147)

Best answer

You can create your own encoder:

import org.apache.spark.sql.SQLContext
import org.apache.spark.{SparkConf, SparkContext}

object MyEnum extends Enumeration {
  type MyEnum = Value
  val Hello, World = Value
}

case class MyData(field: String, other: MyEnum.Value)

object MyDataEncoders {
  // Kryo-serialize the whole MyData object, enum field included, so Spark
  // no longer has to derive a field-by-field encoder.
  implicit def myDataEncoder: org.apache.spark.sql.Encoder[MyData] =
    org.apache.spark.sql.Encoders.kryo[MyData]
}

object EnumTest {
  import MyDataEncoders._

  def main(args: Array[String]): Unit = {
    val sparkConf = new SparkConf().setAppName("test").setMaster("local[*]")
    val sc = new SparkContext(sparkConf)
    val sqlCtx = new SQLContext(sc)

    import sqlCtx.implicits._

    val df = sc.parallelize(Array(MyData("hello", MyEnum.World))).toDS()

    println(s"df: ${df.collect().mkString(",")}")
  }
}
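
Note that Encoders.kryo stores each MyData object as a single binary column, so the resulting Dataset has no per-field schema to query against. If you need a readable, columnar schema, a common workaround is to keep the enumeration as its String name inside the case class and convert at the boundaries with toString and withName. Below is a minimal sketch of that alternative under the same Spark 1.6 setup; MyDataRow and EnumStringTest are hypothetical names, not part of the original answer.

import org.apache.spark.sql.SQLContext
import org.apache.spark.{SparkConf, SparkContext}

object MyEnum extends Enumeration {
  type MyEnum = Value
  val Hello, World = Value
}

// Hypothetical variant of MyData that stores the enum as its String name,
// which Spark's built-in product encoder can handle.
case class MyDataRow(field: String, other: String)

object EnumStringTest {

  def main(args: Array[String]): Unit = {
    val sparkConf = new SparkConf().setAppName("test").setMaster("local[*]")
    val sc = new SparkContext(sparkConf)
    val sqlCtx = new SQLContext(sc)

    import sqlCtx.implicits._

    // Convert the enum to its name on the way in; the Dataset keeps a
    // readable two-column schema (field: String, other: String).
    val ds = sc.parallelize(Seq(MyDataRow("hello", MyEnum.World.toString))).toDS()

    // Convert back to MyEnum.Value with withName on the way out.
    val restored = ds.collect().map(r => (r.field, MyEnum.withName(r.other)))
    println(s"restored: ${restored.mkString(",")}")
  }

}

The trade-off: the Kryo encoder keeps the case class unchanged but yields an opaque binary column, while the String variant changes the case class but stays queryable with ordinary Dataset and DataFrame operations.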

Regarding "scala - How to create a Spark Dataset or DataFrame from a case class containing an enumeration", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/39661111/
