
scala - Spark Task not serializable


I have tried every solution to this problem that I could find on StackOverflow, but I still cannot get past it. I have a MainObj object that instantiates a Recommendation object. Whenever I call the recommendationProducts method I get the error. Here is the method's code:

def recommendationProducts(item: Int): Unit = {

  val aMatrix = new DoubleMatrix(Array(1.0, 2.0, 3.0))

  def cosineSimilarity(vec1: DoubleMatrix, vec2: DoubleMatrix): Double = {
    vec1.dot(vec2) / (vec1.norm2() * vec2.norm2())
  }

  val itemFactor = model.productFeatures.lookup(item).head
  val itemVector = new DoubleMatrix(itemFactor)

  // Here is where I get the error:
  val sims = model.productFeatures.map { case (id, factor) =>
    val factorVector = new DoubleMatrix(factor)
    val sim = cosineSimilarity(factorVector, itemVector)
    (id, sim)
  }

  val sortedSims = sims.top(10)(Ordering.by[(Int, Double), Double] {
    case (id, similarity) => similarity
  })

  println("\nTop 10 products:")
  sortedSims.map(x => (x._1, x._2)).foreach(println)
}

Here is the error:

Exception in thread "main" org.apache.spark.SparkException: Task not serializable
at org.apache.spark.util.ClosureCleaner$.ensureSerializable(ClosureCleaner.scala:298)
at org.apache.spark.util.ClosureCleaner$.org$apache$spark$util$ClosureCleaner$$clean(ClosureCleaner.scala:288)
at org.apache.spark.util.ClosureCleaner$.clean(ClosureCleaner.scala:108)
at org.apache.spark.SparkContext.clean(SparkContext.scala:2094)
at org.apache.spark.rdd.RDD$$anonfun$map$1.apply(RDD.scala:370)
at org.apache.spark.rdd.RDD$$anonfun$map$1.apply(RDD.scala:369)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
at org.apache.spark.rdd.RDD.withScope(RDD.scala:362)
at org.apache.spark.rdd.RDD.map(RDD.scala:369)
at RecommendationObj.recommendationProducts(RecommendationObj.scala:269)
at MainObj$.analisiIUNGO(MainObj.scala:257)
at MainObj$.menu(MainObj.scala:54)
at MainObj$.main(MainObj.scala:37)
at MainObj.main(MainObj.scala)
Caused by: java.io.NotSerializableException: org.apache.spark.SparkContext
Serialization stack:
- object not serializable (class: org.apache.spark.SparkContext, value: org.apache.spark.SparkContext@7c2312fa)
- field (class: RecommendationObj, name: sc, type: class org.apache.spark.SparkContext)
- object (class MainObj$$anon$1, MainObj$$anon$1@615bad16)
- field (class: RecommendationObj$$anonfun$37, name: $outer, type: class RecommendationObj)
- object (class RecommendationObj$$anonfun$37, <function1>)
at org.apache.spark.serializer.SerializationDebugger$.improveException(SerializationDebugger.scala:40)
at org.apache.spark.serializer.JavaSerializationStream.writeObject(JavaSerializer.scala:46)
at org.apache.spark.serializer.JavaSerializerInstance.serialize(JavaSerializer.scala:100)
at org.apache.spark.util.ClosureCleaner$.ensureSerializable(ClosureCleaner.scala:295)
... 14 more

I have already tried:

  • adding "extends Serializable" (Scala) to my class
  • adding "extends java.io.Serializable" to my class
  • adding @transient to some parts
  • obtaining the model (and the other features) inside this class (right now I get them from another object and pass them to my class as parameters)

How can I fix this? I'm going crazy! Thanks in advance!

Best Answer

The key is here:

 field (class: RecommendationObj, name: sc, type: class org.apache.spark.SparkContext)

So you have a field named sc of type SparkContext. The closure you pass to map captures the enclosing instance, so Spark has to serialize the whole class, and therefore all of its fields as well, including the SparkContext, which is not serializable.
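To make that concrete, here is a minimal sketch of the pattern the stack trace describes; the class and field names mirror the trace, but the rest is illustrative, not the original author's code:

import org.apache.spark.SparkContext
import org.apache.spark.mllib.recommendation.MatrixFactorizationModel
import org.jblas.DoubleMatrix

class RecommendationObj(val sc: SparkContext,            // non-serializable field
                        val model: MatrixFactorizationModel) {

  def cosineSimilarity(a: DoubleMatrix, b: DoubleMatrix): Double =
    a.dot(b) / (a.norm2() * b.norm2())

  def recommendationProducts(item: Int): Unit = {
    val itemVector = new DoubleMatrix(model.productFeatures.lookup(item).head)
    // cosineSimilarity is a member of `this`, so the closure captures the whole
    // RecommendationObj instance; Spark then tries to serialize `sc` and fails
    // with "Task not serializable".
    val sims = model.productFeatures.map { case (id, factor) =>
      (id, cosineSimilarity(new DoubleMatrix(factor), itemVector))
    }
  }
}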

You should either:

  • use the @transient annotation on the field, check whether it is null after deserialization, and recreate it if needed; or
  • not keep the SparkContext in a field at all, but pass it as a method parameter instead. Keep in mind that you should never use a SparkContext inside the closures of map, flatMap, etc. A sketch of both options follows this list.
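For illustration only, a minimal sketch of both options; the class names, constructor parameters and the SparkContext.getOrCreate fallback are assumptions, not the original author's code:

import org.apache.spark.SparkContext
import org.apache.spark.mllib.recommendation.MatrixFactorizationModel
import org.jblas.DoubleMatrix

// Option 1: keep the field, but exclude it from serialization with @transient.
// After deserialization the field comes back as null, so recreate it on demand
// (only ever on the driver).
class RecommendationWithTransient(@transient private var sc: SparkContext,
                                  val model: MatrixFactorizationModel) extends Serializable {
  private def context: SparkContext = {
    if (sc == null) sc = SparkContext.getOrCreate()
    sc
  }
}

// Option 2: do not store the SparkContext at all; pass it to the methods that
// need it on the driver, and keep everything the closure touches local.
class Recommendation(val model: MatrixFactorizationModel) extends Serializable {
  def recommendationProducts(sc: SparkContext, item: Int): Unit = {
    val itemVector = new DoubleMatrix(model.productFeatures.lookup(item).head)

    // The closure below only references the local val itemVector, so nothing
    // non-serializable is dragged in.
    val sims = model.productFeatures.map { case (id, factor) =>
      val factorVector = new DoubleMatrix(factor)
      (id, factorVector.dot(itemVector) / (factorVector.norm2() * itemVector.norm2()))
    }

    val sortedSims = sims.top(10)(Ordering.by[(Int, Double), Double](_._2))
    println("\nTop 10 products:")
    sortedSims.foreach(println)
  }
}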

Regarding "scala - Spark Task not serializable", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/46933930/
