
java - Spark reading a text file locally throws an exception in thread "main": org.apache.spark.SparkException: Task not serializable


I am writing my first Spark program in Java and cannot figure out the error below. I have gone through many questions on Stack Overflow, but they do not seem related to my problem. I am using the latest version of Spark, 2.4.4, and running the application locally.

Here is my program:

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;

public class SparkTextFile {

    public static void main(String args[]) {

        SparkConf conf = new SparkConf().setAppName("textfilereading").setMaster("local[*]");
        JavaSparkContext context = new JavaSparkContext(conf);
        JavaRDD<String> textRDD = context.textFile("/Users/user/Downloads/AccountHistory.csv");
        textRDD.foreach(System.out::println);
        context.close();
    }
}

Here are the dependencies from my pom file:

<dependency> <!-- Spark dependency -->
  <groupId>org.apache.spark</groupId>
  <artifactId>spark-sql_2.12</artifactId>
  <version>2.4.4</version>
  <scope>provided</scope>
  <exclusions>
    <exclusion>
      <groupId>org.codehaus.janino</groupId>
      <artifactId>commons-compiler</artifactId>
    </exclusion>
    <exclusion>
      <groupId>org.codehaus.janino</groupId>
      <artifactId>janino</artifactId>
    </exclusion>
  </exclusions>
</dependency>

<dependency>
  <groupId>org.codehaus.janino</groupId>
  <artifactId>commons-compiler</artifactId>
  <version>3.0.8</version>
</dependency>
<dependency>
  <groupId>org.codehaus.janino</groupId>
  <artifactId>janino</artifactId>
  <version>3.0.8</version>
</dependency>
<!-- https://mvnrepository.com/artifact/org.apache.hadoop/hadoop-hdfs -->
<dependency>
  <groupId>org.apache.hadoop</groupId>
  <artifactId>hadoop-hdfs</artifactId>
  <version>3.2.1</version>
</dependency>

Here is the error I am getting:

Exception in thread "main" org.apache.spark.SparkException: Task not serializable
at org.apache.spark.util.ClosureCleaner$.ensureSerializable(ClosureCleaner.scala:403)
at org.apache.spark.util.ClosureCleaner$.clean(ClosureCleaner.scala:393)
at org.apache.spark.util.ClosureCleaner$.clean(ClosureCleaner.scala:162)
at org.apache.spark.SparkContext.clean(SparkContext.scala:2326)
at org.apache.spark.rdd.RDD.$anonfun$foreach$1(RDD.scala:926)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
at org.apache.spark.rdd.RDD.withScope(RDD.scala:363)
at org.apache.spark.rdd.RDD.foreach(RDD.scala:925)
at org.apache.spark.api.java.JavaRDDLike.foreach(JavaRDDLike.scala:351)
at org.apache.spark.api.java.JavaRDDLike.foreach$(JavaRDDLike.scala:350)
at org.apache.spark.api.java.AbstractJavaRDDLike.foreach(JavaRDDLike.scala:45)

Caused by: java.io.NotSerializableException: java.io.PrintStream
Serialization stack:
- object not serializable (class: java.io.PrintStream, value: java.io.PrintStream@4df39a88)
- element of array (index: 0)
- array (class [Ljava.lang.Object;, size 1)
- field (class: java.lang.invoke.SerializedLambda, name: capturedArgs, type: class [Ljava.lang.Object;)
- object (class java.lang.invoke.SerializedLambda, SerializedLambda[capturingClass=class com.SparkTextFile, functionalInterfaceMethod=org/apache/spark/api/java/function/VoidFunction.call:(Ljava/lang/Object;)V, implementation=invokeVirtual java/io/PrintStream.println:(Ljava/lang/String;)V, instantiatedMethodType=(Ljava/lang/String;)V, numCaptured=1])
- writeReplace data (class: java.lang.invoke.SerializedLambda)
- object (class com.SparkTextFile$$Lambda$622/1779219567, com.SparkTextFile$$Lambda$622/1779219567@a137d7a)
- element of array (index: 0)
- array (class [Ljava.lang.Object;, size 1)
- field (class: java.lang.invoke.SerializedLambda, name: capturedArgs, type: class [Ljava.lang.Object;)
- object (class java.lang.invoke.SerializedLambda, SerializedLambda[capturingClass=interface org.apache.spark.api.java.JavaRDDLike, functionalInterfaceMethod=scala/Function1.apply:(Ljava/lang/Object;)Ljava/lang/Object;, implementation=invokeStatic org/apache/spark/api/java/JavaRDDLike.$anonfun$foreach$1$adapted:(Lorg/apache/spark/api/java/function/VoidFunction;Ljava/lang/Object;)Ljava/lang/Object;, instantiatedMethodType=(Ljava/lang/Object;)Ljava/lang/Object;, numCaptured=1])
- writeReplace data (class: java.lang.invoke.SerializedLambda)
- object (class org.apache.spark.api.java.JavaRDDLike$$Lambda$623/1871259950, org.apache.spark.api.java.JavaRDDLike$$Lambda$623/1871259950@4ab550d5)
at org.apache.spark.serializer.SerializationDebugger$.improveException(SerializationDebugger.scala:41)
at org.apache.spark.serializer.JavaSerializationStream.writeObject(JavaSerializer.scala:46)
at org.apache.spark.serializer.JavaSerializerInstance.serialize(JavaSerializer.scala:100)
at org.apache.spark.util.ClosureCleaner$.ensureSerializable(ClosureCleaner.scala:400)
... 12 more

I am not sure why this error occurs, since I am not using any object that would need to be serialized other than reading from the file.

I changed the following line

textRDD.foreach(System.out::println);

to

textRDD.collect().forEach(System.out::println);

I added collect() to see what the output would be, and now I see a different error message:

Exception in thread "main" java.lang.IllegalAccessError: tried to access method com.google.common.base.Stopwatch.<init>()V from class org.apache.hadoop.mapred.FileInputFormat
at org.apache.hadoop.mapred.FileInputFormat.getSplits(FileInputFormat.java:312)
at org.apache.spark.rdd.HadoopRDD.getPartitions(HadoopRDD.scala:204)
at org.apache.spark.rdd.RDD.$anonfun$partitions$2(RDD.scala:253)
at scala.Option.getOrElse(Option.scala:138)
at org.apache.spark.rdd.RDD.partitions(RDD.scala:251)
at org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:49)
at org.apache.spark.rdd.RDD.$anonfun$partitions$2(RDD.scala:253)
at scala.Option.getOrElse(Option.scala:138)
at org.apache.spark.rdd.RDD.partitions(RDD.scala:251)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:2126)
at org.apache.spark.rdd.RDD.$anonfun$collect$1(RDD.scala:945)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
at org.apache.spark.rdd.RDD.withScope(RDD.scala:363)
at org.apache.spark.rdd.RDD.collect(RDD.scala:944)
at org.apache.spark.api.java.JavaRDDLike.collect(JavaRDDLike.scala:361)
at org.apache.spark.api.java.JavaRDDLike.collect$(JavaRDDLike.scala:360)
at org.apache.spark.api.java.AbstractJavaRDDLike.collect(JavaRDDLike.scala:45)

I do not understand this error either. Can someone explain what it means and how to fix it?

Best Answer

(Disclaimer: I am not a Java developer, so I will try to answer based on my experience with Scala.)

You are using a higher-order function here: foreach. Higher-order functions serialize the function you pass to them and ship it to the RDD's partitions (which are usually distributed across machines on a network). As your stack trace shows (java.io.NotSerializableException: java.io.PrintStream), the method reference System.out::println captures the System.out PrintStream instance, which is not serializable. One approach is to use a lambda expression in Java instead and change the line above as follows:

textRDD.foreach(s -> System.out.println(s));
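
Why this works where the method reference does not: System.out::println is bound to the System.out PrintStream instance at the moment the reference is created, so that instance becomes a captured argument of the closure and Spark tries to serialize it. The lambda s -> System.out.println(s) captures nothing; it only reads the static System.out field when the task actually runs. As a minimal sketch (reusing the class name and file path from the question), the corrected program could look like this:

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;

public class SparkTextFile {

    public static void main(String[] args) {
        SparkConf conf = new SparkConf()
                .setAppName("textfilereading")
                .setMaster("local[*]");
        JavaSparkContext context = new JavaSparkContext(conf);

        JavaRDD<String> textRDD = context.textFile("/Users/user/Downloads/AccountHistory.csv");

        // The lambda captures no objects; System.out is resolved when the task runs,
        // so the closure can be serialized and shipped to the partitions.
        textRDD.foreach(s -> System.out.println(s));

        context.close();
    }
}

Keep in mind that with foreach the printing happens wherever the tasks run: in local[*] mode that is your console, but on a real cluster the lines would end up in the executor logs. Collecting to the driver first (as you tried with collect()) is the usual way to print on the driver, as long as the data fits in driver memory.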

Hope this helps! :)

Regarding "java - Spark reading a text file locally throws an exception in thread "main": org.apache.spark.SparkException: Task not serializable", a similar question was found on Stack Overflow: https://stackoverflow.com/questions/59495465/
