
java - Spark serialization oddities


I am hitting a problem with Spark when using the JavaPairRDD.repartitionAndSortWithinPartitions method. I have tried everything a sane person could think of, and finally wrote a snippet small enough to visualize the problem:

import java.io.Serializable;
import java.util.Arrays;
import java.util.Comparator;
import java.util.List;

import org.apache.spark.HashPartitioner;
import org.apache.spark.Partitioner;
import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaSparkContext;

import scala.Tuple2;

public class Main {
    public static void main(String[] args) {
        SparkConf conf = new SparkConf().setAppName("test").setMaster("local");
        JavaSparkContext sc = new JavaSparkContext(conf);

        final List<String> list = Arrays.asList("I", "am", "totally", "baffled");
        final HashPartitioner partitioner = new HashPartitioner(2);

        // Same job, five different comparators.
        doSomething(sc, list, partitioner, String.CASE_INSENSITIVE_ORDER);
        doSomething(sc, list, partitioner, Main::compareString);
        doSomething(sc, list, partitioner, new StringComparator());
        doSomething(sc, list, partitioner, new SerializableStringComparator());
        doSomething(sc, list, partitioner, (s1, s2) -> Integer.compare(s1.charAt(0), s2.charAt(0)));
    }

    public static <T> void doSomething(JavaSparkContext sc, List<T> list, Partitioner partitioner, Comparator<T> comparator) {
        try {
            sc.parallelize(list)
              .mapToPair(elt -> new Tuple2<>(elt, elt))
              .repartitionAndSortWithinPartitions(partitioner, comparator)
              .count();
            System.out.println("success");
        } catch (Exception e) {
            System.out.println("failure");
        }
    }

    public static int compareString(String s1, String s2) {
        return Integer.compare(s1.charAt(0), s2.charAt(0));
    }

    public static class StringComparator implements Comparator<String> {
        @Override
        public int compare(String s1, String s2) {
            return Integer.compare(s1.charAt(0), s2.charAt(0));
        }
    }

    public static class SerializableStringComparator implements Comparator<String>, Serializable {
        @Override
        public int compare(String s1, String s2) {
            return Integer.compare(s1.charAt(0), s2.charAt(0));
        }
    }
}

Apart from the Spark logging, it outputs:

success
failure
failure
success
failure

The exception thrown on failure is always the same:

org.apache.spark.SparkException: Job aborted due to stage failure: Task serialization failed: java.lang.reflect.InvocationTargetException
sun.reflect.GeneratedMethodAccessor3.invoke(Unknown Source)
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
java.lang.reflect.Method.invoke(Method.java:483)
org.apache.spark.serializer.SerializationDebugger$ObjectStreamClassMethods$.getObjFieldValues$extension(SerializationDebugger.scala:240)
org.apache.spark.serializer.SerializationDebugger$SerializationDebugger.visitSerializable(SerializationDebugger.scala:150)
org.apache.spark.serializer.SerializationDebugger$SerializationDebugger.visit(SerializationDebugger.scala:99)
org.apache.spark.serializer.SerializationDebugger$SerializationDebugger.visitSerializable(SerializationDebugger.scala:158)
org.apache.spark.serializer.SerializationDebugger$SerializationDebugger.visit(SerializationDebugger.scala:99)
org.apache.spark.serializer.SerializationDebugger$.find(SerializationDebugger.scala:58)
org.apache.spark.serializer.SerializationDebugger$.improveException(SerializationDebugger.scala:39)
org.apache.spark.serializer.JavaSerializationStream.writeObject(JavaSerializer.scala:47)
org.apache.spark.serializer.JavaSerializerInstance.serialize(JavaSerializer.scala:80)
org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$submitMissingTasks(DAGScheduler.scala:835)
org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$submitStage(DAGScheduler.scala:778)
org.apache.spark.scheduler.DAGScheduler$$anonfun$org$apache$spark$scheduler$DAGScheduler$$submitStage$4.apply(DAGScheduler.scala:781)
org.apache.spark.scheduler.DAGScheduler$$anonfun$org$apache$spark$scheduler$DAGScheduler$$submitStage$4.apply(DAGScheduler.scala:780)
scala.collection.immutable.List.foreach(List.scala:318)
org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$submitStage(DAGScheduler.scala:780)
org.apache.spark.scheduler.DAGScheduler.handleJobSubmitted(DAGScheduler.scala:762)
org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1362)
org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1354)
org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48)

at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1204)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1193)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1192)
at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1192)
at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$submitMissingTasks(DAGScheduler.scala:847)
at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$submitStage(DAGScheduler.scala:778)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$org$apache$spark$scheduler$DAGScheduler$$submitStage$4.apply(DAGScheduler.scala:781)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$org$apache$spark$scheduler$DAGScheduler$$submitStage$4.apply(DAGScheduler.scala:780)
at scala.collection.immutable.List.foreach(List.scala:318)
at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$submitStage(DAGScheduler.scala:780)
at org.apache.spark.scheduler.DAGScheduler.handleJobSubmitted(DAGScheduler.scala:762)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1362)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1354)
at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48)

Now I have fixed it: by declaring my custom comparators as Serializable (I checked the standard library code: the case-insensitive String comparator is declared Serializable, so that makes sense).

But why? Why should I not use lambdas here? I would have expected the second and the last one to work, since they only use static methods and classes.

What I find especially odd is that I did register the classes I was trying to serialize with Kryo, and classes I did not register can be serialized without trouble by their default associated serializer (Kryo associates FieldSerializer as the default for most classes). However, the Kryo registrator is never executed before the task serialization fails.
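For reference, the Kryo setup is along these lines; a minimal sketch, with the registered class list being illustrative rather than my exact configuration:

// Sketch of a typical Kryo configuration: the data serializer is switched to Kryo
// and a few classes are registered with it (illustrative list).
SparkConf conf = new SparkConf()
        .setAppName("test")
        .setMaster("local")
        .set("spark.serializer", "org.apache.spark.serializer.KryoSerializer")
        .registerKryoClasses(new Class<?>[]{
                StringComparator.class,
                SerializableStringComparator.class
        });
JavaSparkContext sc = new JavaSparkContext(conf);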

Best Answer

My question did not clearly state why I was so baffled (about the Kryo registration code not being executed), so I edited it to reflect that.

I found out that Spark uses two different serializers:

  • one for serializing the tasks from the master to the slaves, called closureSerializer in the code (see SparkEnv.scala). At the time of this post it can only be set to JavaSerializer.

  • one for serializing the actual data being processed, called serializer in SparkEnv. It can be set to either JavaSerializer or KryoSerializer.

Registering a class with Kryo does not ensure it will always be serialized with Kryo; it depends on how the class is used. For example, the DAGScheduler only uses the closureSerializer, so no matter how you configure serialization, objects that are manipulated by the DAGScheduler at some point always need to be Java-serializable (unless Spark enables Kryo serialization for closures in a later version).
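Concretely, for the snippet above, the comparator handed to repartitionAndSortWithinPartitions has to be Java-serializable itself (as SerializableStringComparator is). A lambda can be made to satisfy that too; here is a minimal sketch using a plain Java 8 intersection cast, nothing Spark-specific:

// The intersection cast forces the lambda's generated class to implement Serializable,
// so the closure serializer (JavaSerializer) can serialize the task that captures it.
Comparator<String> comparator =
        (Comparator<String> & Serializable) (s1, s2) -> Integer.compare(s1.charAt(0), s2.charAt(0));

doSomething(sc, list, partitioner, comparator); // this variant is expected to print "success"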

For this question on Spark serialization oddities, the original thread is on Stack Overflow: https://stackoverflow.com/questions/30471433/
