
java - Unread block data when reading from Java Spark


I am trying to read some files from HDFS and/or the local file system, but I get this exception:

Driver stacktrace:
org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 0.0 failed 4 times, most recent failure: Lost task 0.3 in stage 0.0 (TID 6, C-4073.CM.ES, executor 1): java.lang.IllegalStateException: unread block data
at java.io.ObjectInputStream$BlockDataInputStream.setBlockDataMode(ObjectInputStream.java:2421)
at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1382)
at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:1990)
at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1915)
at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1798)
at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1350)
at java.io.ObjectInputStream.readObject(ObjectInputStream.java:370)
at org.apache.spark.serializer.JavaDeserializationStream.readObject(JavaSerializer.scala:76)
at org.apache.spark.serializer.JavaSerializerInstance.deserialize(JavaSerializer.scala:115)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:222)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)

Code to read from the local file system:

    JavaRDD<String> textFile = sc.textFile("file:///PathToFile");

Code to read from HDFS:

    JavaRDD<String> textFile = sc.textFile("hdfs:///PathToFile");
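
For context, sc above is a JavaSparkContext. A minimal driver setup around those calls might look like this (the app name is a placeholder, not from the original post; the master is normally supplied via spark-submit):

    import org.apache.spark.SparkConf;
    import org.apache.spark.api.java.JavaRDD;
    import org.apache.spark.api.java.JavaSparkContext;

    // Placeholder app name; the master (e.g. yarn) is usually passed to spark-submit.
    SparkConf conf = new SparkConf().setAppName("read-example");
    JavaSparkContext sc = new JavaSparkContext(conf);
    JavaRDD<String> textFile = sc.textFile("hdfs:///PathToFile");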

I have been searching around, and users usually say this error can be caused by mismatched Java versions, but I have already checked that:

My cluster:

$ java -version
java version "1.7.0_67"
Java(TM) SE Runtime Environment (build 1.7.0_67-b01)
Java HotSpot(TM) 64-Bit Server VM (build 24.65-b04, mixed mode)

My local machine:

$ java -version
java version "1.7.0_76"
Java(TM) SE Runtime Environment (build 1.7.0_76-b13)
Java HotSpot(TM) 64-Bit Server VM (build 24.76-b04, mixed mode)

My pom.xml:

    <properties>
        <jdk.version>1.7</jdk.version>
        <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
        <project.reporting.outputEncoding>UTF-8</project.reporting.outputEncoding>
        <springboot.version>1.5.3.RELEASE</springboot.version>
        <springint.version>4.3.10.RELEASE</springint.version>
        <cdh.version>5.10.1</cdh.version>
        <solr.version>4.10.3-cdh${cdh.version}</solr.version>
        <hbase.version>1.2.0-cdh${cdh.version}</hbase.version>
        <kafka.version>0.9.0-kafka-2.0.2</kafka.version>
        <rt-framework.version>2.3.5</rt-framework.version>
        <tas.version>4.0.0</tas.version>
    </properties>

I am not sure whether my problem is related to the Java versions, since I have no problems writing to or reading from Kafka, or querying Hive.

Thanks in advance, and sorry for my bad English.

Best Answer

OK, these two lines are very different:

    JavaRDD<String> textFile = sc.textFile("file:///PathToFile");
    JavaRDD<String> textFile = sc.textFile("hdfs:///PathToFile");

The first line ("file:///...") assumes your file is available on every machine under the same path, and that those files are actually identical. Otherwise all kinds of spooky things happen during partitioning/reading.

The second line says you are trying to read from the preconfigured HDFS, which is actually fine.
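
If fs.defaultFS is not available on the driver's classpath, the same read can also be spelled out against an explicit NameNode (the host and port below are placeholders for your cluster):

    // Hypothetical NameNode address; hdfs:///... relies on fs.defaultFS instead.
    JavaRDD<String> textFile = sc.textFile("hdfs://namenode.example.com:8020/PathToFile");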

If you want to read some local file on the master machine, just do something like this:

    List<String> myData = ...   // build the list on the driver however you like
    JavaRDD<String> myRdd = sc.parallelize(myData);

More details here: https://spark.apache.org/docs/2.2.0/api/java/org/apache/spark/SparkContext.html#parallelize-scala.collection.Seq-int-scala.reflect.ClassTag-
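
For completeness, here is a self-contained sketch of that approach: read a file that exists only on the master into a driver-side list, then parallelize it. The class name and path are placeholders; readAllLines with an explicit charset keeps it Java 7 compatible.

    import java.nio.charset.StandardCharsets;
    import java.nio.file.Files;
    import java.nio.file.Paths;
    import java.util.List;
    import org.apache.spark.SparkConf;
    import org.apache.spark.api.java.JavaRDD;
    import org.apache.spark.api.java.JavaSparkContext;

    public class LocalFileToRdd {
        public static void main(String[] args) throws Exception {
            SparkConf conf = new SparkConf().setAppName("local-file-to-rdd");
            JavaSparkContext sc = new JavaSparkContext(conf);

            // Read the file on the driver only; executors never touch the local path.
            List<String> myData =
                    Files.readAllLines(Paths.get("/PathToFile"), StandardCharsets.UTF_8);

            // Ship the in-memory list to the executors as an RDD.
            JavaRDD<String> myRdd = sc.parallelize(myData);
            System.out.println(myRdd.count());

            sc.stop();
        }
    }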

Regarding "java - Unread block data when reading from Java Spark", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/46809973/
