I am trying to run the example from here (the Learning Spark book), but I get the following error:
16/10/07 01:15:26 ERROR Executor: Exception in task 0.0 in stage 0.0 (TID 0)
java.lang.AbstractMethodError: com.oreilly.learningsparkexamples.mini.java.WordCount$1.call(Ljava/lang/Object;)Ljava/util/Iterator;
at org.apache.spark.api.java.JavaRDDLike$$anonfun$fn$1$1.apply(JavaRDDLike.scala:124)
My job is launched with: $SPARK_HOME/bin/spark-submit --class com.oreilly.learningsparkexamples.mini.java.WordCount ./target/learning-spark-mini-example-0.0.1.jar ./README.md ./wordcounts
Can anyone advise why this happens?
Full log:
mini-complete-example$ $SPARK_HOME/bin/spark-submit --class com.oreilly.learningsparkexamples.mini.java.WordCount ./target/learning-spark-mini-example-0.0.1.jar ./README.md ./wordcounts
Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
16/10/07 01:15:23 INFO SparkContext: Running Spark version 2.0.0
16/10/07 01:15:23 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
16/10/07 01:15:24 INFO SecurityManager: Changing view acls to: eDS
16/10/07 01:15:24 INFO SecurityManager: Changing modify acls to: eDS
16/10/07 01:15:24 INFO SecurityManager: Changing view acls groups to:
16/10/07 01:15:24 INFO SecurityManager: Changing modify acls groups to:
16/10/07 01:15:24 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(eDS); groups with view permissions: Set(); users with modify permissions: Set(eDS); groups with modify permissions: Set()
16/10/07 01:15:24 INFO Utils: Successfully started service 'sparkDriver' on port 63851.
16/10/07 01:15:24 INFO SparkEnv: Registering MapOutputTracker
16/10/07 01:15:24 INFO SparkEnv: Registering BlockManagerMaster
16/10/07 01:15:24 INFO DiskBlockManager: Created local directory at /private/var/folders/yw/zl5hc321387g3sz2fg3l01980000gq/T/blockmgr-0fb2af5a-8662-4d78-88c8-8e0608f35ff3
16/10/07 01:15:24 INFO MemoryStore: MemoryStore started with capacity 366.3 MB
16/10/07 01:15:24 INFO SparkEnv: Registering OutputCommitCoordinator
16/10/07 01:15:24 INFO Utils: Successfully started service 'SparkUI' on port 4040.
16/10/07 01:15:24 INFO SparkUI: Bound SparkUI to 0.0.0.0, and started at http://192.168.1.17:4040
16/10/07 01:15:24 INFO SparkContext: Added JAR file:/Users/eDS/dev/learning-spark/mini-complete-example/./target/learning-spark-mini-example-0.0.1.jar at spark://192.168.1.17:63851/jars/learning-spark-mini-example-0.0.1.jar with timestamp 1475795724857
16/10/07 01:15:24 INFO Executor: Starting executor ID driver on host localhost
16/10/07 01:15:24 INFO Utils: Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 63852.
16/10/07 01:15:24 INFO NettyBlockTransferService: Server created on 192.168.1.17:63852
16/10/07 01:15:24 INFO BlockManagerMaster: Registering BlockManager BlockManagerId(driver, 192.168.1.17, 63852)
16/10/07 01:15:24 INFO BlockManagerMasterEndpoint: Registering block manager 192.168.1.17:63852 with 366.3 MB RAM, BlockManagerId(driver, 192.168.1.17, 63852)
16/10/07 01:15:24 INFO BlockManagerMaster: Registered BlockManager BlockManagerId(driver, 192.168.1.17, 63852)
16/10/07 01:15:25 INFO MemoryStore: Block broadcast_0 stored as values in memory (estimated size 145.5 KB, free 366.2 MB)
16/10/07 01:15:25 INFO MemoryStore: Block broadcast_0_piece0 stored as bytes in memory (estimated size 16.3 KB, free 366.1 MB)
16/10/07 01:15:25 INFO BlockManagerInfo: Added broadcast_0_piece0 in memory on 192.168.1.17:63852 (size: 16.3 KB, free: 366.3 MB)
16/10/07 01:15:25 INFO SparkContext: Created broadcast 0 from textFile at WordCount.java:31
16/10/07 01:15:25 INFO FileInputFormat: Total input paths to process : 1
16/10/07 01:15:25 INFO deprecation: mapred.tip.id is deprecated. Instead, use mapreduce.task.id
16/10/07 01:15:25 INFO deprecation: mapred.task.id is deprecated. Instead, use mapreduce.task.attempt.id
16/10/07 01:15:25 INFO deprecation: mapred.task.is.map is deprecated. Instead, use mapreduce.task.ismap
16/10/07 01:15:25 INFO deprecation: mapred.task.partition is deprecated. Instead, use mapreduce.task.partition
16/10/07 01:15:25 INFO deprecation: mapred.job.id is deprecated. Instead, use mapreduce.job.id
16/10/07 01:15:25 INFO FileOutputCommitter: File Output Committer Algorithm version is 1
16/10/07 01:15:25 INFO SparkContext: Starting job: saveAsTextFile at WordCount.java:46
16/10/07 01:15:25 INFO DAGScheduler: Registering RDD 3 (mapToPair at WordCount.java:39)
16/10/07 01:15:25 INFO DAGScheduler: Got job 0 (saveAsTextFile at WordCount.java:46) with 2 output partitions
16/10/07 01:15:25 INFO DAGScheduler: Final stage: ResultStage 1 (saveAsTextFile at WordCount.java:46)
16/10/07 01:15:25 INFO DAGScheduler: Parents of final stage: List(ShuffleMapStage 0)
16/10/07 01:15:25 INFO DAGScheduler: Missing parents: List(ShuffleMapStage 0)
16/10/07 01:15:25 INFO DAGScheduler: Submitting ShuffleMapStage 0 (MapPartitionsRDD[3] at mapToPair at WordCount.java:39), which has no missing parents
16/10/07 01:15:25 INFO MemoryStore: Block broadcast_1 stored as values in memory (estimated size 4.9 KB, free 366.1 MB)
16/10/07 01:15:25 INFO MemoryStore: Block broadcast_1_piece0 stored as bytes in memory (estimated size 2.7 KB, free 366.1 MB)
16/10/07 01:15:25 INFO BlockManagerInfo: Added broadcast_1_piece0 in memory on 192.168.1.17:63852 (size: 2.7 KB, free: 366.3 MB)
16/10/07 01:15:25 INFO SparkContext: Created broadcast 1 from broadcast at DAGScheduler.scala:1012
16/10/07 01:15:25 INFO DAGScheduler: Submitting 2 missing tasks from ShuffleMapStage 0 (MapPartitionsRDD[3] at mapToPair at WordCount.java:39)
16/10/07 01:15:25 INFO TaskSchedulerImpl: Adding task set 0.0 with 2 tasks
16/10/07 01:15:25 INFO TaskSetManager: Starting task 0.0 in stage 0.0 (TID 0, localhost, partition 0, PROCESS_LOCAL, 5479 bytes)
16/10/07 01:15:25 INFO TaskSetManager: Starting task 1.0 in stage 0.0 (TID 1, localhost, partition 1, PROCESS_LOCAL, 5479 bytes)
16/10/07 01:15:26 INFO Executor: Running task 0.0 in stage 0.0 (TID 0)
16/10/07 01:15:26 INFO Executor: Running task 1.0 in stage 0.0 (TID 1)
16/10/07 01:15:26 INFO Executor: Fetching spark://192.168.1.17:63851/jars/learning-spark-mini-example-0.0.1.jar with timestamp 1475795724857
16/10/07 01:15:26 INFO TransportClientFactory: Successfully created connection to /192.168.1.17:63851 after 73 ms (0 ms spent in bootstraps)
16/10/07 01:15:26 INFO Utils: Fetching spark://192.168.1.17:63851/jars/learning-spark-mini-example-0.0.1.jar to /private/var/folders/yw/zl5hc321387g3sz2fg3l01980000gq/T/spark-5adda737-293c-483e-bdbe-f8fa7c171211/userFiles-2a2cf77d-5794-4006-a125-5df94550cbf8/fetchFileTemp6160431711224595115.tmp
16/10/07 01:15:26 INFO Executor: Adding file:/private/var/folders/yw/zl5hc321387g3sz2fg3l01980000gq/T/spark-5adda737-293c-483e-bdbe-f8fa7c171211/userFiles-2a2cf77d-5794-4006-a125-5df94550cbf8/learning-spark-mini-example-0.0.1.jar to class loader
16/10/07 01:15:26 INFO HadoopRDD: Input split: file:/Users/eDS/dev/learning-spark/mini-complete-example/README.md:66+66
16/10/07 01:15:26 INFO HadoopRDD: Input split: file:/Users/eDS/dev/learning-spark/mini-complete-example/README.md:0+66
16/10/07 01:15:26 ERROR Executor: Exception in task 0.0 in stage 0.0 (TID 0)
java.lang.AbstractMethodError: com.oreilly.learningsparkexamples.mini.java.WordCount$1.call(Ljava/lang/Object;)Ljava/util/Iterator;
at org.apache.spark.api.java.JavaRDDLike$$anonfun$fn$1$1.apply(JavaRDDLike.scala:124)
at org.apache.spark.api.java.JavaRDDLike$$anonfun$fn$1$1.apply(JavaRDDLike.scala:124)
at scala.collection.Iterator$$anon$12.nextCur(Iterator.scala:434)
at scala.collection.Iterator$$anon$12.hasNext(Iterator.scala:440)
at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:408)
at org.apache.spark.util.collection.ExternalSorter.insertAll(ExternalSorter.scala:192)
at org.apache.spark.shuffle.sort.SortShuffleWriter.write(SortShuffleWriter.scala:63)
at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:79)
at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:47)
at org.apache.spark.scheduler.Task.run(Task.scala:85)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:274)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
16/10/07 01:15:26 INFO Executor: Finished task 1.0 in stage 0.0 (TID 1). 1385 bytes result sent to driver
16/10/07 01:15:26 INFO TaskSetManager: Finished task 1.0 in stage 0.0 (TID 1) in 307 ms on localhost (1/2)
16/10/07 01:15:26 ERROR SparkUncaughtExceptionHandler: Uncaught exception in thread Thread[Executor task launch worker-0,5,main]
java.lang.AbstractMethodError: com.oreilly.learningsparkexamples.mini.java.WordCount$1.call(Ljava/lang/Object;)Ljava/util/Iterator;
at org.apache.spark.api.java.JavaRDDLike$$anonfun$fn$1$1.apply(JavaRDDLike.scala:124)
at org.apache.spark.api.java.JavaRDDLike$$anonfun$fn$1$1.apply(JavaRDDLike.scala:124)
at scala.collection.Iterator$$anon$12.nextCur(Iterator.scala:434)
at scala.collection.Iterator$$anon$12.hasNext(Iterator.scala:440)
at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:408)
at org.apache.spark.util.collection.ExternalSorter.insertAll(ExternalSorter.scala:192)
at org.apache.spark.shuffle.sort.SortShuffleWriter.write(SortShuffleWriter.scala:63)
at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:79)
at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:47)
at org.apache.spark.scheduler.Task.run(Task.scala:85)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:274)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
16/10/07 01:15:26 INFO SparkContext: Invoking stop() from shutdown hook
16/10/07 01:15:26 WARN TaskSetManager: Lost task 0.0 in stage 0.0 (TID 0, localhost): java.lang.AbstractMethodError: com.oreilly.learningsparkexamples.mini.java.WordCount$1.call(Ljava/lang/Object;)Ljava/util/Iterator;
at org.apache.spark.api.java.JavaRDDLike$$anonfun$fn$1$1.apply(JavaRDDLike.scala:124)
at org.apache.spark.api.java.JavaRDDLike$$anonfun$fn$1$1.apply(JavaRDDLike.scala:124)
at scala.collection.Iterator$$anon$12.nextCur(Iterator.scala:434)
at scala.collection.Iterator$$anon$12.hasNext(Iterator.scala:440)
at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:408)
at org.apache.spark.util.collection.ExternalSorter.insertAll(ExternalSorter.scala:192)
at org.apache.spark.shuffle.sort.SortShuffleWriter.write(SortShuffleWriter.scala:63)
at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:79)
at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:47)
at org.apache.spark.scheduler.Task.run(Task.scala:85)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:274)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
16/10/07 01:15:26 ERROR TaskSetManager: Task 0 in stage 0.0 failed 1 times; aborting job
16/10/07 01:15:26 INFO TaskSchedulerImpl: Removed TaskSet 0.0, whose tasks have all completed, from pool
16/10/07 01:15:26 INFO TaskSchedulerImpl: Cancelling stage 0
16/10/07 01:15:26 INFO SparkUI: Stopped Spark web UI at http://192.168.1.17:4040
16/10/07 01:15:26 INFO DAGScheduler: ShuffleMapStage 0 (mapToPair at WordCount.java:39) failed in 0.364 s
16/10/07 01:15:26 INFO DAGScheduler: Job 0 failed: saveAsTextFile at WordCount.java:46, took 0.448155 s
Exception in thread "main" org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 0.0 failed 1 times, most recent failure: Lost task 0.0 in stage 0.0 (TID 0, localhost): java.lang.AbstractMethodError: com.oreilly.learningsparkexamples.mini.java.WordCount$1.call(Ljava/lang/Object;)Ljava/util/Iterator;
at org.apache.spark.api.java.JavaRDDLike$$anonfun$fn$1$1.apply(JavaRDDLike.scala:124)
at org.apache.spark.api.java.JavaRDDLike$$anonfun$fn$1$1.apply(JavaRDDLike.scala:124)
at scala.collection.Iterator$$anon$12.nextCur(Iterator.scala:434)
at scala.collection.Iterator$$anon$12.hasNext(Iterator.scala:440)
at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:408)
at org.apache.spark.util.collection.ExternalSorter.insertAll(ExternalSorter.scala:192)
at org.apache.spark.shuffle.sort.SortShuffleWriter.write(SortShuffleWriter.scala:63)
at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:79)
at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:47)
at org.apache.spark.scheduler.Task.run(Task.scala:85)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:274)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Driver stacktrace:
at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1450)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1438)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1437)
at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1437)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:811)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:811)
at scala.Option.foreach(Option.scala:257)
at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:811)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:1659)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1618)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1607)
at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48)
at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:632)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:1871)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:1884)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:1904)
at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsHadoopDataset$1.apply$mcV$sp(PairRDDFunctions.scala:1219)
at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsHadoopDataset$1.apply(PairRDDFunctions.scala:1161)
at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsHadoopDataset$1.apply(PairRDDFunctions.scala:1161)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
at org.apache.spark.rdd.RDD.withScope(RDD.scala:358)
at org.apache.spark.rdd.PairRDDFunctions.saveAsHadoopDataset(PairRDDFunctions.scala:1161)
at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsHadoopFile$4.apply$mcV$sp(PairRDDFunctions.scala:1064)
at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsHadoopFile$4.apply(PairRDDFunctions.scala:1030)
at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsHadoopFile$4.apply(PairRDDFunctions.scala:1030)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
at org.apache.spark.rdd.RDD.withScope(RDD.scala:358)
at org.apache.spark.rdd.PairRDDFunctions.saveAsHadoopFile(PairRDDFunctions.scala:1030)
at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsHadoopFile$1.apply$mcV$sp(PairRDDFunctions.scala:956)
at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsHadoopFile$1.apply(PairRDDFunctions.scala:956)
at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsHadoopFile$1.apply(PairRDDFunctions.scala:956)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
at org.apache.spark.rdd.RDD.withScope(RDD.scala:358)
at org.apache.spark.rdd.PairRDDFunctions.saveAsHadoopFile(PairRDDFunctions.scala:955)
at org.apache.spark.rdd.RDD$$anonfun$saveAsTextFile$1.apply$mcV$sp(RDD.scala:1440)
at org.apache.spark.rdd.RDD$$anonfun$saveAsTextFile$1.apply(RDD.scala:1419)
at org.apache.spark.rdd.RDD$$anonfun$saveAsTextFile$1.apply(RDD.scala:1419)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
at org.apache.spark.rdd.RDD.withScope(RDD.scala:358)
at org.apache.spark.rdd.RDD.saveAsTextFile(RDD.scala:1419)
at org.apache.spark.api.java.JavaRDDLike$class.saveAsTextFile(JavaRDDLike.scala:549)
at org.apache.spark.api.java.AbstractJavaRDDLike.saveAsTextFile(JavaRDDLike.scala:45)
at com.oreilly.learningsparkexamples.mini.java.WordCount.main(WordCount.java:46)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:729)
at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:185)
at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:210)
at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:124)
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
Caused by: java.lang.AbstractMethodError: com.oreilly.learningsparkexamples.mini.java.WordCount$1.call(Ljava/lang/Object;)Ljava/util/Iterator;
at org.apache.spark.api.java.JavaRDDLike$$anonfun$fn$1$1.apply(JavaRDDLike.scala:124)
at org.apache.spark.api.java.JavaRDDLike$$anonfun$fn$1$1.apply(JavaRDDLike.scala:124)
at scala.collection.Iterator$$anon$12.nextCur(Iterator.scala:434)
at scala.collection.Iterator$$anon$12.hasNext(Iterator.scala:440)
at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:408)
at org.apache.spark.util.collection.ExternalSorter.insertAll(ExternalSorter.scala:192)
at org.apache.spark.shuffle.sort.SortShuffleWriter.write(SortShuffleWriter.scala:63)
at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:79)
at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:47)
at org.apache.spark.scheduler.Task.run(Task.scala:85)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:274)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
16/10/07 01:15:26 ERROR LiveListenerBus: SparkListenerBus has already stopped! Dropping event SparkListenerJobEnd(0,1475795726327,JobFailed(org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 0.0 failed 1 times, most recent failure: Lost task 0.0 in stage 0.0 (TID 0, localhost): java.lang.AbstractMethodError: com.oreilly.learningsparkexamples.mini.java.WordCount$1.call(Ljava/lang/Object;)Ljava/util/Iterator;
at org.apache.spark.api.java.JavaRDDLike$$anonfun$fn$1$1.apply(JavaRDDLike.scala:124)
at org.apache.spark.api.java.JavaRDDLike$$anonfun$fn$1$1.apply(JavaRDDLike.scala:124)
at scala.collection.Iterator$$anon$12.nextCur(Iterator.scala:434)
at scala.collection.Iterator$$anon$12.hasNext(Iterator.scala:440)
at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:408)
at org.apache.spark.util.collection.ExternalSorter.insertAll(ExternalSorter.scala:192)
at org.apache.spark.shuffle.sort.SortShuffleWriter.write(SortShuffleWriter.scala:63)
at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:79)
at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:47)
at org.apache.spark.scheduler.Task.run(Task.scala:85)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:274)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Driver stacktrace:))
16/10/07 01:15:26 INFO MapOutputTrackerMasterEndpoint: MapOutputTrackerMasterEndpoint stopped!
16/10/07 01:15:26 INFO MemoryStore: MemoryStore cleared
16/10/07 01:15:26 INFO BlockManager: BlockManager stopped
16/10/07 01:15:26 INFO BlockManagerMaster: BlockManagerMaster stopped
16/10/07 01:15:26 INFO OutputCommitCoordinator$OutputCommitCoordinatorEndpoint: OutputCommitCoordinator stopped!
16/10/07 01:15:26 INFO SparkContext: Successfully stopped SparkContext
16/10/07 01:15:26 INFO ShutdownHookManager: Shutdown hook called
16/10/07 01:15:26 INFO ShutdownHookManager: Deleting directory /private/var/folders/yw/zl5hc321387g3sz2fg3l01980000gq/T/spark-5adda737-293c-483e-bdbe-f8fa7c171211
mini-complete-example$
Best answer
I ran into the same error:
ERROR Executor: Exception in task 0.0 in stage 0.0 (TID 0)
and
java.lang.AbstractMethodError: ...
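This AbstractMethodError usually means the example was compiled against a different major Spark version than the one running it. Concretely, Spark 2.0 changed the Java API: FlatMapFunction.call() now returns a java.util.Iterator<T>, where Spark 1.x returned an Iterable<T>. That is exactly the Ljava/util/Iterator; return type in the error signature above: the 2.0 runtime looks for the new method on the 1.x-compiled class and finds nothing. So if you rebuild against Spark 2.x, the word-splitting function also needs the new signature. A minimal sketch of the 2.x form (the variable names here are illustrative, not the book's exact code, assuming input is a JavaRDD<String> read via sc.textFile):

import java.util.Arrays;
import java.util.Iterator;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.function.FlatMapFunction;

// Spark 2.x contract: call() returns an Iterator, not an Iterable as in 1.x.
JavaRDD<String> words = input.flatMap(
    new FlatMapFunction<String, String>() {
        @Override
        public Iterator<String> call(String line) {
            // Split each line on spaces and return an iterator over the tokens.
            return Arrays.asList(line.split(" ")).iterator();
        }
    });

With that in place, the other half of the fix is making the compile-time and runtime Spark versions agree.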
First, check the Spark version of the jar at the following path:
%SPARK_HOME%/jars/spark-core_x.xx-y.y.y.jar
and modify the pom.xml file to match, as shown below (x.xx is the Scala version suffix from the jar name, y.y.y the Spark version):
<dependency> <!-- Spark dependency -->
<groupId>org.apache.spark</groupId>
<artifactId>spark-core_x.xx</artifactId>
<version>y.y.y</version>
<scope>provided</scope>
</dependency>
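For the setup in the log above, which reports "Running Spark version 2.0.0" (and Spark 2.0.x builds against Scala 2.11 by default), the filled-in dependency would presumably look like this; verify against the actual jar name in your jars directory (e.g. spark-core_2.11-2.0.0.jar) rather than relying on the default:

<dependency> <!-- Spark dependency, matching the Spark 2.0.0 runtime -->
  <groupId>org.apache.spark</groupId>
  <artifactId>spark-core_2.11</artifactId>
  <version>2.0.0</version>
  <scope>provided</scope>
</dependency>

Then rebuild the jar (e.g. mvn clean package) and re-run spark-submit.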
Hope this helps anyone who runs into the same problem.
About apache-spark - Apache Spark: ERROR Executor -> Iterator, a similar question was found on Stack Overflow: https://stackoverflow.com/questions/39907104/