After restarting the Spark HistoryServer it fails to come back up. We are running CDH 5.3.1 with Spark 1.2. I checked the Spark HistoryServer log and found the following messages:
2015-05-21 11:38:03,790 WARN org.apache.spark.scheduler.ReplayListenerBus: Log path provided contains no log files.
2015-05-21 11:38:52,319 INFO org.apache.spark.deploy.history.HistoryServer: Registered signal handlers for [TERM, HUP, INT]
2015-05-21 11:38:52,328 WARN org.apache.spark.deploy.history.HistoryServerArguments: Setting log directory through the command line is deprecated as of Spark 1.1.0. Please set this through spark.history.fs.logDirectory instead.
2015-05-21 11:38:52,461 INFO org.apache.spark.SecurityManager: Changing view acls to: spark
2015-05-21 11:38:52,462 INFO org.apache.spark.SecurityManager: Changing modify acls to: spark
2015-05-21 11:38:52,463 INFO org.apache.spark.SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(spark); users with modify permissions: Set(spark)
2015-05-21 11:41:24,893 ERROR org.apache.spark.deploy.history.HistoryServer: RECEIVED SIGNAL 15: SIGTERM
2015-05-21 11:41:33,439 INFO org.apache.spark.deploy.history.HistoryServer: Registered signal handlers for [TERM, HUP, INT]
2015-05-21 11:41:33,447 WARN org.apache.spark.deploy.history.HistoryServerArguments: Setting log directory through the command line is deprecated as of Spark 1.1.0. Please set this through spark.history.fs.logDirectory instead.
2015-05-21 11:41:33,578 INFO org.apache.spark.SecurityManager: Changing view acls to: spark
2015-05-21 11:41:33,579 INFO org.apache.spark.SecurityManager: Changing modify acls to: spark
2015-05-21 11:41:33,579 INFO org.apache.spark.SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(spark); users with modify permissions: Set(spark)
2015-05-21 11:44:07,147 WARN org.apache.hadoop.hdfs.BlockReaderFactory: I/O error constructing remote block reader.
java.io.EOFException: Premature EOF: no length prefix available
at org.apache.hadoop.hdfs.protocolPB.PBHelper.vintPrefixed(PBHelper.java:2109)
at org.apache.hadoop.hdfs.RemoteBlockReader2.newBlockReader(RemoteBlockReader2.java:408)
at org.apache.hadoop.hdfs.BlockReaderFactory.getRemoteBlockReader(BlockReaderFactory.java:785)
at org.apache.hadoop.hdfs.BlockReaderFactory.getRemoteBlockReaderFromTcp(BlockReaderFactory.java:663)
at org.apache.hadoop.hdfs.BlockReaderFactory.build(BlockReaderFactory.java:327)
at org.apache.hadoop.hdfs.DFSInputStream.blockSeekTo(DFSInputStream.java:574)
at org.apache.hadoop.hdfs.DFSInputStream.readWithStrategy(DFSInputStream.java:797)
at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:844)
at java.io.DataInputStream.read(DataInputStream.java:149)
at java.io.BufferedInputStream.read1(BufferedInputStream.java:273)
at java.io.BufferedInputStream.read(BufferedInputStream.java:334)
at sun.nio.cs.StreamDecoder.readBytes(StreamDecoder.java:283)
at sun.nio.cs.StreamDecoder.implRead(StreamDecoder.java:325)
at sun.nio.cs.StreamDecoder.read(StreamDecoder.java:177)
at java.io.InputStreamReader.read(InputStreamReader.java:184)
at java.io.BufferedReader.fill(BufferedReader.java:154)
at java.io.BufferedReader.readLine(BufferedReader.java:317)
at java.io.BufferedReader.readLine(BufferedReader.java:382)
at scala.io.BufferedSource$BufferedLineIterator.hasNext(BufferedSource.scala:67)
at scala.collection.Iterator$class.foreach(Iterator.scala:727)
at scala.collection.AbstractIterator.foreach(Iterator.scala:1157)
at org.apache.spark.scheduler.ReplayListenerBus$$anonfun$replay$2.apply(ReplayListenerBus.scala:69)
at org.apache.spark.scheduler.ReplayListenerBus$$anonfun$replay$2.apply(ReplayListenerBus.scala:55)
at scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33)
at scala.collection.mutable.WrappedArray.foreach(WrappedArray.scala:34)
at org.apache.spark.scheduler.ReplayListenerBus.replay(ReplayListenerBus.scala:55)
at org.apache.spark.deploy.history.FsHistoryProvider$$anonfun$5.apply(FsHistoryProvider.scala:175)
at org.apache.spark.deploy.history.FsHistoryProvider$$anonfun$5.apply(FsHistoryProvider.scala:172)
at scala.collection.TraversableLike$$anonfun$flatMap$1.apply(TraversableLike.scala:251)
at scala.collection.TraversableLike$$anonfun$flatMap$1.apply(TraversableLike.scala:251)
at scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33)
at scala.collection.mutable.WrappedArray.foreach(WrappedArray.scala:34)
at scala.collection.TraversableLike$class.flatMap(TraversableLike.scala:251)
at scala.collection.AbstractTraversable.flatMap(Traversable.scala:105)
at org.apache.spark.deploy.history.FsHistoryProvider.org$apache$spark$deploy$history$FsHistoryProvider$$checkForLogs(FsHistoryProvider.scala:172)
at org.apache.spark.deploy.history.FsHistoryProvider.initialize(FsHistoryProvider.scala:108)
at org.apache.spark.deploy.history.FsHistoryProvider.<init>(FsHistoryProvider.scala:91)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
at org.apache.spark.deploy.history.HistoryServer$.main(HistoryServer.scala:184)
at org.apache.spark.deploy.history.HistoryServer.main(HistoryServer.scala)
2015-05-21 11:44:07,151 WARN org.apache.hadoop.hdfs.DFSClient: Failed to connect to /10.1.1.253:50010 for block, add to deadNodes and continue. java.io.EOFException: Premature EOF: no length prefix available
java.io.EOFException: Premature EOF: no length prefix available
        (stack trace identical to the first EOFException above)
2015-05-21 11:44:07,161 INFO org.apache.hadoop.hdfs.DFSClient: Successfully connected to /10.1.1.190:50010 for BP-1877157801-10.1.1.42-1366756660926:blk_1104141398_1099642456200
2015-05-21 11:44:19,946 WARN org.apache.hadoop.hdfs.BlockReaderFactory: I/O error constructing remote block reader.
java.io.EOFException: Premature EOF: no length prefix available
        (stack trace identical to the first EOFException above)
2015-05-21 11:44:19,947 WARN org.apache.hadoop.hdfs.DFSClient: Failed to connect to /10.1.1.253:50010 for block, add to deadNodes and continue. java.io.EOFException: Premature EOF: no length prefix available
java.io.EOFException: Premature EOF: no length prefix available
        (stack trace identical to the first EOFException above)
2015-05-21 11:44:19,950 INFO org.apache.hadoop.hdfs.DFSClient: Successfully connected to /10.1.1.35:50010 for BP-1877157801-10.1.1.42-1366756660926:blk_1104192564_1099642507371
In addition, the log also contains:
2015-05-21 11:38:03,789 WARN org.apache.spark.scheduler.ReplayListenerBus: Log path provided contains no log files.
2015-05-21 11:38:03,789 WARN org.apache.spark.scheduler.ReplayListenerBus: Log path provided contains no log files.
2015-05-21 11:38:03,789 ERROR org.apache.spark.deploy.history.FsHistoryProvider: Exception in accessing modification time of hdfs://my-hadoop/user/spark/applicationHistory/application_1431044565372_11706
java.io.IOException: Filesystem closed
at org.apache.hadoop.hdfs.DFSClient.checkOpen(DFSClient.java:765)
at org.apache.hadoop.hdfs.DFSClient.listPaths(DFSClient.java:1900)
at org.apache.hadoop.hdfs.DFSClient.listPaths(DFSClient.java:1885)
at org.apache.hadoop.hdfs.DistributedFileSystem.listStatusInternal(DistributedFileSystem.java:654)
at org.apache.hadoop.hdfs.DistributedFileSystem.access$600(DistributedFileSystem.java:104)
at org.apache.hadoop.hdfs.DistributedFileSystem$14.doCall(DistributedFileSystem.java:716)
at org.apache.hadoop.hdfs.DistributedFileSystem$14.doCall(DistributedFileSystem.java:712)
at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
at org.apache.hadoop.hdfs.DistributedFileSystem.listStatus(DistributedFileSystem.java:712)
at org.apache.spark.deploy.history.FsHistoryProvider.org$apache$spark$deploy$history$FsHistoryProvider$$getModificationTime(FsHistoryProvider.scala:236)
at org.apache.spark.deploy.history.FsHistoryProvider$$anonfun$5.apply(FsHistoryProvider.scala:182)
at org.apache.spark.deploy.history.FsHistoryProvider$$anonfun$5.apply(FsHistoryProvider.scala:172)
at scala.collection.TraversableLike$$anonfun$flatMap$1.apply(TraversableLike.scala:251)
at scala.collection.TraversableLike$$anonfun$flatMap$1.apply(TraversableLike.scala:251)
at scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33)
at scala.collection.mutable.WrappedArray.foreach(WrappedArray.scala:34)
at scala.collection.TraversableLike$class.flatMap(TraversableLike.scala:251)
at scala.collection.AbstractTraversable.flatMap(Traversable.scala:105)
at org.apache.spark.deploy.history.FsHistoryProvider.org$apache$spark$deploy$history$FsHistoryProvider$$checkForLogs(FsHistoryProvider.scala:172)
at org.apache.spark.deploy.history.FsHistoryProvider.initialize(FsHistoryProvider.scala:108)
at org.apache.spark.deploy.history.FsHistoryProvider.<init>(FsHistoryProvider.scala:91)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
at org.apache.spark.deploy.history.HistoryServer$.main(HistoryServer.scala:184)
at org.apache.spark.deploy.history.HistoryServer.main(HistoryServer.scala)
2015-05-21 11:38:03,790 ERROR org.apache.spark.scheduler.EventLoggingListener: Exception in parsing logging info from directory hdfs://my-hadoop/user/spark/applicationHistory/application_1431044565372_12048
java.io.IOException: Filesystem closed
at org.apache.hadoop.hdfs.DFSClient.checkOpen(DFSClient.java:765)
at org.apache.hadoop.hdfs.DFSClient.listPaths(DFSClient.java:1900)
at org.apache.hadoop.hdfs.DFSClient.listPaths(DFSClient.java:1885)
at org.apache.hadoop.hdfs.DistributedFileSystem.listStatusInternal(DistributedFileSystem.java:654)
at org.apache.hadoop.hdfs.DistributedFileSystem.access$600(DistributedFileSystem.java:104)
at org.apache.hadoop.hdfs.DistributedFileSystem$14.doCall(DistributedFileSystem.java:716)
at org.apache.hadoop.hdfs.DistributedFileSystem$14.doCall(DistributedFileSystem.java:712)
at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
at org.apache.hadoop.hdfs.DistributedFileSystem.listStatus(DistributedFileSystem.java:712)
at org.apache.spark.scheduler.EventLoggingListener$.parseLoggingInfo(EventLoggingListener.scala:199)
at org.apache.spark.deploy.history.FsHistoryProvider.org$apache$spark$deploy$history$FsHistoryProvider$$createReplayBus(FsHistoryProvider.scala:226)
at org.apache.spark.deploy.history.FsHistoryProvider$$anonfun$5.apply(FsHistoryProvider.scala:174)
at org.apache.spark.deploy.history.FsHistoryProvider$$anonfun$5.apply(FsHistoryProvider.scala:172)
at scala.collection.TraversableLike$$anonfun$flatMap$1.apply(TraversableLike.scala:251)
at scala.collection.TraversableLike$$anonfun$flatMap$1.apply(TraversableLike.scala:251)
at scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33)
at scala.collection.mutable.WrappedArray.foreach(WrappedArray.scala:34)
at scala.collection.TraversableLike$class.flatMap(TraversableLike.scala:251)
at scala.collection.AbstractTraversable.flatMap(Traversable.scala:105)
at org.apache.spark.deploy.history.FsHistoryProvider.org$apache$spark$deploy$history$FsHistoryProvider$$checkForLogs(FsHistoryProvider.scala:172)
at org.apache.spark.deploy.history.FsHistoryProvider.initialize(FsHistoryProvider.scala:108)
at org.apache.spark.deploy.history.FsHistoryProvider.<init>(FsHistoryProvider.scala:91)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
at org.apache.spark.deploy.history.HistoryServer$.main(HistoryServer.scala:184)
at org.apache.spark.deploy.history.HistoryServer.main(HistoryServer.scala)
I tried copying /user/spark/applicationHistory to another location and pointing the configuration at the new location, but that did not help.
Any idea what is going on here?
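For reference, the deprecation warning in the log above refers to the spark.history.fs.logDirectory property. A minimal sketch of the relevant settings might look like the following; the hdfs://my-hadoop path is taken from the log, and the event-log properties are standard Spark settings included here only as an assumption about how this cluster records its history:

spark.history.fs.logDirectory   hdfs://my-hadoop/user/spark/applicationHistory
spark.eventLog.enabled          true
spark.eventLog.dir              hdfs://my-hadoop/user/spark/applicationHistory

On a CDH cluster these would normally be managed through Cloudera Manager rather than edited by hand.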
Best answer
The problem was resolved after cleaning up "/user/spark/applicationHistory/*".
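A hedged sketch of how that cleanup could be scripted with the Hadoop FileSystem API is shown below; the object name and the choice to delete rather than archive the old application directories are assumptions, not part of the original answer:

import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.fs.{FileSystem, Path}

object CleanApplicationHistory {
  def main(args: Array[String]): Unit = {
    // Picks up core-site.xml / hdfs-site.xml from the classpath,
    // so fs.defaultFS should already point at hdfs://my-hadoop.
    val conf = new Configuration()
    val fs = FileSystem.get(conf)
    val historyDir = new Path("/user/spark/applicationHistory")

    // Remove every application log directory under the history root,
    // which is what clearing "/user/spark/applicationHistory/*" amounts to.
    fs.listStatus(historyDir).foreach { status =>
      println(s"Deleting ${status.getPath}")
      fs.delete(status.getPath, true) // recursive delete
    }

    fs.close()
  }
}

With the directory emptied, a restarted HistoryServer lets FsHistoryProvider rescan a clean log directory, presumably sidestepping the corrupt or empty event logs behind the "Log path provided contains no log files" warnings.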
Regarding "hadoop - Spark HistoryServer fails to start", a similar question was found on Stack Overflow: https://stackoverflow.com/questions/30381236/