I am trying to launch a Spark job on a cluster in yarn-client mode. I have already tried spark-shell with YARN, and I can start the application that way. However, I also want to be able to run the driver from Eclipse while the tasks run on the cluster. I additionally uploaded the spark-assembly jar to HDFS and pointed to it by adding the HADOOP_CONF_DIR environment variable to Eclipse, although I am not sure this is the best way to solve the problem.
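For reference, a minimal sketch of what the driver-side setup might look like when launched from Eclipse. The master URL, jar path, app name, and executor memory are taken from the question and the logs below; the object name and overall structure are illustrative, not the actual code:

```scala
import org.apache.spark.{SparkConf, SparkContext}

object DriverFromEclipse {
  def main(args: Array[String]): Unit = {
    // HADOOP_CONF_DIR must be set in the Eclipse run configuration
    // so the driver can locate the YARN ResourceManager.
    val conf = new SparkConf()
      .setAppName("Sparca Application")
      .setMaster("yarn-client")
      .set("spark.executor.memory", "1g")
      // Spark assembly uploaded to HDFS, as described above
      .set("spark.yarn.jar",
           "hdfs://vanpghdcn1.pgdev.sap.corp:8020/data/spark-assembly-1.4.0-hadoop2.6.0.jar")

    val sc = new SparkContext(conf)
    // ... job logic here ...
    sc.stop()
  }
}
```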
My application does launch on the cluster (I can see it in the Resource Manager's monitor), and it completes "successfully", but no results are returned to the driver. I see the following exception in the Eclipse console:
WARN 10:11:08,375 Logging.scala:71 -- Lost task 0.0 in stage 1.0 (TID 1, vanpghdcn2.pgdev.sap.corp): java.lang.NullPointerException
at org.apache.spark.sql.execution.RDDConversions$$anonfun$rowToRowRdd$1.apply(ExistingRDD.scala:56)
at org.apache.spark.sql.execution.RDDConversions$$anonfun$rowToRowRdd$1.apply(ExistingRDD.scala:55)
at org.apache.spark.rdd.RDD$$anonfun$mapPartitions$1$$anonfun$apply$17.apply(RDD.scala:686)
at org.apache.spark.rdd.RDD$$anonfun$mapPartitions$1$$anonfun$apply$17.apply(RDD.scala:686)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:35)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:277)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:244)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:35)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:277)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:244)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:35)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:277)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:244)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:35)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:277)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:244)
at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:70)
at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41)
at org.apache.spark.scheduler.Task.run(Task.scala:70)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:213)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
ERROR 10:11:08,522 Logging.scala:75 -- Task 0 in stage 1.0 failed 4 times; aborting job
INFO 10:11:08,538 SparkUtils.scala:67 -- SparkContext stopped
2015-10-22 10:10:42,603 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue: assignedContainer application attempt=appattempt_1445462013958_0012_000001 container=Container: [ContainerId: container_1445462013958_0012_01_000001, NodeId: vanpghdcn3.pgdev.sap.corp:41419, NodeHttpAddress: vanpghdcn3.pgdev.sap.corp:8042, Resource: <memory:1024, vCores:1>, Priority: 0, Token: null, ] queue=default: capacity=1.0, absoluteCapacity=1.0, usedResources=<memory:0, vCores:0>, usedCapacity=0.0, absoluteUsedCapacity=0.0, numApps=1, numContainers=0 clusterResource=<memory:24576, vCores:24>
2015-10-22 10:10:42,603 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: Re-sorting assigned queue: root.default stats: default: capacity=1.0, absoluteCapacity=1.0, usedResources=<memory:1024, vCores:1>, usedCapacity=0.041666668, absoluteUsedCapacity=0.041666668, numApps=1, numContainers=1
2015-10-22 10:10:42,603 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: assignedContainer queue=root usedCapacity=0.041666668 absoluteUsedCapacity=0.041666668 used=<memory:1024, vCores:1> cluster=<memory:24576, vCores:24>
2015-10-22 10:10:42,604 INFO org.apache.hadoop.yarn.server.resourcemanager.security.NMTokenSecretManagerInRM: Sending NMToken for nodeId : vanpghdcn3.pgdev.sap.corp:41419 for container : container_1445462013958_0012_01_000001
2015-10-22 10:10:42,606 INFO org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl: container_1445462013958_0012_01_000001 Container Transitioned from ALLOCATED to ACQUIRED
2015-10-22 10:10:42,606 INFO org.apache.hadoop.yarn.server.resourcemanager.security.NMTokenSecretManagerInRM: Clear node set for appattempt_1445462013958_0012_000001
2015-10-22 10:10:42,606 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: Storing attempt: AppId: application_1445462013958_0012 AttemptId: appattempt_1445462013958_0012_000001 MasterContainer: Container: [ContainerId: container_1445462013958_0012_01_000001, NodeId: vanpghdcn3.pgdev.sap.corp:41419, NodeHttpAddress: vanpghdcn3.pgdev.sap.corp:8042, Resource: <memory:1024, vCores:1>, Priority: 0, Token: Token { kind: ContainerToken, service: 10.165.28.145:41419 }, ]
2015-10-22 10:10:42,606 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1445462013958_0012_000001 State change from SCHEDULED to ALLOCATED_SAVING
2015-10-22 10:10:42,606 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1445462013958_0012_000001 State change from ALLOCATED_SAVING to ALLOCATED
2015-10-22 10:10:42,606 INFO org.apache.hadoop.yarn.server.resourcemanager.amlauncher.AMLauncher: Launching masterappattempt_1445462013958_0012_000001
2015-10-22 10:10:42,608 INFO org.apache.hadoop.yarn.server.resourcemanager.amlauncher.AMLauncher: Setting up container Container: [ContainerId: container_1445462013958_0012_01_000001, NodeId: vanpghdcn3.pgdev.sap.corp:41419, NodeHttpAddress: vanpghdcn3.pgdev.sap.corp:8042, Resource: <memory:1024, vCores:1>, Priority: 0, Token: Token { kind: ContainerToken, service: 10.165.28.145:41419 }, ] for AM appattempt_1445462013958_0012_000001
2015-10-22 10:10:42,608 INFO org.apache.hadoop.yarn.server.resourcemanager.amlauncher.AMLauncher: Command to launch container container_1445462013958_0012_01_000001 : {{JAVA_HOME}}/bin/java,-server,-Xmx512m,-Djava.io.tmpdir={{PWD}}/tmp,'-Dspark.eventLog.dir=','-Dspark.driver.port=57819','-Dspark.app.name=Sparca Application','-Dspark.executor.memory=1g','-Dspark.master=yarn-client','-Dspark.executor.id=driver','-Dspark.externalBlockStore.folderName=spark-10391661-8d35-40d9-8242-fe79bdc19d2d','-Dspark.fileserver.uri=http://10.161.43.118:57820','-Dspark.driver.appUIAddress=http://10.161.43.118:4040','-Dspark.driver.host=10.161.43.118','-Dspark.eventLog.enabled=false','-Dspark.yarn.jar=hdfs://vanpghdcn1.pgdev.sap.corp:8020/data/spark-assembly-1.4.0-hadoop2.6.0.jar','-Dspark.cores.max=6',-Dspark.yarn.app.container.log.dir=<LOG_DIR>,org.apache.spark.deploy.yarn.ExecutorLauncher,--arg,'10.161.43.118:57819',--executor-memory,1024m,--executor-cores,1,--num-executors ,2,1>,<LOG_DIR>/stdout,2>,<LOG_DIR>/stderr
2015-10-22 10:10:42,608 INFO org.apache.hadoop.yarn.server.resourcemanager.security.AMRMTokenSecretManager: Create AMRMToken for ApplicationAttempt: appattempt_1445462013958_0012_000001
2015-10-22 10:10:42,608 INFO org.apache.hadoop.yarn.server.resourcemanager.security.AMRMTokenSecretManager: Creating password for appattempt_1445462013958_0012_000001
2015-10-22 10:10:42,640 INFO org.apache.hadoop.yarn.server.resourcemanager.amlauncher.AMLauncher: Done launching container Container: [ContainerId: container_1445462013958_0012_01_000001, NodeId: vanpghdcn3.pgdev.sap.corp:41419, NodeHttpAddress: vanpghdcn3.pgdev.sap.corp:8042, Resource: <memory:1024, vCores:1>, Priority: 0, Token: Token { kind: ContainerToken, service: 10.165.28.145:41419 }, ] for AM appattempt_1445462013958_0012_000001
2015-10-22 10:10:42,640 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1445462013958_0012_000001 State change from ALLOCATED to LAUNCHED
2015-10-22 10:10:43,613 INFO org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl: container_1445462013958_0012_01_000001 Container Transitioned from ACQUIRED to RUNNING
2015-10-22 10:10:48,176 INFO SecurityLogger.org.apache.hadoop.ipc.Server: Auth successful for appattempt_1445462013958_0012_000001 (auth:SIMPLE)
2015-10-22 10:10:48,188 INFO org.apache.hadoop.yarn.server.resourcemanager.ApplicationMasterService: AM registration appattempt_1445462013958_0012_000001
2015-10-22 10:10:48,188 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=hdfs IP=10.165.28.145 OPERATION=Register App Master TARGET=ApplicationMasterService RESULT=SUCCESS APPID=application_1445462013958_0012 APPATTEMPTID=appattempt_1445462013958_0012_000001
2015-10-22 10:10:48,188 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1445462013958_0012_000001 State change from LAUNCHED to RUNNING
2015-10-22 10:10:48,188 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: application_1445462013958_0012 State change from ACCEPTED to RUNNING
2015-10-22 10:10:48,632 INFO org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl: container_1445462013958_0012_01_000002 Container Transitioned from NEW to ALLOCATED
2015-10-22 10:10:48,632 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=hdfs OPERATION=AM Allocated Container TARGET=SchedulerApp RESULT=SUCCESS APPID=application_1445462013958_0012 CONTAINERID=container_1445462013958_0012_01_000002
2015-10-22 10:10:48,632 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerNode: Assigned container container_1445462013958_0012_01_000002 of capacity <memory:2048, vCores:1> on host vanpghdcn3.pgdev.sap.corp:41419, which has 2 containers, <memory:3072, vCores:2> used and <memory:5120, vCores:6> available after allocation
2015-10-22 10:10:48,632 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue: assignedContainer application attempt=appattempt_1445462013958_0012_000001 container=Container: [ContainerId: container_1445462013958_0012_01_000002, NodeId: vanpghdcn3.pgdev.sap.corp:41419, NodeHttpAddress: vanpghdcn3.pgdev.sap.corp:8042, Resource: <memory:2048, vCores:1>, Priority: 1, Token: null, ] queue=default: capacity=1.0, absoluteCapacity=1.0, usedResources=<memory:1024, vCores:1>, usedCapacity=0.041666668, absoluteUsedCapacity=0.041666668, numApps=1, numContainers=1 clusterResource=<memory:24576, vCores:24>
2015-10-22 10:10:48,633 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: Re-sorting assigned queue: root.default stats: default: capacity=1.0, absoluteCapacity=1.0, usedResources=<memory:3072, vCores:2>, usedCapacity=0.125, absoluteUsedCapacity=0.125, numApps=1, numContainers=2
2015-10-22 10:10:48,633 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: assignedContainer queue=root usedCapacity=0.125 absoluteUsedCapacity=0.125 used=<memory:3072, vCores:2> cluster=<memory:24576, vCores:24>
2015-10-22 10:10:48,819 INFO org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl: container_1445462013958_0012_01_000003 Container Transitioned from NEW to ALLOCATED
2015-10-22 10:10:48,819 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=hdfs OPERATION=AM Allocated Container TARGET=SchedulerApp RESULT=SUCCESS APPID=application_1445462013958_0012 CONTAINERID=container_1445462013958_0012_01_000003
2015-10-22 10:10:48,819 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerNode: Assigned container container_1445462013958_0012_01_000003 of capacity <memory:2048, vCores:1> on host vanpghdcn2.pgdev.sap.corp:36064, which has 1 containers, <memory:2048, vCores:1> used and <memory:6144, vCores:7> available after allocation
2015-10-22 10:10:48,819 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue: assignedContainer application attempt=appattempt_1445462013958_0012_000001 container=Container: [ContainerId: container_1445462013958_0012_01_000003, NodeId: vanpghdcn2.pgdev.sap.corp:36064, NodeHttpAddress: vanpghdcn2.pgdev.sap.corp:8042, Resource: <memory:2048, vCores:1>, Priority: 1, Token: null, ] queue=default: capacity=1.0, absoluteCapacity=1.0, usedResources=<memory:3072, vCores:2>, usedCapacity=0.125, absoluteUsedCapacity=0.125, numApps=1, numContainers=2 clusterResource=<memory:24576, vCores:24>
2015-10-22 10:10:48,819 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: Re-sorting assigned queue: root.default stats: default: capacity=1.0, absoluteCapacity=1.0, usedResources=<memory:5120, vCores:3>, usedCapacity=0.20833333, absoluteUsedCapacity=0.20833333, numApps=1, numContainers=3
2015-10-22 10:10:48,820 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: assignedContainer queue=root usedCapacity=0.20833333 absoluteUsedCapacity=0.20833333 used=<memory:5120, vCores:3> cluster=<memory:24576, vCores:24>
2015-10-22 10:10:53,253 INFO org.apache.hadoop.yarn.server.resourcemanager.security.NMTokenSecretManagerInRM: Sending NMToken for nodeId : vanpghdcn3.pgdev.sap.corp:41419 for container : container_1445462013958_0012_01_000002
2015-10-22 10:10:53,255 INFO org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl: container_1445462013958_0012_01_000002 Container Transitioned from ALLOCATED to ACQUIRED
2015-10-22 10:10:53,256 INFO org.apache.hadoop.yarn.server.resourcemanager.security.NMTokenSecretManagerInRM: Sending NMToken for nodeId : vanpghdcn2.pgdev.sap.corp:36064 for container : container_1445462013958_0012_01_000003
2015-10-22 10:10:53,257 INFO org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl: container_1445462013958_0012_01_000003 Container Transitioned from ALLOCATED to ACQUIRED
2015-10-22 10:10:53,643 INFO org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl: container_1445462013958_0012_01_000002 Container Transitioned from ACQUIRED to RUNNING
2015-10-22 10:10:53,830 INFO org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl: container_1445462013958_0012_01_000003 Container Transitioned from ACQUIRED to RUNNING
2015-10-22 10:10:58,282 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.AppSchedulingInfo: checking for deactivate of application :application_1445462013958_0012
2015-10-22 10:11:08,349 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: Updating application attempt appattempt_1445462013958_0012_000001 with final state: FINISHING, and exit status: -1000
2015-10-22 10:11:08,349 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1445462013958_0012_000001 State change from RUNNING to FINAL_SAVING
2015-10-22 10:11:08,349 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: Updating application application_1445462013958_0012 with final state: FINISHING
2015-10-22 10:11:08,349 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: application_1445462013958_0012 State change from RUNNING to FINAL_SAVING
2015-10-22 10:11:08,350 INFO org.apache.hadoop.yarn.server.resourcemanager.recovery.RMStateStore: Updating info for app: application_1445462013958_0012
2015-10-22 10:11:08,350 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1445462013958_0012_000001 State change from FINAL_SAVING to FINISHING
2015-10-22 10:11:08,350 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: application_1445462013958_0012 State change from FINAL_SAVING to FINISHING
2015-10-22 10:11:08,453 INFO org.apache.hadoop.yarn.server.resourcemanager.ApplicationMasterService: application_1445462013958_0012 unregistered successfully.
2015-10-22 10:11:08,692 INFO org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl: container_1445462013958_0012_01_000002 Container Transitioned from RUNNING to COMPLETED
2015-10-22 10:11:08,692 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.common.fica.FiCaSchedulerApp: Completed container: container_1445462013958_0012_01_000002 in state: COMPLETED event:FINISHED
2015-10-22 10:11:08,692 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=hdfs OPERATION=AM Released Container TARGET=SchedulerApp RESULT=SUCCESS APPID=application_1445462013958_0012 CONTAINERID=container_1445462013958_0012_01_000002
2015-10-22 10:11:08,693 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerNode: Released container container_1445462013958_0012_01_000002 of capacity <memory:2048, vCores:1> on host vanpghdcn3.pgdev.sap.corp:41419, which currently has 1 containers, <memory:1024, vCores:1> used and <memory:7168, vCores:7> available, release resources=true
2015-10-22 10:11:08,693 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue: default used=<memory:3072, vCores:2> numContainers=2 user=hdfs user-resources=<memory:3072, vCores:2>
2015-10-22 10:11:08,693 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue: completedContainer container=Container: [ContainerId: container_1445462013958_0012_01_000002, NodeId: vanpghdcn3.pgdev.sap.corp:41419, NodeHttpAddress: vanpghdcn3.pgdev.sap.corp:8042, Resource: <memory:2048, vCores:1>, Priority: 1, Token: Token { kind: ContainerToken, service: 10.165.28.145:41419 }, ] queue=default: capacity=1.0, absoluteCapacity=1.0, usedResources=<memory:3072, vCores:2>, usedCapacity=0.125, absoluteUsedCapacity=0.125, numApps=1, numContainers=2 cluster=<memory:24576, vCores:24>
2015-10-22 10:11:08,693 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: completedContainer queue=root usedCapacity=0.125 absoluteUsedCapacity=0.125 used=<memory:3072, vCores:2> cluster=<memory:24576, vCores:24>
2015-10-22 10:11:08,693 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: Re-sorting completed queue: root.default stats: default: capacity=1.0, absoluteCapacity=1.0, usedResources=<memory:3072, vCores:2>, usedCapacity=0.125, absoluteUsedCapacity=0.125, numApps=1, numContainers=2
2015-10-22 10:11:08,693 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Application attempt appattempt_1445462013958_0012_000001 released container container_1445462013958_0012_01_000002 on node: host: vanpghdcn3.pgdev.sap.corp:41419 #containers=1 available=<memory:7168, vCores:7> used=<memory:1024, vCores:1> with event: FINISHED
2015-10-22 10:11:08,704 INFO org.apache.hadoop.ipc.Server: Socket Reader #1 for port 8050: readAndProcess from client 10.161.43.118 threw exception [java.io.IOException: Connection reset by peer]
java.io.IOException: Connection reset by peer
at sun.nio.ch.FileDispatcherImpl.read0(Native Method)
at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:39)
at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:223)
at sun.nio.ch.IOUtil.read(IOUtil.java:197)
at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:379)
at org.apache.hadoop.ipc.Server.channelRead(Server.java:2603)
at org.apache.hadoop.ipc.Server.access$2800(Server.java:136)
at org.apache.hadoop.ipc.Server$Connection.readAndProcess(Server.java:1481)
at org.apache.hadoop.ipc.Server$Listener.doRead(Server.java:771)
at org.apache.hadoop.ipc.Server$Listener$Reader.doRunLoop(Server.java:637)
at org.apache.hadoop.ipc.Server$Listener$Reader.run(Server.java:608)
2015-10-22 10:11:08,920 INFO org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl: container_1445462013958_0012_01_000003 Container Transitioned from RUNNING to COMPLETED
2015-10-22 10:11:08,920 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.common.fica.FiCaSchedulerApp: Completed container: container_1445462013958_0012_01_000003 in state: COMPLETED event:FINISHED
2015-10-22 10:11:08,920 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=hdfs OPERATION=AM Released Container TARGET=SchedulerApp RESULT=SUCCESS APPID=application_1445462013958_0012 CONTAINERID=container_1445462013958_0012_01_000003
2015-10-22 10:11:08,920 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerNode: Released container container_1445462013958_0012_01_000003 of capacity <memory:2048, vCores:1> on host vanpghdcn2.pgdev.sap.corp:36064, which currently has 0 containers, <memory:0, vCores:0> used and <memory:8192, vCores:8> available, release resources=true
2015-10-22 10:11:08,920 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue: default used=<memory:1024, vCores:1> numContainers=1 user=hdfs user-resources=<memory:1024, vCores:1>
2015-10-22 10:11:08,921 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue: completedContainer container=Container: [ContainerId: container_1445462013958_0012_01_000003, NodeId: vanpghdcn2.pgdev.sap.corp:36064, NodeHttpAddress: vanpghdcn2.pgdev.sap.corp:8042, Resource: <memory:2048, vCores:1>, Priority: 1, Token: Token { kind: ContainerToken, service: 10.165.28.143:36064 }, ] queue=default: capacity=1.0, absoluteCapacity=1.0, usedResources=<memory:1024, vCores:1>, usedCapacity=0.041666668, absoluteUsedCapacity=0.041666668, numApps=1, numContainers=1 cluster=<memory:24576, vCores:24>
2015-10-22 10:11:08,921 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: completedContainer queue=root usedCapacity=0.041666668 absoluteUsedCapacity=0.041666668 used=<memory:1024, vCores:1> cluster=<memory:24576, vCores:24>
2015-10-22 10:11:08,921 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: Re-sorting completed queue: root.default stats: default: capacity=1.0, absoluteCapacity=1.0, usedResources=<memory:1024, vCores:1>, usedCapacity=0.041666668, absoluteUsedCapacity=0.041666668, numApps=1, numContainers=1
2015-10-22 10:11:08,921 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Application attempt appattempt_1445462013958_0012_000001 released container container_1445462013958_0012_01_000003 on node: host: vanpghdcn2.pgdev.sap.corp:36064 #containers=0 available=<memory:8192, vCores:8> used=<memory:0, vCores:0> with event: FINISHED
2015-10-22 10:11:09,694 INFO org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl: container_1445462013958_0012_01_000001 Container Transitioned from RUNNING to COMPLETED
2015-10-22 10:11:09,694 INFO org.apache.hadoop.yarn.server.resourcemanager.ApplicationMasterService: Unregistering app attempt : appattempt_1445462013958_0012_000001
2015-10-22 10:11:09,694 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.common.fica.FiCaSchedulerApp: Completed container: container_1445462013958_0012_01_000001 in state: COMPLETED event:FINISHED
2015-10-22 10:11:09,694 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=hdfs OPERATION=AM Released Container TARGET=SchedulerApp RESULT=SUCCESS APPID=application_1445462013958_0012 CONTAINERID=container_1445462013958_0012_01_000001
2015-10-22 10:11:09,695 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerNode: Released container container_1445462013958_0012_01_000001 of capacity <memory:1024, vCores:1> on host vanpghdcn3.pgdev.sap.corp:41419, which currently has 0 containers, <memory:0, vCores:0> used and <memory:8192, vCores:8> available, release resources=true
2015-10-22 10:11:09,695 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue: default used=<memory:0, vCores:0> numContainers=0 user=hdfs user-resources=<memory:0, vCores:0>
2015-10-22 10:11:09,695 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue: completedContainer container=Container: [ContainerId: container_1445462013958_0012_01_000001, NodeId: vanpghdcn3.pgdev.sap.corp:41419, NodeHttpAddress: vanpghdcn3.pgdev.sap.corp:8042, Resource: <memory:1024, vCores:1>, Priority: 0, Token: Token { kind: ContainerToken, service: 10.165.28.145:41419 }, ] queue=default: capacity=1.0, absoluteCapacity=1.0, usedResources=<memory:0, vCores:0>, usedCapacity=0.0, absoluteUsedCapacity=0.0, numApps=1, numContainers=0 cluster=<memory:24576, vCores:24>
2015-10-22 10:11:09,695 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: completedContainer queue=root usedCapacity=0.0 absoluteUsedCapacity=0.0 used=<memory:0, vCores:0> cluster=<memory:24576, vCores:24>
2015-10-22 10:11:09,695 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: Re-sorting completed queue: root.default stats: default: capacity=1.0, absoluteCapacity=1.0, usedResources=<memory:0, vCores:0>, usedCapacity=0.0, absoluteUsedCapacity=0.0, numApps=1, numContainers=0
2015-10-22 10:11:09,695 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Application attempt appattempt_1445462013958_0012_000001 released container container_1445462013958_0012_01_000001 on node: host: vanpghdcn3.pgdev.sap.corp:41419 #containers=0 available=<memory:8192, vCores:8> used=<memory:0, vCores:0> with event: FINISHED
2015-10-22 10:11:09,694 INFO org.apache.hadoop.yarn.server.resourcemanager.security.AMRMTokenSecretManager: Application finished, removing password for appattempt_1445462013958_0012_000001
2015-10-22 10:11:09,696 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1445462013958_0012_000001 State change from FINISHING to FINISHED
2015-10-22 10:11:09,696 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: application_1445462013958_0012 State change from FINISHING to FINISHED
2015-10-22 10:11:09,696 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Application Attempt appattempt_1445462013958_0012_000001 is done. finalState=FINISHED
2015-10-22 10:11:09,696 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=hdfs OPERATION=Application Finished - Succeeded TARGET=RMAppManager RESULT=SUCCESS APPID=application_1445462013958_0012
2015-10-22 10:11:09,696 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.AppSchedulingInfo: Application application_1445462013958_0012 requests cleared
2015-10-22 10:11:09,696 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue: Application removed - appId: application_1445462013958_0012 user: hdfs queue: default #user-pending-applications: 0 #user-active-applications: 0 #queue-pending-applications: 0 #queue-active-applications: 0
2015-10-22 10:11:09,696 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: Application removed - appId: application_1445462013958_0012 user: hdfs leaf-queue of parent: root #applications: 0
2015-10-22 10:11:09,696 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAppManager$ApplicationSummary: appId=application_1445462013958_0012,name=Sparca Application,user=hdfs,queue=default,state=FINISHED,trackingUrl=http://vanpghdcn1:8088/proxy/application_1445462013958_0012/,appMasterHost=10.165.28.145,startTime=1445533842429,finishTime=1445533868349,finalStatus=SUCCEEDED,memorySeconds=109990,vcoreSeconds=67,preemptedAMContainers=0,preemptedNonAMContainers=0,preemptedResources=<memory:0\, vCores:0>,applicationType=SPARK
2015-10-22 10:11:09,696 INFO org.apache.hadoop.yarn.server.resourcemanager.amlauncher.AMLauncher: Cleaning master appattempt_1445462013958_0012_000001
2015-10-22 10:11:10,719 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Null container completed...
2015-10-22 10:11:10,925 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Null container completed...
Best answer
How big is your file?
This indicates that some kind of client-side network issue temporarily disrupted communication between the host and the endpoint.
If the file is large enough, transferring the data over the TCP channel takes a long time, the connection cannot stay alive for that duration, and it gets reset. Hope this answer helps.
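If a slow transfer really is causing the reset, one mitigation worth trying is raising Spark's network timeouts. A sketch of the relevant settings (the values below are hypothetical starting points, not verified against this cluster):

```
# spark-defaults.conf -- illustrative values; tune to your workload
spark.network.timeout            600s
spark.executor.heartbeatInterval 60s
```

The same properties can be set programmatically on the SparkConf before the SparkContext is created.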
Regarding apache-spark - "java.io.IOException: Connection reset by peer" thrown on the Resource Manager when launching Spark on YARN, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/33287813/