cancerdetector@cluster-cancerdetector-m:~/SparkBWA/build$ spark-submit --class SparkBWA --master yarn-cluster --deploy-mode cluster --conf spark.yarn.jar=hdfs:///user/spark/spark-assembly.jar --driver-memory 1500m --executor-memory 1500m --executor-cores 1 --archives ./bwa.zip --verbose ./SparkBWA.jar -algorithm mem -reads paired -index /Data/HumanBase/hg38 -partitions 32 ERR000589_1.filt.fastq ERR000589_2.filt.fastqhb Output_ERR000589
Using properties file: /usr/lib/spark/conf/spark-defaults.conf
Adding default property: spark.executor.extraJavaOptions=-Xbootclasspath/p:/usr/local/share/google/alpn/alpn-boot-8.1.7.v20160121.jar
Adding default property: spark.history.fs.logDirectory=hdfs://cluster-cancerdetector-m/user/spark/eventlog
Adding default property: spark.eventLog.enabled=true
Adding default property: spark.driver.maxResultSize=1920m
Adding default property: spark.shuffle.service.enabled=true
Adding default property: spark.yarn.historyServer.address=cluster-cancerdetector-m:18080
Adding default property: spark.sql.parquet.cacheMetadata=false
Adding default property: spark.driver.memory=3840m
Adding default property: spark.dynamicAllocation.maxExecutors=10000
Adding default property: spark.scheduler.minRegisteredResourcesRatio=0.0
Adding default property: spark.yarn.am.memoryOverhead=558
Adding default property: spark.yarn.am.memory=5586m
Adding default property: spark.driver.extraJavaOptions=-Xbootclasspath/p:/usr/local/share/google/alpn/alpn-boot-8.1.7.v20160121.jar
Adding default property: spark.master=yarn-client
Adding default property: spark.executor.memory=5586m
Adding default property: spark.eventLog.dir=hdfs://cluster-cancerdetector-m/user/spark/eventlog
Adding default property: spark.dynamicAllocation.enabled=true
Adding default property: spark.executor.cores=2
Adding default property: spark.yarn.executor.memoryOverhead=558
Adding default property: spark.dynamicAllocation.minExecutors=1
Adding default property: spark.dynamicAllocation.initialExecutors=10000
Adding default property: spark.akka.frameSize=512
Parsed arguments:
master yarn-cluster
deployMode cluster
executorMemory 1500m
executorCores 1
totalExecutorCores null
propertiesFile /usr/lib/spark/conf/spark-defaults.conf
driverMemory 1500m
driverCores null
driverExtraClassPath null
driverExtraLibraryPath null
driverExtraJavaOptions -Xbootclasspath/p:/usr/local/share/google/alpn/alpn-boot-8.1.7.v20160121.jar
supervise false
queue null
numExecutors null
files null
pyFiles null
archives file:/home/cancerdetector/SparkBWA/build/./bwa.zip
mainClass SparkBWA
primaryResource file:/home/cancerdetector/SparkBWA/build/./SparkBWA.jar
name SparkBWA
childArgs [-algorithm mem -reads paired -index /Data/HumanBase/hg38 -partitions 32 ERR000589_1.filt.fastq ERR000589_2.filt.fastqhb Output_ERR000589]
jars null
packages null
packagesExclusions null
repositories null
verbose true
Spark properties used, including those specified through
--conf and those from the properties file /usr/lib/spark/conf/spark-defaults.conf:
spark.yarn.am.memoryOverhead -> 558
spark.driver.memory -> 1500m
spark.yarn.jar -> hdfs:///user/spark/spark-assembly.jar
spark.executor.memory -> 5586m
spark.yarn.historyServer.address -> cluster-cancerdetector-m:18080
spark.eventLog.enabled -> true
spark.scheduler.minRegisteredResourcesRatio -> 0.0
spark.dynamicAllocation.maxExecutors -> 10000
spark.akka.frameSize -> 512
spark.executor.extraJavaOptions -> -Xbootclasspath/p:/usr/local/share/google/alpn/alpn-boot-8.1.7.v20160121.jar
spark.sql.parquet.cacheMetadata -> false
spark.shuffle.service.enabled -> true
spark.history.fs.logDirectory -> hdfs://cluster-cancerdetector-m/user/spark/eventlog
spark.dynamicAllocation.initialExecutors -> 10000
spark.dynamicAllocation.minExecutors -> 1
spark.yarn.executor.memoryOverhead -> 558
spark.driver.extraJavaOptions -> -Xbootclasspath/p:/usr/local/share/google/alpn/alpn-boot-8.1.7.v20160121.jar
spark.eventLog.dir -> hdfs://cluster-cancerdetector-m/user/spark/eventlog
spark.yarn.am.memory -> 5586m
spark.driver.maxResultSize -> 1920m
spark.master -> yarn-client
spark.dynamicAllocation.enabled -> true
spark.executor.cores -> 2
Main class: org.apache.spark.deploy.yarn.Client
Arguments:
--name SparkBWA
--driver-memory 1500m
--executor-memory 1500m
--executor-cores 1
--archives file:/home/cancerdetector/SparkBWA/build/./bwa.zip
--jar file:/home/cancerdetector/SparkBWA/build/./SparkBWA.jar
--class SparkBWA
-algorithm mem
-reads paired
-index /Data/HumanBase/hg38
-partitions 32
ERR000589_1.filt.fastq
ERR000589_2.filt.fastqhb
Output_ERR000589
System properties:
spark.yarn.am.memoryOverhead -> 558
spark.driver.memory -> 1500m
spark.yarn.jar -> hdfs:///user/spark/spark-assembly.jar
spark.executor.memory -> 1500m
spark.yarn.historyServer.address -> cluster-cancerdetector-m:18080
spark.eventLog.enabled -> true
spark.scheduler.minRegisteredResourcesRatio -> 0.0
SPARK_SUBMIT -> true
spark.dynamicAllocation.maxExecutors -> 10000
spark.akka.frameSize -> 512
spark.sql.parquet.cacheMetadata -> false
spark.executor.extraJavaOptions -> -Xbootclasspath/p:/usr/local/share/google/alpn/alpn-boot-8.1.7.v20160121.jar
spark.app.name -> SparkBWA
spark.shuffle.service.enabled -> true
spark.history.fs.logDirectory -> hdfs://cluster-cancerdetector-m/user/spark/eventlog
spark.dynamicAllocation.initialExecutors -> 10000
spark.dynamicAllocation.minExecutors -> 1
spark.yarn.executor.memoryOverhead -> 558
spark.driver.extraJavaOptions -> -Xbootclasspath/p:/usr/local/share/google/alpn/alpn-boot-8.1.7.v20160121.jar
spark.submit.deployMode -> cluster
spark.eventLog.dir -> hdfs://cluster-cancerdetector-m/user/spark/eventlog
spark.yarn.am.memory -> 5586m
spark.driver.maxResultSize -> 1920m
spark.master -> yarn-cluster
spark.dynamicAllocation.enabled -> true
spark.executor.cores -> 1
Classpath elements:
spark.yarn.am.memory is set but does not apply in cluster mode.
spark.yarn.am.memoryOverhead is set but does not apply in cluster mode.
16/07/31 01:12:39 INFO org.apache.hadoop.yarn.client.RMProxy: Connecting to ResourceManager at cluster-cancerdetector-m/10.132.0.2:8032
16/07/31 01:12:40 INFO org.apache.hadoop.yarn.client.api.impl.YarnClientImpl: Submitted application application_1467990031555_0106
Exception in thread "main" org.apache.spark.SparkException: Application application_1467990031555_0106 finished with failed status
at org.apache.spark.deploy.yarn.Client.run(Client.scala:1034)
at org.apache.spark.deploy.yarn.Client$.main(Client.scala:1081)
at org.apache.spark.deploy.yarn.Client.main(Client.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:731)
at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:181)
at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:206)
at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:121)
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
2016-07-31 01:12:40,387 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: 10.132.0.4:50010 is added to blk_1073742335_1511{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-19f52f20-0053-443d-bf33-dd636d8b2d07:NORMAL:10.132.0.3:50010|RBW], ReplicaUC[[DISK]DS-6b7272d9-24d2-4d77-85e2-49c492bd12a4:NORMAL:10.132.0.4:50010|RBW]]} size 0
2016-07-31 01:12:40,387 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: 10.132.0.3:50010 is added to blk_1073742335_1511{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-19f52f20-0053-443d-bf33-dd636d8b2d07:NORMAL:10.132.0.3:50010|RBW], ReplicaUC[[DISK]DS-6b7272d9-24d2-4d77-85e2-49c492bd12a4:NORMAL:10.132.0.4:50010|RBW]]} size 0
2016-07-31 01:12:40,391 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/cancerdetector/.sparkStaging/application_1467990031555_0106/SparkBWA.jar is closed by DFSClient_NONMAPREDUCE_-762268348_1
2016-07-31 01:12:40,419 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate blk_1073742336_1512{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-19f52f20-0053-443d-bf33-dd636d8b2d07:NORMAL:10.132.0.3:50010|RBW], ReplicaUC[[DISK]DS-6b7272d9-24d2-4d77-85e2-49c492bd12a4:NORMAL:10.132.0.4:50010|RBW]]} for /user/cancerdetector/.sparkStaging/application_1467990031555_0106/bwa.zip
2016-07-31 01:12:40,445 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: 10.132.0.4:50010 is added to blk_1073742336_1512{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-19f52f20-0053-443d-bf33-dd636d8b2d07:NORMAL:10.132.0.3:50010|RBW], ReplicaUC[[DISK]DS-6b7272d9-24d2-4d77-85e2-49c492bd12a4:NORMAL:10.132.0.4:50010|RBW]]} size 0
2016-07-31 01:12:40,446 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: 10.132.0.3:50010 is added to blk_1073742336_1512{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-19f52f20-0053-443d-bf33-dd636d8b2d07:NORMAL:10.132.0.3:50010|RBW], ReplicaUC[[DISK]DS-6b7272d9-24d2-4d77-85e2-49c492bd12a4:NORMAL:10.132.0.4:50010|RBW]]} size 0
2016-07-31 01:12:40,448 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/cancerdetector/.sparkStaging/application_1467990031555_0106/bwa.zip is closed by DFSClient_NONMAPREDUCE_-762268348_1
2016-07-31 01:12:40,495 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate blk_1073742337_1513{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-19f52f20-0053-443d-bf33-dd636d8b2d07:NORMAL:10.132.0.3:50010|RBW], ReplicaUC[[DISK]DS-6b7272d9-24d2-4d77-85e2-49c492bd12a4:NORMAL:10.132.0.4:50010|RBW]]} for /user/cancerdetector/.sparkStaging/application_1467990031555_0106/__spark_conf__2552000168715758347.zip
2016-07-31 01:12:40,506 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: 10.132.0.4:50010 is added to blk_1073742337_1513{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-19f52f20-0053-443d-bf33-dd636d8b2d07:NORMAL:10.132.0.3:50010|RBW], ReplicaUC[[DISK]DS-6b7272d9-24d2-4d77-85e2-49c492bd12a4:NORMAL:10.132.0.4:50010|RBW]]} size 0
2016-07-31 01:12:40,506 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: 10.132.0.3:50010 is added to blk_1073742337_1513{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-19f52f20-0053-443d-bf33-dd636d8b2d07:NORMAL:10.132.0.3:50010|RBW], ReplicaUC[[DISK]DS-6b7272d9-24d2-4d77-85e2-49c492bd12a4:NORMAL:10.132.0.4:50010|RBW]]} size 0
2016-07-31 01:12:40,509 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/cancerdetector/.sparkStaging/application_1467990031555_0106/__spark_conf__2552000168715758347.zip is closed by DFSClient_NONMAPREDUCE_-762268348_1
2016-07-31 01:12:44,720 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate blk_1073742338_1514{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-19f52f20-0053-443d-bf33-dd636d8b2d07:NORMAL:10.132.0.3:50010|RBW], ReplicaUC[[DISK]DS-6b7272d9-24d2-4d77-85e2-49c492bd12a4:NORMAL:10.132.0.4:50010|RBW]]} for /user/spark/eventlog/application_1467990031555_0106_1.inprogress
2016-07-31 01:12:44,877 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* fsync: /user/spark/eventlog/application_1467990031555_0106_1.inprogress for DFSClient_NONMAPREDUCE_-1111833453_14
2016-07-31 01:12:45,373 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: 10.132.0.4:50010 is added to blk_1073742338_1514{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-19f52f20-0053-443d-bf33-dd636d8b2d07:NORMAL:10.132.0.3:50010|RBW], ReplicaUC[[DISK]DS-6b7272d9-24d2-4d77-85e2-49c492bd12a4:NORMAL:10.132.0.4:50010|RBW]]} size 231
2016-07-31 01:12:45,375 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: 10.132.0.3:50010 is added to blk_1073742338_1514{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-19f52f20-0053-443d-bf33-dd636d8b2d07:NORMAL:10.132.0.3:50010|RBW], ReplicaUC[[DISK]DS-6b7272d9-24d2-4d77-85e2-49c492bd12a4:NORMAL:10.132.0.4:50010|RBW]]} size 231
2016-07-31 01:12:45,379 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/eventlog/application_1467990031555_0106_1.inprogress is closed by DFSClient_NONMAPREDUCE_-1111833453_14
2016-07-31 01:12:45,843 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/eventlog/.b7989393-f278-477c-8e83-ff5da9079e8a is closed by DFSClient_NONMAPREDUCE_-1615501432_1
2016-07-31 01:12:49,914 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate blk_1073742339_1515{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-19f52f20-0053-443d-bf33-dd636d8b2d07:NORMAL:10.132.0.3:50010|RBW], ReplicaUC[[DISK]DS-6b7272d9-24d2-4d77-85e2-49c492bd12a4:NORMAL:10.132.0.4:50010|RBW]]} for /user/spark/eventlog/application_1467990031555_0106_2.inprogress
2016-07-31 01:12:50,100 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* fsync: /user/spark/eventlog/application_1467990031555_0106_2.inprogress for DFSClient_NONMAPREDUCE_378341726_14
2016-07-31 01:12:50,737 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: 10.132.0.4:50010 is added to blk_1073742339_1515{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-19f52f20-0053-443d-bf33-dd636d8b2d07:NORMAL:10.132.0.3:50010|RBW], ReplicaUC[[DISK]DS-6b7272d9-24d2-4d77-85e2-49c492bd12a4:NORMAL:10.132.0.4:50010|RBW]]} size 231
2016-07-31 01:12:50,738 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: 10.132.0.3:50010 is added to blk_1073742339_1515{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-19f52f20-0053-443d-bf33-dd636d8b2d07:NORMAL:10.132.0.3:50010|RBW], ReplicaUC[[DISK]DS-6b7272d9-24d2-4d77-85e2-49c492bd12a4:NORMAL:10.132.0.4:50010|RBW]]} size 231
2016-07-31 01:12:50,742 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/eventlog/application_1467990031555_0106_2.inprogress is closed by DFSClient_NONMAPREDUCE_378341726_14
2016-07-31 01:12:50,892 INFO BlockStateChange: BLOCK* addToInvalidates: blk_1073742335_1511 10.132.0.3:50010 10.132.0.4:50010
2016-07-31 01:12:50,892 INFO BlockStateChange: BLOCK* addToInvalidates: blk_1073742337_1513 10.132.0.3:50010 10.132.0.4:50010
2016-07-31 01:12:50,892 INFO BlockStateChange: BLOCK* addToInvalidates: blk_1073742336_1512 10.132.0.3:50010 10.132.0.4:50010
2016-07-31 01:12:51,804 INFO BlockStateChange: BLOCK* BlockManager: ask 10.132.0.3:50010 to delete [blk_1073742336_1512, blk_1073742337_1513, blk_1073742335_1511]
2016-07-31 01:12:54,804 INFO BlockStateChange: BLOCK* BlockManager: ask 10.132.0.4:50010 to delete [blk_1073742336_1512, blk_1073742337_1513, blk_1073742335_1511]
2016-07-31 01:12:55,868 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/eventlog/.46380a1f-b5fd-4924-96aa-f59dcae0cbec is closed by DFSClient_NONMAPREDUCE_-1615501432_1
2016-07-31 01:13:05,882 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Number of transactions: 244 Total time for transactions(ms): 5 Number of transactions batched in Syncs: 0 Number of syncs: 234 SyncTimes(ms): 221
2016-07-31 01:13:05,885 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/eventlog/.7273ee28-eb1c-4fe2-98d2-c5a20ebe4ffa is closed by DFSClient_NONMAPREDUCE_-1615501432_1
2016-07-31 01:13:15,892 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/eventlog/.0f640743-d06c-4583-ac95-9d520dc8f301 is closed by DFSClient_NONMAPREDUCE_-1615501432_1
2016-07-31 01:13:25,902 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/eventlog/.bc63864c-0267-47b5-bcc1-96ba81d6c9a5 is closed by DFSClient_NONMAPREDUCE_-1615501432_1
2016-07-31 01:13:35,910 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/eventlog/.93557793-2ba2-47e8-b54c-234c861b6e6c is closed by DFSClient_NONMAPREDUCE_-1615501432_1
2016-07-31 01:13:45,918 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/eventlog/.0fdf083c-3c53-4051-af16-d579f700962e is closed by DFSClient_NONMAPREDUCE_-1615501432_1
2016-07-31 01:13:55,927 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/eventlog/.834632f1-d9c6-4e14-9354-72f8c18f66d0 is closed by DFSClient_NONMAPREDUCE_-1615501432_1
2016-07-31 01:14:05,933 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Number of transactions: 262 Total time for transactions(ms): 5 Number of transactions batched in Syncs: 0 Number of syncs: 252 SyncTimes(ms): 236
2016-07-31 01:14:05,936 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/eventlog/.d06ef3b4-873f-464d-9cd0-e360da48e194 is closed by DFSClient_NONMAPREDUCE_-1615501432_1
2016-07-31 01:14:15,944 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/eventlog/.32ccba74-5f6c-45fc-b5db-26efb1b840e2 is closed by DFSClient_NONMAPREDUCE_-1615501432_1
2016-07-31 01:14:25,952 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/eventlog/.fef919cd-9952-4af8-a49a-e6dd2aa032f1 is closed by DFSClient_NONMAPREDUCE_-1615501432_1
2016-07-31 01:14:35,961 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/eventlog/.77ffdf36-8e42-43d8-9c1f-df6f3d11700d is closed by DFSClient_NONMAPREDUCE_-1615501432_1
2016-07-31 01:14:45,968 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/eventlog/.c31cfcbb-b47c-4169-ab0f-7ae87d4f815d is closed by DFSClient_NONMAPREDUCE_-1615501432_1
2016-07-31 01:14:55,976 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/eventlog/.6429570d-fb0a-4117-bb12-127a67e0a0b7 is closed by DFSClient_NONMAPREDUCE_-1615501432_1
2016-07-31 01:15:05,981 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Number of transactions: 280 Total time for transactions(ms): 6 Number of transactions batched in Syncs: 0 Number of syncs: 270 SyncTimes(ms): 253
2016-07-31 01:15:05,984 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/eventlog/.8030b18d-05f2-4520-b5c4-2fe42338b92b is closed by DFSClient_NONMAPREDUCE_-1615501432_1
2016-07-31 01:15:15,991 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/eventlog/.f608a0f4-e730-43cd-a19d-da57caac346e is closed by DFSClient_NONMAPREDUCE_-1615501432_1
2016-07-31 01:15:25,999 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/eventlog/.9d5a1f80-2f2a-43a7-84f1-b26a8c90a98f is closed by DFSClient_NONMAPREDUCE_-1615501432_1
2016-07-31 01:15:36,007 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/eventlog/.279e96fc-180c-47a5-a3ba-cfda581eedad is closed by DFSClient_NONMAPREDUCE_-1615501432_1
2016-07-31 01:15:46,015 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/eventlog/.a85bbf52-61f4-4899-98b1-23615a549774 is closed by DFSClient_NONMAPREDUCE_-1615501432_1
2016-07-31 01:15:56,023 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/eventlog/.80613e8e-7015-4aeb-81df-49884bd0eb5e is closed by DFSClient_NONMAPREDUCE_-1615501432_1
2016-07-31 01:16:06,028 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Number of transactions: 298 Total time for transactions(ms): 6 Number of transactions batched in Syncs: 0 Number of syncs: 288 SyncTimes(ms): 267
2016-07-31 01:16:06,031 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/eventlog/.2be7fc48-bd1c-4042-88e4-239b1c630458 is closed by DFSClient_NONMAPREDUCE_-1615501432_1
2016-07-31 01:16:16,038 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/eventlog/.40fc68a6-f003-4e35-b4b3-50bd3c4a0c82 is closed by DFSClient_NONMAPREDUCE_-1615501432_1
2016-07-31 01:16:26,045 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/eventlog/.97e7d15c-4d28-4089-b4a5-9f0935a72589 is closed by DFSClient_NONMAPREDUCE_-1615501432_1
2016-07-31 01:16:36,052 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/eventlog/.84d8e78d-90fd-419f-9000-fa04ab56955e is closed by DFSClient_NONMAPREDUCE_-1615501432_1
2016-07-31 01:16:46,059 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/eventlog/.6691cc3e-6969-4a8f-938f-272d1c96701d is closed by DFSClient_NONMAPREDUCE_-1615501432_1
2016-07-31 01:16:56,066 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/eventlog/.077143b6-281a-468c-8b2c-bcb6cd3bc27a is closed by DFSClient_NONMAPREDUCE_-1615501432_1
2016-07-31 01:17:06,070 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Number of transactions: 316 Total time for transactions(ms): 6 Number of transactions batched in Syncs: 0 Number of syncs: 306 SyncTimes(ms): 284
2016-07-31 01:17:06,073 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/eventlog/.817d1886-aea2-450a-a586-08677dc18d60 is closed by DFSClient_NONMAPREDUCE_-1615501432_1
2016-07-31 01:17:16,080 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/eventlog/.abd46886-1359-4c5e-8276-ea4f2969411f is closed by DFSClient_NONMAPREDUCE_-1615501432_1
2016-07-31 01:17:26,087 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/eventlog/.24625260-59be-4a9b-b47b-b8d5b76cb789 is closed by DFSClient_NONMAPREDUCE_-1615501432_1
2016-07-31 01:17:36,096 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/eventlog/.11630782-e50e-4260-a0da-99845bc3f1db is closed by DFSClient_NONMAPREDUCE_-1615501432_1
2016-07-31 01:17:46,103 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/eventlog/.16cdd027-f1b8-4cbf-a30c-2f1712f4abb5 is closed by DFSClient_NONMAPREDUCE_-1615501432_1
2016-07-31 01:17:56,111 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/eventlog/.93fb2e86-2fec-4069-b73b-632750fda603 is closed by DFSClient_NONMAPREDUCE_-1615501432_1
2016-07-31 01:18:06,116 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Number of transactions: 334 Total time for transactions(ms): 6 Number of transactions batched in Syncs: 0 Number of syncs: 324 SyncTimes(ms): 300
2016-07-31 01:18:06,119 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/eventlog/.b19fddda-ea90-49ab-b44d-434cce28cb67 is closed by DFSClient_NONMAPREDUCE_-1615501432_1
2016-07-31 01:18:16,127 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/eventlog/.d81ab189-bde5-4878-b82b-903983466f86 is closed by DFSClient_NONMAPREDUCE_-1615501432_1
2016-07-31 01:18:26,135 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/eventlog/.e5b51632-f714-4814-b896-59bba137b42d is closed by DFSClient_NONMAPREDUCE_-1615501432_1
2016-07-31 01:18:36,144 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/eventlog/.39791121-9399-4a22-a50c-90eaddf31ffb is closed by DFSClient_NONMAPREDUCE_-1615501432_1
2016-07-31 01:18:46,153 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/eventlog/.861c269b-5466-4855-84fd-587ed3306012 is closed by DFSClient_NONMAPREDUCE_-1615501432_1
2016-07-31 01:18:56,162 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/eventlog/.8a9ff721-bd56-4bea-b399-31bfaabe8c7c is closed by DFSClient_NONMAPREDUCE_-1615501432_1
2016-07-31 01:19:06,168 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Number of transactions: 352 Total time for transactions(ms): 7 Number of transactions batched in Syncs: 0 Number of syncs: 342 SyncTimes(ms): 313
2016-07-31 01:19:06,170 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/eventlog/.492bf987-4991-4533-80e2-678efa843cb9 is closed by DFSClient_NONMAPREDUCE_-1615501432_1
2016-07-31 01:19:16,178 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/eventlog/.9294c0c6-43db-4f6d-9d31-f493143b6baf is closed by DFSClient_NONMAPREDUCE_-1615501432_1
2016-07-31 01:19:26,187 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/eventlog/.341dd131-c14c-4147-bcbc-849d1d6bba8c is closed by DFSClient_NONMAPREDUCE_-1615501432_1
2016-07-31 01:19:36,196 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/eventlog/.56f92e8e-ef93-4279-a57f-472dd5d8f399 is closed by DFSClient_NONMAPREDUCE_-1615501432_1
2016-07-31 01:19:46,204 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/eventlog/.5ddcda82-b501-4043-bb54-a29902d9d234 is closed by DFSClient_NONMAPREDUCE_-1615501432_1
2016-07-31 01:19:56,212 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/eventlog/.31e7517b-2ef3-458c-9979-324d7a96302f is closed by DFSClient_NONMAPREDUCE_-1615501432_1
2016-07-31 01:20:06,218 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Number of transactions: 370 Total time for transactions(ms): 7 Number of transactions batched in Syncs: 0 Number of syncs: 360 SyncTimes(ms): 329
2016-07-31 01:20:06,220 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/eventlog/.5251f5df-0957-4008-b664-8d82eaa9789e is closed by DFSClient_NONMAPREDUCE_-1615501432_1
2016-07-31 01:20:16,229 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/eventlog/.3320b948-2478-4807-9ab3-d23e4945765e is closed by DFSClient_NONMAPREDUCE_-1615501432_1
2016-07-31 01:20:26,237 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/eventlog/.0928c940-e57d-4a34-a7dc-53dade7ff909 is closed by DFSClient_NONMAPREDUCE_-1615501432_1
2016-07-31 01:20:36,246 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/eventlog/.6240fcdf-696e-49c4-a883-3eda5ab89b4d is closed by DFSClient_NONMAPREDUCE_-1615501432_1
2016-07-31 01:20:46,254 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/eventlog/.5622850e-b7b0-458a-9ffa-89e134fa3fda is closed by DFSClient_NONMAPREDUCE_-1615501432_1
2016-07-31 01:20:56,262 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/eventlog/.faa076e8-490c-489f-8183-778325e0b144 is closed by DFSClient_NONMAPREDUCE_-1615501432_1
Best answer
First, you need to find out which host/node was chosen to run the ApplicationMaster. Go to the YARN UI and look up the Spark application there.
Once you have the node, go to the logs on its disk, e.g. logs/userlogs/application_1469891809555_0005/container_1469891809555_0005_01_000001/stderr. You need to find the stderr for container 000001, which is the ApplicationMaster's container for this Spark application.
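As a side note (an assumption, not part of the original answer): if YARN log aggregation is enabled on the cluster, the same container logs can usually be pulled from the command line instead of browsing the node's disk, using the application id reported in the output above:
# Fetch the aggregated container logs, including the ApplicationMaster's stderr
yarn logs -applicationId application_1467990031555_0106
# Show the application's final status and diagnostics message
yarn application -status application_1467990031555_0106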
About apache-spark - How to solve "Exception in thread "main" org.apache.spark.SparkException: Application application finished with failed status"? We found a similar question on Stack Overflow: https://stackoverflow.com/questions/38678151/