
hadoop - Spark application stuck in ACCEPTED state


I have a fresh installation of Cloudera 5.4 on an Ubuntu 14.04 server and want to run one of the bundled Spark example applications.

Here is the command:

sudo -uhdfs spark-submit --class org.apache.spark.examples.SparkPi --deploy-mode cluster --master yarn /opt/cloudera/parcels/CDH-5.4.5-1.cdh5.4.5.p0.7/jars/spark-examples-1.3.0-cdh5.4.5-hadoop2.6.0-cdh5.4.5.jar

Here is the output:

SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/opt/cloudera/parcels/CDH-5.4.5-1.cdh5.4.5.p0.7/jars/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/opt/cloudera/parcels/CDH-5.4.5-1.cdh5.4.5.p0.7/jars/avro-tools-1.7.6-cdh5.4.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
15/08/29 12:07:56 INFO RMProxy: Connecting to ResourceManager at chd2.moneyball.guru/104.131.78.0:8032
15/08/29 12:07:56 INFO Client: Requesting a new application from cluster with 1 NodeManagers
15/08/29 12:07:56 INFO Client: Verifying our application has not requested more than the maximum memory capability of the cluster (1750 MB per container)
15/08/29 12:07:56 INFO Client: Will allocate AM container, with 896 MB memory including 384 MB overhead
15/08/29 12:07:56 INFO Client: Setting up container launch context for our AM
15/08/29 12:07:56 INFO Client: Preparing resources for our AM container
15/08/29 12:07:57 INFO Client: Uploading resource file:/opt/cloudera/parcels/CDH-5.4.5-1.cdh5.4.5.p0.7/jars/spark-examples-1.3.0-cdh5.4.5-hadoop2.6.0-cdh5.4.5.jar -> hdfs://chd2.moneyball.guru:8020/user/hdfs/.sparkStaging/application_1440861466017_0007/spark-examples-1.3.0-cdh5.4.5-hadoop2.6.0-cdh5.4.5.jar
15/08/29 12:07:57 INFO Client: Setting up the launch environment for our AM container
15/08/29 12:07:57 INFO SecurityManager: Changing view acls to: hdfs
15/08/29 12:07:57 INFO SecurityManager: Changing modify acls to: hdfs
15/08/29 12:07:57 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(hdfs); users with modify permissions: Set(hdfs)
15/08/29 12:07:57 INFO Client: Submitting application 7 to ResourceManager
15/08/29 12:07:57 INFO YarnClientImpl: Submitted application application_1440861466017_0007
15/08/29 12:07:58 INFO Client: Application report for application_1440861466017_0007 (state: ACCEPTED)
15/08/29 12:07:58 INFO Client:
client token: N/A
diagnostics: N/A
ApplicationMaster host: N/A
ApplicationMaster RPC port: -1
queue: root.hdfs
start time: 1440864477580
final status: UNDEFINED
tracking URL: http://chd2.moneyball.guru:8088/proxy/application_1440861466017_0007/
user: hdfs
15/08/29 12:07:59 INFO Client: Application report for application_1440861466017_0007 (state: ACCEPTED)
15/08/29 12:08:00 INFO Client: Application report for application_1440861466017_0007 (state: ACCEPTED)
15/08/29 12:08:01 INFO Client: Application report for application_1440861466017_0007 (state: ACCEPTED)
15/08/29 12:08:02 INFO Client: Application report for application_1440861466017_0007 (state: ACCEPTED)
15/08/29 12:08:03 INFO Client: Application report for application_1440861466017_0007 (state: ACCEPTED)
15/08/29 12:08:04 INFO Client: Application report for application_1440861466017_0007 (state: ACCEPTED)
15/08/29 12:08:05 INFO Client: Application report for application_1440861466017_0007 (state: ACCEPTED)
15/08/29 12:08:06 INFO Client: Application report for application_1440861466017_0007 (state: ACCEPTED)
15/08/29 12:08:07 INFO Client: Application report for application_1440861466017_0007 (state: ACCEPTED)
.....

The last line just keeps repeating. Can you help? Let me know if you need any further information.

Best answer

I increased yarn.nodemanager.resource.memory-mb and everything works now. The application was stuck in ACCEPTED because the single NodeManager was not advertising enough memory for YARN to allocate the ApplicationMaster container, so the job sat in the queue waiting for resources.
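A minimal sketch of that change, assuming the property is edited directly in yarn-site.xml (the 4096 MB value is illustrative; choose a value that fits your host's physical RAM, and restart the NodeManager afterwards):

```xml
<!-- yarn-site.xml -->
<!-- Total memory this NodeManager offers to YARN containers. -->
<!-- 4096 MB is an illustrative value; it must comfortably exceed the
     AM request (896 MB in the log above) plus any executor containers,
     while staying below the machine's physical memory. -->
<property>
  <name>yarn.nodemanager.resource.memory-mb</name>
  <value>4096</value>
</property>
```

On a Cloudera-managed cluster, the same setting is normally changed through Cloudera Manager (the YARN NodeManager "Container Memory" configuration) rather than by editing the file by hand, since Cloudera Manager regenerates yarn-site.xml on restart.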

Regarding "hadoop - Spark application stuck in ACCEPTED state", a similar question can be found on Stack Overflow: https://stackoverflow.com/questions/32288202/
