
apache-spark - Cannot submit Spark application to the cluster, stuck at "UNDEFINED"

Reposted. Author: 行者123. Updated: 2023-12-04 15:33:41

I submit a Spark application to a YARN cluster with this command:

export YARN_CONF_DIR=conf
bin/spark-submit --class "Mining" \
  --master yarn-cluster \
  --executor-memory 512m \
  ./target/scala-2.10/mining-assembly-0.1.jar

In the Web UI, the application stays at UNDEFINED.

In the console, it is stuck at:
14/11/12 16:37:55 INFO yarn.Client: Application report from ASM:
application identifier: application_1415704754709_0017
appId: 17
clientToAMToken: null
appDiagnostics:
appMasterHost: example.com
appQueue: default
appMasterRpcPort: 0
appStartTime: 1415784586000
yarnAppState: RUNNING
distributedFinalState: UNDEFINED
appTrackingUrl: http://example.com:8088/proxy/application_1415704754709_0017/
appUser: rain

Update:

Digging into the container logs in the Web UI at http://example.com:8042/node/containerlogs/container_1415704754709_0017_01_000001/rain/stderr/?start=0 , I found this:
14/11/12 02:11:47 WARN YarnClusterScheduler: Initial job has not accepted 
any resources; check your cluster UI to ensure that workers are registered
and have sufficient memory
14/11/12 02:11:47 DEBUG Client: IPC Client (1211012646) connection to
spark.mvs.vn/192.168.64.142:8030 from rain sending #24418
14/11/12 02:11:47 DEBUG Client: IPC Client (1211012646) connection to
spark.mvs.vn/192.168.64.142:8030 from rain got value #24418

I found that this problem has a solution here: http://hortonworks.com/hadoop-tutorial/using-apache-spark-hdp/
The Hadoop cluster must have sufficient memory for the request.

For example, submitting the following job with 1GB memory allocated for
executor and Spark driver fails with the above error in the HDP 2.1 Sandbox.
Reduce the memory asked for the executor and the Spark driver to 512m and
re-start the cluster.

I am trying this solution and hoping it works.

Best Answer

Solution

In the end I found it was caused by a memory problem.
It worked when I changed yarn.nodemanager.resource.memory-mb to 3072 (its previous value was 2048) in the Web UI and restarted the cluster.
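The same change can be made directly in yarn-site.xml instead of the management Web UI. A minimal sketch of the entry, using the property name and the 3072 MB value from this answer (adjust the value for your own nodes, and restart the NodeManagers afterwards):

```xml
<!-- yarn-site.xml: total memory each NodeManager offers to containers.
     Raised from 2048 to 3072 MB, as described in the answer above. -->
<property>
  <name>yarn.nodemanager.resource.memory-mb</name>
  <value>3072</value>
</property>
```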


After that, I was glad to see the application run successfully.

With 3GB on the YARN NodeManager, my submit command is:

bin/spark-submit \
  --class "Mining" \
  --master yarn-cluster \
  --executor-memory 512m \
  --driver-memory 512m \
  --num-executors 2 \
  --executor-cores 1 \
  ./target/scala-2.10/mining-assembly-0.1.jar
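A back-of-the-envelope check suggests why 2048 MB was too small while 3072 MB fits this submit command. This sketch assumes Spark 1.x-era defaults (a 384 MB per-container YARN memory overhead and a 1024 MB yarn.scheduler.minimum-allocation-mb that requests are rounded up to) — verify these against your own cluster's configuration:

```python
# Rough memory accounting for the submit command above, under two
# ASSUMED defaults (not stated in the original post):
OVERHEAD_MB = 384    # assumed spark.yarn.*.memoryOverhead default
MIN_ALLOC_MB = 1024  # assumed yarn.scheduler.minimum-allocation-mb

def container_mb(requested_mb):
    """YARN rounds each container request (memory + overhead) up to
    the scheduler's minimum allocation."""
    needed = requested_mb + OVERHEAD_MB
    return ((needed + MIN_ALLOC_MB - 1) // MIN_ALLOC_MB) * MIN_ALLOC_MB

driver = container_mb(512)         # --driver-memory 512m (AM container)
executors = 2 * container_mb(512)  # --num-executors 2, 512m each
total = driver + executors

print(driver, executors, total)  # 1024 2048 3072
```

Under these assumptions each 512m request becomes a 1024 MB container, so the driver plus two executors need 3072 MB in total — more than the old 2048 MB NodeManager limit (which left executors unscheduled, matching the "Initial job has not accepted any resources" warning), and exactly what the raised 3072 MB limit provides.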

Regarding "apache-spark - Cannot submit Spark application to the cluster, stuck at "UNDEFINED"", a similar question was found on Stack Overflow: https://stackoverflow.com/questions/26883701/
