apache-spark - Initial job has not accepted any resources


My question is similar to other posts reporting "Initial job has not accepted any resources". I have read their suggestions but still cannot submit a job from Java. I am wondering whether someone with more Spark installation experience can spot an obvious mistake or knows how to resolve this?

Spark: check your cluster UI to ensure that workers are registered.

My configuration is as follows: (Fedora VM) MASTER: version 2.0.2, pre-built for Hadoop. WORKER: a single instance.

(Host / Windows Java application) The client is a sample JavaApp, configured with:

conf.set("spark.cores.max","1");
conf.set("spark.shuffle.service.enabled", "false");
conf.set("spark.dynamicAllocation.enabled", "false");

Attached is a snapshot of the Spark UI. As far as I can tell, my job has been received, submitted, and is running. I do not appear to be over-committing CPU or RAM.

[Screenshot: Spark UI]

The Java (client) console reports:

12:15:47.816 DEBUG parentName: , name: TaskSet_0, runningTasks: 0
12:15:48.815 DEBUG parentName: , name: TaskSet_0, runningTasks: 0
12:15:49.806 WARN Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient resources
12:15:49.816 DEBUG parentName: , name: TaskSet_0, runningTasks: 0
12:15:50.816 DEBUG parentName: , name: TaskSet_0, runningTasks: 0

The Spark worker log reports:

16/11/22 12:16:34 INFO Worker: Asked to launch executor app-20161122121634-0012/0 for Simple Application
16/11/22 12:16:34 INFO SecurityManager: Changing modify acls groups to:
16/11/22 12:16:34 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(john); groups with view permissions: Set(); users with modify permissions: Set(john); groups with modify permissions: Set()
16/11/22 12:16:34 INFO ExecutorRunner: Launch command: "/apps/jdk1.8.0_101/jre/bin/java" "-cp " "/apps/spark-2.0.2-bin-hadoop2.7/conf/:/apps/spark-2.0.2-bin-hadoop2.7/jars/*" "-Xmx1024M" "-Dspark.driver.port=29015" "org.apache.spark.executor.CoarseGrainedExecutorBackend" "--driver-url" "spark://CoarseGrainedScheduler@192.168.56.1:29015" "--executor-id" "0" "--hostname" "192.168.56.103" "--cores" "1" "--app-id" "app-20161122121634-0012" "--worker-url" "spark://Worker@192.168.56.103:38701"


Best Answer

Do you have a firewall blocking communication? As I said in another answer:

Apache Spark on Mesos: Initial job has not accepted any resources:

While most of the other answers focus on resource allocation (cores, memory) on the Spark slaves, I would like to highlight that a firewall can cause exactly the same issue, especially when you are running Spark on cloud platforms.

If you can see your Spark slaves in the web UI, you have probably opened the standard ports 8080, 8081, 7077 and 4040. Nonetheless, when you actually run a job, it uses SPARK_WORKER_PORT, spark.driver.port and spark.blockManager.port, which by default are randomly assigned. If your firewall is blocking these ports, the master cannot retrieve any job-specific response from the slaves and returns the error.
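
One practical mitigation, sketched below (the port numbers are assumptions; use any free ports you can open in the firewall), is to pin the normally random driver-side ports to fixed values, and likewise pin SPARK_WORKER_PORT on each worker via conf/spark-env.sh:

// Pin the normally random driver-side ports so the firewall only needs
// two fixed holes instead of the whole ephemeral range.
conf.set("spark.driver.port", "29015");        // RPC endpoint the executors connect back to
conf.set("spark.blockManager.port", "38000");  // port used for block (data) transfers

With these fixed, opening just those ports between the driver and the slaves should be enough for the job to proceed.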

You can run a quick test by opening all the ports and seeing whether the slave accepts jobs.
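
Before touching firewall rules, a quick reachability probe from the slave machine can confirm whether the driver's port is open at all. A minimal sketch (host and port are taken from the driver URL in the worker log above):

import java.net.InetSocketAddress;
import java.net.Socket;

public class PortCheck {
    public static void main(String[] args) throws Exception {
        // Driver URL from the worker log: spark://CoarseGrainedScheduler@192.168.56.1:29015
        try (Socket socket = new Socket()) {
            socket.connect(new InetSocketAddress("192.168.56.1", 29015), 3000); // 3 s timeout
            System.out.println("Driver port reachable");
        }
    }
}

If this times out when run on the worker VM, the firewall on the Windows host is the likely culprit, since the executor has to connect back to the driver at exactly that address.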

Regarding apache-spark - Initial job has not accepted any resources, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/40748204/
