
java - Spark application output not found

Reposted. Author: 塔克拉玛干. Updated: 2023-11-03 01:23:31

I have a cluster that starts successfully, or at least that is what the web UI shows, where I see the following:

URL: spark://Name25:7077
REST URL: spark://Name25:6066 (cluster mode)
Alive Workers: 10
Cores in use: 192 Total, 0 Used
Memory in use: 364.0 GB Total, 0.0 B Used
Applications: 0 Running, 5 Completed
Drivers: 0 Running, 5 Completed
Status: ALIVE

If I run my application with this submit command:

./bin/spark-submit --class myapp.Main --master spark://Name25:7077 --deploy-mode cluster /home/lookupjar/myapp-0.0.1-SNAPSHOT.jar /home/etud500.csv  /home/

I get this message:

Running Spark using the REST application submission protocol.
Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
16/08/31 15:55:16 INFO RestSubmissionClient: Submitting a request to launch an application in spark://Name25:7077.
16/08/31 15:55:27 WARN RestSubmissionClient: Unable to connect to server spark://Name25:7077.
Warning: Master endpoint spark://Name25:7077 was not a REST server. Falling back to legacy submission gateway instead.
16/08/31 15:55:28 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable

If I run it this way instead:

./bin/spark-submit --class myapp.Main --master spark://Name25:6066 --deploy-mode cluster /home/lookupjar/myapp-0.0.1-SNAPSHOT.jar /home//etud500.csv  /home/result

I get this message:

Running Spark using the REST application submission protocol.
Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
16/08/31 16:59:06 INFO RestSubmissionClient: Submitting a request to launch an application in spark://Name25:6066.
16/08/31 16:59:06 INFO RestSubmissionClient: Submission successfully created as driver-20160831165906-0004. Polling submission state...
16/08/31 16:59:06 INFO RestSubmissionClient: Submitting a request for the status of submission driver-20160831165906-0004 in spark://Name25:6066.
16/08/31 16:59:06 INFO RestSubmissionClient: State of driver driver-20160831165906-0004 is now RUNNING.
16/08/31 16:59:06 INFO RestSubmissionClient: Driver is running on worker worker-20160831143117-10.0.10.48-38917 at 10.0.10.48:38917.
16/08/31 16:59:06 INFO RestSubmissionClient: Server responded with CreateSubmissionResponse:
{
  "action" : "CreateSubmissionResponse",
  "message" : "Driver successfully submitted as driver-20160831165906-0004",
  "serverSparkVersion" : "2.0.0",
  "submissionId" : "driver-20160831165906-0004",
  "success" : true
}

I believe this succeeded, but my application should write 3 outputs under the given path (/home/result), because my code contains:

String path = args[1];               // second program argument, e.g. /home/result
rdd1.saveAsTextFile(path + "/rdd1");
rdd2.saveAsTextFile(path + "/rdd2");
rdd3.saveAsTextFile(path + "/rdd3");
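The argument handling above can be sketched as a small self-contained example: given the submit command from the question, args[0] is the input CSV and args[1] is the output directory. The class and method names below are illustrative, not from the original application:

```java
// Minimal sketch of the path handling assumed in the code above.
// PathDemo and outputPath are illustrative names, not from the original app.
public class PathDemo {
    // Build the output directory for one RDD under the path given as args[1]
    static String outputPath(String[] args, String name) {
        String path = args[1];     // e.g. /home/result from the submit command
        return path + "/" + name;  // e.g. /home/result/rdd1
    }

    public static void main(String[] args) {
        // The two program arguments as passed in the question's submit command
        String[] submitArgs = {"/home/etud500.csv", "/home/result"};
        System.out.println(outputPath(submitArgs, "rdd1")); // /home/result/rdd1
        System.out.println(outputPath(submitArgs, "rdd2")); // /home/result/rdd2
        System.out.println(outputPath(submitArgs, "rdd3")); // /home/result/rdd3
    }
}
```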

Question 1: Why does it make me use "spark://Name25:6066" rather than "spark://Name25:7077"? According to the Spark website we are supposed to use :7077.

Question 2: If it reports that the submission succeeded and the application completed, why can't I find the 3 output folders?

Best Answer

Submitting through port 6066 does not mean your job completed successfully. It only sends the launch request; the job then runs in the background. You have to check the Spark UI to see whether the job has completed.
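Besides the web UI, the same REST gateway on port 6066 exposes a status endpoint you can poll with the submissionId from the CreateSubmissionResponse shown earlier. A sketch, assuming the cluster from the question (the curl line is commented out because it needs the live master):

```shell
# Pull the submissionId out of the CreateSubmissionResponse JSON that
# spark-submit printed, then ask the REST gateway for the driver's state.
RESPONSE='{"action":"CreateSubmissionResponse","submissionId":"driver-20160831165906-0004","success":true}'
SUBMISSION_ID=$(echo "$RESPONSE" | python3 -c 'import sys, json; print(json.load(sys.stdin)["submissionId"])')
echo "$SUBMISSION_ID"
# Query the status endpoint (requires the live master from the question):
# curl "http://Name25:6066/v1/submissions/status/$SUBMISSION_ID"
```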

If the job has finished and produced output files, you can check for them with:

hdfs dfs -ls <path>/rdd1

(The older "hadoop dfs" form of this command is deprecated in favor of "hdfs dfs".)

Regarding "java - Spark application output not found", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/39254665/
