amazon-web-services - Submitting a Spark application via AWS [EMR]


Hello, I am very new to cloud computing, so I apologize in advance if this is a silly question. I need help figuring out whether what I am doing actually computes on the cluster or only on the master node (which would be useless).

What I can do: using the AWS console I can set up a cluster with a certain number of nodes, with Spark installed on all of them, and I can connect to the master node via SSH. So what is needed to run my jar with the Spark code on the cluster?

What I would do: I would call spark-submit to run my code:

spark-submit --class cc.Main /home/ubuntu/MySparkCode.jar 3 [arguments] 

My doubts:

  1. Do I need to specify the master with --master together with the spark:// reference to the master? Where can I find that reference? Should I run the script in sbin/start-master.sh to start the standalone cluster manager, or is it already set up? If I run the command above as-is, I imagine the code only runs locally on the master, right?

  2. Can I keep the input file on the master node only? Say I want to count the words of a huge text file: can I keep it just on the master's disk, or do I need distributed storage like HDFS to preserve parallelism? I don't understand this point; I would keep it on the master node's disk if that works.

Thank you very much for your replies.

UPDATE 1: I tried running the Pi example on the cluster, but I cannot get the result.

$ sudo spark-submit   --class org.apache.spark.examples.SparkPi   --master yarn   --deploy-mode cluster   /usr/lib/spark/examples/jars/spark-examples.jar   10

I expected to get a printed line like Pi is around 3.14... but instead I get:

17/04/15 13:16:01 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
17/04/15 13:16:03 INFO RMProxy: Connecting to ResourceManager at ip-172-31-37-222.us-west-2.compute.internal/172.31.37.222:8032
17/04/15 13:16:03 INFO Client: Requesting a new application from cluster with 2 NodeManagers
17/04/15 13:16:03 INFO Client: Verifying our application has not requested more than the maximum memory capability of the cluster (5120 MB per container)
17/04/15 13:16:03 INFO Client: Will allocate AM container, with 5120 MB memory including 465 MB overhead
17/04/15 13:16:03 INFO Client: Setting up container launch context for our AM
17/04/15 13:16:03 INFO Client: Setting up the launch environment for our AM container
17/04/15 13:16:03 INFO Client: Preparing resources for our AM container
17/04/15 13:16:06 WARN Client: Neither spark.yarn.jars nor spark.yarn.archive is set, falling back to uploading libraries under SPARK_HOME.
17/04/15 13:16:10 INFO Client: Uploading resource file:/mnt/tmp/spark-aa757ca0-4ff7-460c-8bee-27bc8c8dada9/__spark_libs__5838015067814081789.zip -> hdfs://ip-172-31-37-222.us-west-2.compute.internal:8020/user/root/.sparkStaging/application_1492261407069_0007/__spark_libs__5838015067814081789.zip
17/04/15 13:16:12 INFO Client: Uploading resource file:/usr/lib/spark/examples/jars/spark-examples.jar -> hdfs://ip-172-31-37-222.us-west-2.compute.internal:8020/user/root/.sparkStaging/application_1492261407069_0007/spark-examples.jar
17/04/15 13:16:12 INFO Client: Uploading resource file:/mnt/tmp/spark-aa757ca0-4ff7-460c-8bee-27bc8c8dada9/__spark_conf__1370316719712336297.zip -> hdfs://ip-172-31-37-222.us-west-2.compute.internal:8020/user/root/.sparkStaging/application_1492261407069_0007/__spark_conf__.zip
17/04/15 13:16:13 INFO SecurityManager: Changing view acls to: root
17/04/15 13:16:13 INFO SecurityManager: Changing modify acls to: root
17/04/15 13:16:13 INFO SecurityManager: Changing view acls groups to:
17/04/15 13:16:13 INFO SecurityManager: Changing modify acls groups to:
17/04/15 13:16:13 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(root); groups with view permissions: Set(); users with modify permissions: Set(root); groups with modify permissions: Set()

17/04/15 13:16:13 INFO Client: Submitting application application_1492261407069_0007 to ResourceManager
17/04/15 13:16:13 INFO YarnClientImpl: Submitted application application_1492261407069_0007
17/04/15 13:16:14 INFO Client: Application report for application_1492261407069_0007 (state: ACCEPTED)
17/04/15 13:16:14 INFO Client:
client token: N/A
diagnostics: N/A
ApplicationMaster host: N/A
ApplicationMaster RPC port: -1
queue: default
start time: 1492262173096
final status: UNDEFINED
tracking URL: http://ip-172-31-37-222.us-west-2.compute.internal:20888/proxy/application_1492261407069_0007/
user: root
17/04/15 13:16:15 INFO Client: Application report for application_1492261407069_0007 (state: ACCEPTED)
17/04/15 13:16:24 INFO Client: Application report for application_1492261407069_0007 (state: ACCEPTED)
17/04/15 13:16:25 INFO Client: Application report for application_1492261407069_0007 (state: RUNNING)
17/04/15 13:16:25 INFO Client:
client token: N/A
diagnostics: N/A
ApplicationMaster host: 172.31.33.215
ApplicationMaster RPC port: 0
queue: default
start time: 1492262173096
final status: UNDEFINED
tracking URL: http://ip-172-31-37-222.us-west-2.compute.internal:20888/proxy/application_1492261407069_0007/
user: root
17/04/15 13:16:26 INFO Client: Application report for application_1492261407069_0007 (state: RUNNING)
17/04/15 13:16:55 INFO Client: Application report for application_1492261407069_0007 (state: RUNNING)
17/04/15 13:16:56 INFO Client: Application report for application_1492261407069_0007 (state: FINISHED)
17/04/15 13:16:56 INFO Client:
client token: N/A
diagnostics: N/A
ApplicationMaster host: 172.31.33.215
ApplicationMaster RPC port: 0
queue: default
start time: 1492262173096
final status: SUCCEEDED
tracking URL: http://ip-172-31-37-222.us-west-2.compute.internal:20888/proxy/application_1492261407069_0007/
user: root
17/04/15 13:16:56 INFO ShutdownHookManager: Shutdown hook called
17/04/15 13:16:56 INFO ShutdownHookManager: Deleting directory /mnt/tmp/spark-aa757ca0-4ff7-460c-8bee-27bc8c8dada9

Best Answer

Answer to the first doubt:

I assume you want to run Spark on YARN. You just need to pass --master yarn --deploy-mode cluster, and the Spark driver will run inside an application master process managed by YARN on the cluster:

spark-submit --master yarn  --deploy-mode cluster \
--class cc.Main /home/ubuntu/MySparkCode.jar 3 [arguments]

See the Spark documentation on submitting applications for the other modes.
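
(On the spark:// part of doubt 1: that URL form only applies to Spark's standalone cluster manager, which you would start yourself with sbin/start-master.sh; the resulting URL, typically spark://<master-host>:7077, is printed in the master's log and shown on its web UI. On EMR, YARN already plays that role, so none of this is needed.)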

When you run a job with --deploy-mode cluster, you will not see the output on the machine you submit from (if you are printing something).

Reason: you are running the job in cluster mode, so the driver (the YARN application master) runs on one of the nodes in the cluster, and the output is emitted on that machine.

To check the output, you can find it in the application logs with the following command:

yarn logs -applicationId application_id
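
Note that yarn logs reads aggregated logs, so it generally works only after the application has finished and only if log aggregation is enabled in YARN (the yarn.log-aggregation-enable property); otherwise you have to inspect the container logs on the individual nodes.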

Answer to the second doubt:

You can keep your input files anywhere (on the master node or in HDFS).

Parallelism depends entirely on the number of partitions of the RDD/DataFrame created when the data is loaded. The number of partitions depends on the data size, but you can control it by passing an argument when loading the data.

If you load the data from the master's local disk:

val rdd = sc.textFile("/home/ubuntu/input.txt", [number of partitions])

The RDD will be created with the number of partitions you pass. If you do not pass a partition count, Spark falls back to a default derived from spark.default.parallelism in the Spark conf.
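
As a concrete illustration, here is a minimal sketch, assuming an existing SparkContext sc (e.g. in spark-shell on the master); the path and the counts 8 and 16 are made up for the example:

// Load the file with an explicit minimum partition count.
// Caveat: with a plain local path, every executor must be able to read
// the same path, which is why HDFS/S3 is the usual choice on a cluster.
val rdd = sc.textFile("/home/ubuntu/input.txt", 8)

// Inspect how many partitions were actually created.
println(rdd.getNumPartitions)

// Rebalance later if the initial split is too coarse or too fine.
val rebalanced = rdd.repartition(16)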

If you load the data from HDFS:

val rdd = sc.textFile("hdfs://namenode:8020/data/input.txt")

The RDD will be created with as many partitions as the file has blocks in HDFS.
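
For instance, the word count from the question would look roughly like this (a sketch reusing the hdfs:// URL above; the output path is a made-up placeholder):

// Word count over a file in HDFS: each HDFS block becomes a partition,
// and the partitions are processed in parallel by the executors,
// not only on the master.
val counts = sc.textFile("hdfs://namenode:8020/data/input.txt")
  .flatMap(_.split("\\s+"))
  .filter(_.nonEmpty)
  .map(word => (word, 1))
  .reduceByKey(_ + _)

counts.saveAsTextFile("hdfs://namenode:8020/data/wordcount-output")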

I hope my answer helps you.

Regarding amazon-web-services - Submitting a Spark application via AWS [EMR], a similar question was found on Stack Overflow: https://stackoverflow.com/questions/43424540/
