
apache-spark - How do you override the Spark Java heap size?


We run our Spark drivers and executors in Docker containers, orchestrated by Kubernetes. We would like to be able to set their Java heap size at runtime through the Kubernetes Controller YAML. What Spark configuration has to be set to make this possible? If I do nothing and look at the launched process with ps -ef, I see:

root       639   638  0 00:16 ?        00:00:23 /opt/ibm/java/jre/bin/java -cp /opt/ibm/spark/conf/:/opt/ibm/spark/lib/spark-assembly-1.5.2-hadoop2.6.0.jar:/opt/ibm/spark/lib/datanucleus-api-jdo-3.2.6.jar:/opt/ibm/spark/lib/datanucleus-core-3.2.10.jar:/opt/ibm/spark/lib/datanucleus-rdbms-3.2.9.jar:/opt/ibm/hadoop/conf/ -Dspark.deploy.recoveryMode=ZOOKEEPER -Dspark.deploy.zookeeper.url=172.17.48.29:2181,172.17.231.2:2181,172.17.47.17:2181 -Dspark.deploy.recoveryMode=ZOOKEEPER -Dspark.deploy.zookeeper.url=172.17.48.29:2181,172.17.231.2:2181,172.17.47.17:2181 -Dcom.ibm.apm.spark.logfilename=master.log -Dspark.deploy.defaultCores=2 **-Xms1g -Xmx1g** org.apache.spark.deploy.master.Master --ip sparkmaster-1 --port 7077 --webui-port 18080

with the -Xms and -Xmx options being set. I tried setting SPARK_DAEMON_JAVA_OPTS="-Xms1G -Xmx2G" in spark-env.sh and got:
root      2919  2917  2 19:16 ?        00:00:15 /opt/ibm/java/jre/bin/java -cp /opt/ibm/spark/conf/:/opt/ibm/spark/lib/spark-assembly-1.5.2-hadoop2.6.0.jar:/opt/ibm/spark/lib/datanucleus-api-jdo-3.2.6.jar:/opt/ibm/spark/lib/datanucleus-core-3.2.10.jar:/opt/ibm/spark/lib/datanucleus-rdbms-3.2.9.jar:/opt/ibm/hadoop/conf/ -Dspark.deploy.recoveryMode=ZOOKEEPER -Dspark.deploy.zookeeper.url=172.17.48.29:2181,172.17.231.2:2181,172.17.47.17:2181 **-Xms1G -Xmx2G** -Dspark.deploy.recoveryMode=ZOOKEEPER -Dspark.deploy.zookeeper.url=172.17.48.29:2181,172.17.231.2:2181,172.17.47.17:2181 **-Xms1G -Xmx2G** -Dcom.ibm.apm.spark.logfilename=master.log -Dspark.deploy.defaultCores=2 **-Xms1g -Xmx1g** org.apache.spark.deploy.master.Master --ip sparkmaster-1 --port 7077 --webui-port 18080
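For reference, the spark-env.sh line behind that run would look roughly like the sketch below; it is reconstructed from the values visible in the process listing, not copied from the actual file:

export SPARK_DAEMON_JAVA_OPTS="-Xms1G -Xmx2G"

Note that the extra options end up inserted ahead of the original -Xms1g -Xmx1g on the command line, and since the JVM normally honors the last occurrence of a repeated -Xms/-Xmx flag, the trailing 1g values would still win.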

A friend suggested setting

spark.driver.memory 2g

in spark-defaults.conf, but the result looked like the first example. Maybe the values shown by the ps -ef command are being overridden by this setting, but how would I know? If spark.driver.memory is the right override, can you set both the heap minimum and maximum this way, or only the maximum?
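(For what it's worth, spark.driver.memory normally applies to the driver JVM rather than to the standalone master/worker daemons shown above; the equivalent command-line form would be something like the following, an illustrative invocation rather than one from our setup:)

spark-submit --driver-memory 2g <your usual application arguments>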

Thanks in advance.

Best Answer

Setting the SPARK_DAEMON_MEMORY environment variable in conf/spark-env.sh should do the trick:

SPARK_DAEMON_MEMORY Memory to allocate to the Spark master and worker daemons themselves (default: 1g).
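A minimal spark-env.sh sketch along those lines, assuming the value is injected as a container environment variable from the Kubernetes Controller YAML (the 2g fallback and the idea of passing the value through the pod environment are illustrative assumptions, not something stated in the documentation quote above):

# conf/spark-env.sh
# Use the SPARK_DAEMON_MEMORY value provided by the container environment
# (e.g. set in the Kubernetes Controller YAML); fall back to 2g if it is unset.
export SPARK_DAEMON_MEMORY=${SPARK_DAEMON_MEMORY:-2g}

With this in place, the master and worker daemons should be launched with -Xms/-Xmx matching SPARK_DAEMON_MEMORY rather than the default 1g, which you can verify again with ps -ef.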

Regarding "apache-spark - How do you override the Spark Java heap size?", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/39541944/
