
apache-spark - How do I pass a javaagent to an EMR Spark application?


I am trying to profile my Spark application (Spark 2.4, running on EMR 5.21) with the Uber JVM Profiler.

Below is my cluster configuration:

    [
      {
        "classification": "spark-defaults",
        "properties": {
          "spark.executor.memory": "38300M",
          "spark.driver.memory": "38300M",
          "spark.yarn.scheduler.reporterThread.maxFailures": "5",
          "spark.driver.cores": "5",
          "spark.yarn.driver.memoryOverhead": "4255M",
          "spark.executor.heartbeatInterval": "60s",
          "spark.rdd.compress": "true",
          "spark.network.timeout": "800s",
          "spark.executor.cores": "5",
          "spark.memory.storageFraction": "0.27",
          "spark.speculation": "true",
          "spark.sql.shuffle.partitions": "200",
          "spark.shuffle.spill.compress": "true",
          "spark.shuffle.compress": "true",
          "spark.storage.level": "MEMORY_AND_DISK_SER",
          "spark.default.parallelism": "200",
          "spark.serializer": "org.apache.spark.serializer.KryoSerializer",
          "spark.memory.fraction": "0.80",
          "spark.executor.extraJavaOptions": "-XX:+UseG1GC -XX:InitiatingHeapOccupancyPercent=35 -XX:OnOutOfMemoryError='kill -9 %p'",
          "spark.executor.instances": "107",
          "spark.yarn.executor.memoryOverhead": "4255M",
          "spark.dynamicAllocation.enabled": "false",
          "spark.driver.extraJavaOptions": "-XX:+UseG1GC -XX:InitiatingHeapOccupancyPercent=35 -XX:OnOutOfMemoryError='kill -9 %p'"
        },
        "configurations": []
      },
      {
        "classification": "yarn-site",
        "properties": {
          "yarn.log-aggregation-enable": "true",
          "yarn.nodemanager.pmem-check-enabled": "false",
          "yarn.nodemanager.vmem-check-enabled": "false"
        },
        "configurations": []
      },
      {
        "classification": "spark",
        "properties": {
          "maximizeResourceAllocation": "true",
          "spark.sql.broadcastTimeout": "-1"
        },
        "configurations": []
      },
      {
        "classification": "emrfs-site",
        "properties": {
          "fs.s3.threadpool.size": "50",
          "fs.s3.maxConnections": "5000"
        },
        "configurations": []
      },
      {
        "classification": "core-site",
        "properties": {
          "fs.s3.threadpool.size": "50",
          "fs.s3.maxConnections": "5000"
        },
        "configurations": []
      }
    ]

The profiler jar is stored in S3 ( mybucket/profilers/jvm-profiler-1.0.0.jar ). While bootstrapping my core and master nodes, I run the following bootstrap script:
    sudo mkdir -p /tmp
    aws s3 cp s3://mybucket/profilers/jvm-profiler-1.0.0.jar /tmp/
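For completeness, both the classification JSON and this script are wired in when the cluster is created. A minimal sketch of the AWS CLI call, assuming the JSON above is saved locally as configurations.json and the two-line script is uploaded to s3://mybucket/bootstrap/copy-profiler.sh (the file names, instance settings, and roles here are placeholders, not my real values):

    # Sketch: create the EMR 5.21 cluster with the classifications above and
    # the bootstrap script that stages the profiler jar on every node.
    # configurations.json and the bootstrap script path are illustrative.
    aws emr create-cluster \
      --name "profiled-spark-app" \
      --release-label emr-5.21.0 \
      --applications Name=Spark \
      --configurations file://configurations.json \
      --bootstrap-actions Name="CopyProfilerJar",Path="s3://mybucket/bootstrap/copy-profiler.sh" \
      --instance-type r4.4xlarge \
      --instance-count 10 \
      --use-default-roles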

I submit my EMR step as follows:
    spark-submit --deploy-mode cluster --master=yarn ......(other parameters)......... \
      --conf spark.jars=/tmp/jvm-profiler-1.0.0.jar \
      --conf spark.driver.extraJavaOptions=-javaagent:jvm-profiler-1.0.0.jar=reporter=com.uber.profiling.reporters.ConsoleOutputReporter,metricInterval=5000 \
      --conf spark.executor.extraJavaOptions=-javaagent:jvm-profiler-1.0.0.jar=reporter=com.uber.profiling.reporters.ConsoleOutputReporter,metricInterval=5000

However, I cannot see any profiling-related output in the logs (I checked the stdout and stderr logs of all containers). Are the arguments being ignored? Am I missing something? Is there anything else I can check to see why these arguments are ignored?
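Since yarn.log-aggregation-enable is true in my yarn-site classification, one way I can scan every container at once is to pull the aggregated logs after the application finishes. A minimal sketch, with a placeholder application id:

    # Sketch: fetch aggregated logs for the finished application and look
    # for the profiler's reporter output; the application id is a placeholder.
    yarn logs -applicationId application_1234567890123_0001 \
      | grep -i "ConsoleOutputReporter"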

Best Answer

I have not used the Uber JVM Profiler, but I think that to add extra jars in spark-submit you should use the --jars option. When working with EMR, you can add them directly from an S3 bucket.

Also, at bootstrap time you are copying the jar jvm-profiler-1.0.0.jar into the /tmp folder, but when you set the Java options you are not including that path. Try this:

    spark-submit --deploy-mode cluster \
      --master=yarn \
      --conf "spark.driver.extraJavaOptions=-javaagent:/tmp/jvm-profiler-1.0.0.jar=reporter=com.uber.profiling.reporters.ConsoleOutputReporter,metricInterval=5000" \
      --conf "spark.executor.extraJavaOptions=-javaagent:/tmp/jvm-profiler-1.0.0.jar=reporter=com.uber.profiling.reporters.ConsoleOutputReporter,metricInterval=5000" \
      --jars "/tmp/jvm-profiler-1.0.0.jar" \
      --<other params>

Regarding "apache-spark - How do I pass a javaagent to an EMR Spark application?", a similar question was found on Stack Overflow: https://stackoverflow.com/questions/59233394/
