
hadoop - Java heap space in MapReduce


I am running a MapReduce job on a machine with 32 GB of RAM, but I am getting a Java heap space error. I set yarn.nodemanager.resource.memory-mb to 32 GB, hoping that would give me enough memory to run the tasks, but apparently it does not. How should I configure MapReduce v2 to fix this?

Edit:

16/08/30 19:00:49 INFO mapreduce.Job: Task Id : attempt_1472579604725_0003_m_000000_0, Status : FAILED
Error: Java heap space
16/08/30 19:00:55 INFO mapreduce.Job: Task Id : attempt_1472579604725_0003_m_000000_1, Status : FAILED
Error: Java heap space
16/08/30 19:01:00 INFO mapreduce.Job: Task Id : attempt_1472579604725_0003_m_000000_2, Status : FAILED
Error: Java heap space

[2] mapred-site.xml
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<!-- Put site-specific property overrides in this file. -->
<configuration>
<property> <name>mapreduce.framework.name</name> <value>yarn</value> </property>
<property> <name>mapreduce.jobhistory.done-dir</name> <value>/root/Programs/hadoop/logs/history/done</value> </property>
<property> <name>mapreduce.jobhistory.intermediate-done-dir</name> <value>/root/Programs/hadoop/logs/history/intermediate-done-dir</value> </property>
<property> <name>mapreduce.job.reduces</name> <value>2</value> </property>

<!-- property> <name>yarn.nodemanager.resource.memory-mb</name> <value>10240</value> </property>
<property> <name>yarn.scheduler.minimum-allocation-mb</name> <value>1024</value> </property -->

<!-- property><name>mapreduce.task.files.preserve.failedtasks</name><value>true</value></property>
<property><name>mapreduce.task.files.preserve.filepattern</name><value>*</value></property -->
</configuration>

[3] yarn-site.xml
<configuration>
<property> <name>yarn.log-aggregation-enable</name> <value>true</value> </property>
<property> <name>yarn.nodemanager.aux-services</name> <value>mapreduce_shuffle</value> </property>
<property> <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name> <value>org.apache.hadoop.mapred.ShuffleHandler</value> </property>
<property> <name>yarn.resourcemanager.resource-tracker.address</name> <value>s8:8025</value> </property>
<property> <name>yarn.resourcemanager.scheduler.address</name> <value>s8:8030</value> </property>
<property> <name>yarn.resourcemanager.address</name> <value>s8:8032</value> </property>
<property> <name>yarn.log.server.url</name> <value>http://s8:19888/jobhistory/logs/</value> </property>

<!-- job history -->
<property> <name>yarn.log-aggregation-enable</name> <value>true</value> </property>
<property> <name>yarn.nodemanager.log.retain-seconds</name> <value>900000</value> </property>
<property> <name>yarn.nodemanager.remote-app-log-dir</name> <value>/app-logs</value> </property>

<!-- proxy -->
<property><name>yarn.web-proxy.address</name><value>s8:9046</value></property>

<!-- to check the classpath in yarn, do yarn classpath -->
<!-- compress output data -->
<property><name>mapreduce.output.fileoutputformat.compress</name><value>false</value></property>
<property><name>mapreduce.output.fileoutputformat.compress.codec</name><value>org.apache.hadoop.io.compress.BZip2Codec</value></property>

<!-- Node configuration -->
<property> <name>yarn.nodemanager.resource.memory-mb</name> <value>33554432</value> </property>
</configuration>

Best Answer

The parameter yarn.nodemanager.resource.memory-mb only tells YARN how much memory the node has available (repeated from the comments).
If you want your MapReduce tasks to actually use those resources, you should set the following parameters:

mapreduce.map.memory.mb

mapreduce.map.java.opts

mapreduce.reduce.memory.mb

mapreduce.reduce.java.opts


Just make sure the java.opts heap sizes are 10-20% smaller than the corresponding memory.mb values, so the JVM heap plus its off-heap overhead fits inside the container.
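For example, a minimal mapred-site.xml sketch following that rule (the 4096 MB container size and the -Xmx3276m heap are illustrative assumptions, not values from the original post; size them to your actual workload):

<property> <name>mapreduce.map.memory.mb</name> <value>4096</value> </property>
<property> <name>mapreduce.map.java.opts</name> <value>-Xmx3276m</value> </property>
<property> <name>mapreduce.reduce.memory.mb</name> <value>4096</value> </property>
<property> <name>mapreduce.reduce.java.opts</name> <value>-Xmx3276m</value> </property>

The same properties can also be overridden per job on the command line, assuming the driver parses generic options via ToolRunner (myjob.jar and MyDriver are hypothetical names):

hadoop jar myjob.jar MyDriver -Dmapreduce.map.memory.mb=4096 -Dmapreduce.map.java.opts=-Xmx3276m input output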

Regarding hadoop - Java heap space in MapReduce, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/39233638/
