
java - Hadoop - MultipleOutputs.write - OutOfMemory - Java heap space


I am writing a Hadoop job that processes many files and creates several output files from each one. I am using MultipleOutputs to write them. It works fine for a smaller number of files, but with a large number of files I get the error below. The exception is thrown at MultipleOutputs.write(key, value, outputPath). I have tried increasing the ulimit and -Xmx, to no avail. (A minimal sketch of the write pattern follows the stack trace below.)

2013-01-15 13:44:05,154 FATAL org.apache.hadoop.mapred.Child: Error running child : java.lang.OutOfMemoryError: Java heap space
at org.apache.hadoop.hdfs.DFSOutputStream$Packet.<init>(DFSOutputStream.java:201)
at org.apache.hadoop.hdfs.DFSOutputStream.writeChunk(DFSOutputStream.java:1423)
at org.apache.hadoop.fs.FSOutputSummer.writeChecksumChunk(FSOutputSummer.java:161)
at org.apache.hadoop.fs.FSOutputSummer.flushBuffer(FSOutputSummer.java:136)
at org.apache.hadoop.fs.FSOutputSummer.flushBuffer(FSOutputSummer.java:125)
at org.apache.hadoop.fs.FSOutputSummer.write1(FSOutputSummer.java:116)
at org.apache.hadoop.fs.FSOutputSummer.write(FSOutputSummer.java:90)
at org.apache.hadoop.fs.FSDataOutputStream$PositionCache.write(FSDataOutputStream.java:54)
at java.io.DataOutputStream.write(DataOutputStream.java:90)
at org.apache.hadoop.mapreduce.lib.output.TextOutputFormat$LineRecordWriter.writeObject(TextOutputFormat.java:78)
at org.apache.hadoop.mapreduce.lib.output.TextOutputFormat$LineRecordWriter.write(TextOutputFormat.java:99)
at org.apache.hadoop.mapreduce.lib.output.MultipleOutputs.write(MultipleOutputs.java:386)
at com.demoapp.collector.MPReducer.reduce(MPReducer.java:298)
at com.demoapp.collector.MPReducer.reduce(MPReducer.java:28)
at org.apache.hadoop.mapreduce.Reducer.run(Reducer.java:164)
at org.apache.hadoop.mapred.ReduceTask.runNewReducer(ReduceTask.java:595)
at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:433)
at org.apache.hadoop.mapred.Child$4.run(Child.java:268)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:396)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1332)
at org.apache.hadoop.mapred.Child.main(Child.java:262)
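
For context, here is a minimal sketch of the reduce-side write pattern described above. The class name MPReducer comes from the stack trace, but everything else (Text key/value types, the one-file-per-key output path) is an assumption:

import java.io.IOException;

import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.output.MultipleOutputs;

// Hypothetical reducer illustrating the pattern in the question;
// the real key/value types and path logic may differ.
public class MPReducer extends Reducer<Text, Text, Text, Text> {

    private MultipleOutputs<Text, Text> multipleOutputs;

    @Override
    protected void setup(Context context) {
        multipleOutputs = new MultipleOutputs<Text, Text>(context);
    }

    @Override
    protected void reduce(Text key, Iterable<Text> values, Context context)
            throws IOException, InterruptedException {
        for (Text value : values) {
            // Each distinct base path opens its own writer, and each open
            // HDFS writer holds its own DFSOutputStream packet buffers.
            String outputPath = key.toString() + "/part";
            multipleOutputs.write(key, value, outputPath);
        }
    }

    @Override
    protected void cleanup(Context context)
            throws IOException, InterruptedException {
        // Closing MultipleOutputs flushes and releases all per-file writers.
        multipleOutputs.close();
    }
}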

Any ideas?

Best Answer

If it fails only with a large number of files, it is probably because you have hit the maximum number of files a datanode can serve at once. This is controlled by the dfs.datanode.max.xcievers property in hdfs-site.xml.

As recommended here, you should raise its value to something that lets your job run properly; the recommended value is 4096:

<property>
  <name>dfs.datanode.max.xcievers</name>
  <value>4096</value>
</property>
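
Note that this is a datanode-side setting: it belongs in hdfs-site.xml on each datanode and only takes effect after the datanodes are restarted, so setting it in the job configuration will not help. On Hadoop 2.x, the (misspelled) dfs.datanode.max.xcievers name is deprecated in favor of dfs.datanode.max.transfer.threads.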

Regarding java - Hadoop - MultipleOutputs.write - OutOfMemory - Java heap space, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/14347712/
