
hadoop - Apache Pig: java.lang.OutOfMemoryError: Requested array size exceeds VM limit


I am running Pig 15 and trying to do a GROUP BY on some data, and the job fails with a Requested array size exceeds VM limit error. The input files are quite small: the job needs only 10 mappers, each with 2.5 GB, so it should run without memory errors.

Below is a snippet of what I am doing:

sample_set = LOAD 's3n://<bucket>/<dev_dir>/000*-part.gz' USING PigStorage(',') AS (col1:chararray,col2:chararray..col23:chararray);
sample_set_group_by_col1 = GROUP sample_set BY col1;
sample_set_group_by_col1_10 = LIMIT sample_set_group_by_col1 10;
DUMP sample_set_group_by_col1_10;

The job fails with the following error:

2016-08-08 14:28:59,622 FATAL [main] org.apache.hadoop.mapred.YarnChild: Error running child : java.lang.OutOfMemoryError: Requested array size exceeds VM limit
at java.util.Arrays.copyOf(Arrays.java:2271)
at java.io.ByteArrayOutputStream.grow(ByteArrayOutputStream.java:113)
at java.io.ByteArrayOutputStream.ensureCapacity(ByteArrayOutputStream.java:93)
at java.io.ByteArrayOutputStream.write(ByteArrayOutputStream.java:140)
at java.io.DataOutputStream.write(DataOutputStream.java:107)
at java.io.DataOutputStream.writeUTF(DataOutputStream.java:401)
at java.io.DataOutputStream.writeUTF(DataOutputStream.java:323)
at org.apache.pig.data.utils.SedesHelper.writeChararray(SedesHelper.java:66)
at org.apache.pig.data.BinInterSedes.writeDatum(BinInterSedes.java:580)
at org.apache.pig.data.BinInterSedes.writeDatum(BinInterSedes.java:462)
at org.apache.pig.data.utils.SedesHelper.writeGenericTuple(SedesHelper.java:135)
at org.apache.pig.data.BinInterSedes.writeTuple(BinInterSedes.java:650)
at org.apache.pig.data.BinInterSedes.writeBag(BinInterSedes.java:641)
at org.apache.pig.data.BinInterSedes.writeDatum(BinInterSedes.java:474)
at org.apache.pig.data.BinInterSedes.writeDatum(BinInterSedes.java:462)
at org.apache.pig.data.utils.SedesHelper.writeGenericTuple(SedesHelper.java:135)
at org.apache.pig.data.BinInterSedes.writeTuple(BinInterSedes.java:650)
at org.apache.pig.data.BinInterSedes.writeDatum(BinInterSedes.java:470)
at org.apache.pig.data.BinSedesTuple.write(BinSedesTuple.java:40)
at org.apache.pig.impl.io.PigNullableWritable.write(PigNullableWritable.java:139)
at org.apache.hadoop.io.serializer.WritableSerialization$WritableSerializer.serialize(WritableSerialization.java:98)
at org.apache.hadoop.io.serializer.WritableSerialization$WritableSerializer.serialize(WritableSerialization.java:82)
at org.apache.hadoop.mapred.IFile$Writer.append(IFile.java:198)
at org.apache.hadoop.mapred.MapTask$MapOutputBuffer.spillSingleRecord(MapTask.java:1696)
at org.apache.hadoop.mapred.MapTask$MapOutputBuffer.collect(MapTask.java:1180)
at org.apache.hadoop.mapred.MapTask$NewOutputCollector.write(MapTask.java:712)
at org.apache.hadoop.mapreduce.task.TaskInputOutputContextImpl.write(TaskInputOutputContextImpl.java:89)
at org.apache.hadoop.mapreduce.lib.map.WrappedMapper$Context.write(WrappedMapper.java:112)
at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigGenericMapReduce$Map.collect(PigGenericMapReduce.java:135)
at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigGenericMapBase.runPipeline(PigGenericMapBase.java:281)
at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigGenericMapBase.map(PigGenericMapBase.java:274)
at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigGenericMapBase.map(PigGenericMapBase.java:64)

Has anyone encountered this error before? If so, what is the solution?

Best Answer

Judging from the error, the output of the GROUP BY statement appears to be very large. The huge record being created does not fit in the currently available Java heap space. By default, Hadoop mapper and reducer tasks are allocated about 1 GB of heap each. Try increasing the memory by setting the following parameters when running this Pig script:

SET mapreduce.map.memory.mb 4096;
SET mapreduce.reduce.memory.mb 6144;

If it still fails, keep increasing these values through the same parameters.
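Note that mapreduce.map.memory.mb and mapreduce.reduce.memory.mb control the YARN container sizes, while the JVM heap inside each container is governed by mapreduce.map.java.opts and mapreduce.reduce.java.opts. A minimal sketch of setting both together is shown below; the -Xmx values are illustrative assumptions, kept somewhat below the container sizes to leave headroom for non-heap memory:

-- Container sizes (YARN) plus JVM heap sizes (illustrative values)
SET mapreduce.map.memory.mb 4096;
SET mapreduce.map.java.opts '-Xmx3276m';
SET mapreduce.reduce.memory.mb 6144;
SET mapreduce.reduce.java.opts '-Xmx4915m';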

Regarding hadoop - Apache Pig: java.lang.OutOfMemoryError: Requested array size exceeds VM limit, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/38852893/
