
Hadoop: Intermediate merge failed


I'm running into a strange issue. When I run my Hadoop job over a large dataset (>1 TB of compressed text files), several of the reduce tasks fail with stack traces like these:

java.io.IOException: Task: attempt_201104061411_0002_r_000044_0 - The reduce copier failed
at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:385)
at org.apache.hadoop.mapred.Child$4.run(Child.java:240)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:396)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1115)
at org.apache.hadoop.mapred.Child.main(Child.java:234)
Caused by: java.io.IOException: Intermediate merge failed
at org.apache.hadoop.mapred.ReduceTask$ReduceCopier$InMemFSMergeThread.doInMemMerge(ReduceTask.java:2714)
at org.apache.hadoop.mapred.ReduceTask$ReduceCopier$InMemFSMergeThread.run(ReduceTask.java:2639)
Caused by: java.lang.RuntimeException: java.io.EOFException
at org.apache.hadoop.io.WritableComparator.compare(WritableComparator.java:128)
at org.apache.hadoop.mapred.Merger$MergeQueue.lessThan(Merger.java:373)
at org.apache.hadoop.util.PriorityQueue.downHeap(PriorityQueue.java:139)
at org.apache.hadoop.util.PriorityQueue.adjustTop(PriorityQueue.java:103)
at org.apache.hadoop.mapred.Merger$MergeQueue.adjustPriorityQueue(Merger.java:335)
at org.apache.hadoop.mapred.Merger$MergeQueue.next(Merger.java:350)
at org.apache.hadoop.mapred.Merger.writeFile(Merger.java:156)
at org.apache.hadoop.mapred.ReduceTask$ReduceCopier$InMemFSMergeThread.doInMemMerge(ReduceTask.java:2698)
... 1 more
Caused by: java.io.EOFException
at java.io.DataInputStream.readInt(DataInputStream.java:375)
at com.__.hadoop.pixel.segments.IpCookieCountFilter$IpAndIpCookieCount.readFields(IpCookieCountFilter.java:241)
at org.apache.hadoop.io.WritableComparator.compare(WritableComparator.java:125)
... 8 more

java.io.IOException: Task: attempt_201104061411_0002_r_000056_0 - The reduce copier failed
at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:385)
at org.apache.hadoop.mapred.Child$4.run(Child.java:240)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:396)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1115)
at org.apache.hadoop.mapred.Child.main(Child.java:234)
Caused by: java.io.IOException: Intermediate merge failed
at org.apache.hadoop.mapred.ReduceTask$ReduceCopier$InMemFSMergeThread.doInMemMerge(ReduceTask.java:2714)
at org.apache.hadoop.mapred.ReduceTask$ReduceCopier$InMemFSMergeThread.run(ReduceTask.java:2639)
Caused by: java.lang.RuntimeException: java.io.EOFException
at org.apache.hadoop.io.WritableComparator.compare(WritableComparator.java:128)
at org.apache.hadoop.mapred.Merger$MergeQueue.lessThan(Merger.java:373)
at org.apache.hadoop.util.PriorityQueue.upHeap(PriorityQueue.java:123)
at org.apache.hadoop.util.PriorityQueue.put(PriorityQueue.java:50)
at org.apache.hadoop.mapred.Merger$MergeQueue.merge(Merger.java:447)
at org.apache.hadoop.mapred.Merger$MergeQueue.merge(Merger.java:381)
at org.apache.hadoop.mapred.Merger.merge(Merger.java:107)
at org.apache.hadoop.mapred.Merger.merge(Merger.java:93)
at org.apache.hadoop.mapred.ReduceTask$ReduceCopier$InMemFSMergeThread.doInMemMerge(ReduceTask.java:2689)
... 1 more
Caused by: java.io.EOFException
at java.io.DataInputStream.readFully(DataInputStream.java:180)
at org.apache.hadoop.io.Text.readString(Text.java:402)
at com.__.hadoop.pixel.segments.IpCookieCountFilter$IpAndIpCookieCount.readFields(IpCookieCountFilter.java:240)
at org.apache.hadoop.io.WritableComparator.compare(WritableComparator.java:122)
... 9 more

Not all of my reducers fail. Several often succeed before I see failures from others. As you can see, the stack traces always seem to originate from IpAndIpCookieCount.readFields() and always during the in-memory merge stage, but not always from the same part of readFields.

This job succeeds when run over smaller datasets (about 1/30th the size). There is nearly as much output as input to the job, but each output record is shorter. The job is essentially an implementation of a secondary sort.

We are using the CDH3 distribution of Hadoop.

Here is my custom WritableComparable implementation:

public static class IpAndIpCookieCount implements WritableComparable<IpAndIpCookieCount> {

    private String ip;
    private int ipCookieCount;

    public IpAndIpCookieCount() {
        // empty constructor for Hadoop
    }

    public IpAndIpCookieCount(String ip, int ipCookieCount) {
        this.ip = ip;
        this.ipCookieCount = ipCookieCount;
    }

    public String getIp() {
        return ip;
    }

    public int getIpCookieCount() {
        return ipCookieCount;
    }

    @Override
    public void readFields(DataInput in) throws IOException {
        ip = Text.readString(in);     // IpCookieCountFilter.java:240 in the second trace above
        ipCookieCount = in.readInt(); // IpCookieCountFilter.java:241 in the first trace above
    }

    @Override
    public void write(DataOutput out) throws IOException {
        Text.writeString(out, ip);
        out.writeInt(ipCookieCount);
    }

    @Override
    public int compareTo(IpAndIpCookieCount other) {
        // Sort ascending by ip, then descending by ipCookieCount.
        int firstComparison = ip.compareTo(other.getIp());
        if (firstComparison == 0) {
            int otherIpCookieCount = other.getIpCookieCount();
            if (ipCookieCount == otherIpCookieCount) {
                return 0;
            } else {
                return ipCookieCount < otherIpCookieCount ? 1 : -1;
            }
        } else {
            return firstComparison;
        }
    }

    @Override
    public boolean equals(Object o) {
        if (o instanceof IpAndIpCookieCount) {
            IpAndIpCookieCount other = (IpAndIpCookieCount) o;
            return ip.equals(other.getIp()) && ipCookieCount == other.getIpCookieCount();
        } else {
            return false;
        }
    }

    @Override
    public int hashCode() {
        return ip.hashCode() ^ ipCookieCount;
    }

}
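Since the job is a secondary sort, this key is paired with a grouping comparator along the following lines (a simplified sketch of the usual old-API wiring, not the exact code from the job):

// Groups records by ip only, ignoring ipCookieCount, so the descending count
// order from compareTo() becomes the order seen inside each reduce() call.
public static class IpGroupingComparator extends WritableComparator {

    protected IpGroupingComparator() {
        super(IpAndIpCookieCount.class, true); // true => instantiate keys for compare()
    }

    @Override
    public int compare(WritableComparable a, WritableComparable b) {
        IpAndIpCookieCount left = (IpAndIpCookieCount) a;
        IpAndIpCookieCount right = (IpAndIpCookieCount) b;
        return left.getIp().compareTo(right.getIp());
    }
}

// Registered on the old mapred API with:
//   jobConf.setOutputValueGroupingComparator(IpGroupingComparator.class);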

The readFields method is very simple, and I can't see anything wrong with this class. Additionally, I have seen other people getting essentially the same stack trace:

No one seems to have actually figured out the problem behind this. The last two suggest that it could be a memory issue (though those stack traces are not OutOfMemoryExceptions). Like the second-to-last post in that list of links, I have tried setting the number of reducers higher (up to 999), but the job still fails. I have not (yet) tried allocating more memory to the reduce tasks, since that would require reconfiguring our cluster.
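For reference, these are the two knobs in question on the old mapred API (org.apache.hadoop.mapred.JobConf); the driver class name and heap size below are example placeholders only:

JobConf conf = new JobConf(MyJobDriver.class); // MyJobDriver is a placeholder

// What I already tried: more reducers, so each reduce task shuffles less data.
conf.setNumReduceTasks(999); // equivalent to -D mapred.reduce.tasks=999

// What I have not tried yet: a bigger heap for each task's child JVM.
conf.set("mapred.child.java.opts", "-Xmx1024m");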

Is this a bug in Hadoop? Or am I doing something wrong?

EDIT: My data is partitioned by day. If I run the job 7 times, once per day, all 7 complete. If I run one job over all 7 days, it fails. The big report over all 7 days would see exactly the same keys as the smaller ones do (in aggregate), just obviously not in the same order or on the same reducers.

Best Answer

I think this is an artifact of Cloudera's back-porting of MAPREDUCE-947 into CDH3. That patch generates a _SUCCESS file for successful jobs:

Also a _SUCCESS file is created in the output folder for successful jobs. A configuration parameter mapreduce.fileoutputcommitter.marksuccessfuljobs can be set to false to disable creation of _SUCCESS file, or to true to enable creation of the _SUCCESS file.

Looking at your error:

Caused by: java.io.EOFException
at java.io.DataInputStream.readFully(DataInputStream.java:180)

and comparing it with errors I have seen previously for this issue:

Exception in thread "main" java.io.EOFException
at java.io.DataInputStream.readFully(DataInputStream.java:180)
at java.io.DataInputStream.readFully(DataInputStream.java:152)
at org.apache.hadoop.io.SequenceFile$Reader.init(SequenceFile.java:1465)
at org.apache.hadoop.io.SequenceFile$Reader.<init>(SequenceFile.java:1437)
at org.apache.hadoop.io.SequenceFile$Reader.<init>(SequenceFile.java:1424)
at org.apache.hadoop.io.SequenceFile$Reader.<init>(SequenceFile.java:1419)
at org.apache.hadoop.mapred.SequenceFileOutputFormat.getReaders(SequenceFileOutputFormat.java:89)
at org.apache.nutch.crawl.CrawlDbReader.processStatJob(CrawlDbReader.java:323)
at org.apache.nutch.crawl.CrawlDbReader.main(CrawlDbReader.java:511)

and this one from the Mahout mailing list:

Exception in thread "main" java.io.EOFException
at java.io.DataInputStream.readFully(DataInputStream.java:180)
at java.io.DataInputStream.readFully(DataInputStream.java:152)
at org.apache.hadoop.io.SequenceFile$Reader.init(SequenceFile.java:1457)
at org.apache.hadoop.io.SequenceFile$Reader.<init>(SequenceFile.java:1435)
at org.apache.hadoop.io.SequenceFile$Reader.<init>(SequenceFile.java:1424)
at org.apache.hadoop.io.SequenceFile$Reader.<init>(SequenceFile.java:1419)
at org.apache.mahout.df.mapreduce.partial.Step0Job.parseOutput(Step0Job.java:145)
at org.apache.mahout.df.mapreduce.partial.Step0Job.run(Step0Job.java:119)
at org.apache.mahout.df.mapreduce.partial.PartialBuilder.parseOutput(PartialBuilder.java:115)
at org.apache.mahout.df.mapreduce.Builder.build(Builder.java:338)
at org.apache.mahout.df.mapreduce.BuildForest.buildForest(BuildForest.java:195)

It seems that DataInputStream.readFully is being choked by this file (the _SUCCESS marker, which is not a SequenceFile).
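Another way around it, if you control the code that reads the output directory back, is to filter out marker files before opening SequenceFiles; something like this sketch (the helper name is mine):

import java.io.IOException;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.PathFilter;

// Hypothetical helper: list only real part files, skipping _SUCCESS, _logs and hidden files.
public static FileStatus[] listPartFiles(FileSystem fs, Path outputDir) throws IOException {
    return fs.listStatus(outputDir, new PathFilter() {
        @Override
        public boolean accept(Path path) {
            String name = path.getName();
            return !name.startsWith("_") && !name.startsWith(".");
        }
    });
}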

I'd suggest setting mapreduce.fileoutputcommitter.marksuccessfuljobs to false and giving your job another try; it should work.
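You can set it in the driver or on the command line; a minimal sketch (the driver class and jar names are placeholders):

// In the driver (org.apache.hadoop.mapred.JobConf), before submitting the job:
JobConf conf = new JobConf(MyDriver.class); // MyDriver is a placeholder
conf.setBoolean("mapreduce.fileoutputcommitter.marksuccessfuljobs", false);

// Or on the command line, if the driver implements Tool and runs via ToolRunner:
//   hadoop jar myjob.jar MyDriver -D mapreduce.fileoutputcommitter.marksuccessfuljobs=false <args>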

Regarding Hadoop: Intermediate merge failed, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/5583303/
