
hadoop - Same key in different reducers in Hadoop


I am seeing something very strange: I get the same key in different reducers. I simply printed the keys and collected the values. My reducer code looks like this:

public void reduce(Text key, Iterator<Text> values, OutputCollector<Text, Text> output, Reporter reporter) throws IOException {
    System.out.println("The key is " + key.toString());
    while (values.hasNext()) {
        Text value = values.next();
        key.set("");
        output.collect(key, value);
    }
}

The console output is:
  The key is 111-00-1234195967001
The key is 1234529857009
The key is 1234529857009
14/01/06 20:11:16 INFO mapred.Task: Task:attempt_local_0001_r_000000_0 is done. And is in the process of commiting
14/01/06 20:11:16 INFO mapred.LocalJobRunner:
14/01/06 20:11:16 INFO mapred.Task: Task attempt_local_0001_r_000000_0 is allowed to commit now
14/01/06 20:11:16 INFO mapred.FileOutputCommitter: Saved output of task 'attempt_local_0001_r_000000_0' to hdfs://localhost:54310/user/hduser/joboutput11
14/01/06 20:11:18 INFO mapred.LocalJobRunner: reduce > reduce
14/01/06 20:11:18 INFO mapred.Task: Task 'attempt_local_0001_r_000000_0' done.
14/01/06 20:11:19 INFO mapred.JobClient: map 100% reduce 100%
14/01/06 20:11:19 INFO mapred.JobClient: Job complete: job_local_0001
14/01/06 20:11:19 INFO mapred.JobClient: Counters: 23
14/01/06 20:11:19 INFO mapred.JobClient: File Input Format Counters
14/01/06 20:11:19 INFO mapred.JobClient: Bytes Read=289074
14/01/06 20:11:19 INFO mapred.JobClient: File Output Format Counters
14/01/06 20:11:19 INFO mapred.JobClient: Bytes Written=5707
14/01/06 20:11:19 INFO mapred.JobClient: FileSystemCounters
14/01/06 20:11:19 INFO mapred.JobClient: FILE_BYTES_READ=19185
14/01/06 20:11:19 INFO mapred.JobClient: HDFS_BYTES_READ=1254215
14/01/06 20:11:19 INFO mapred.JobClient: FILE_BYTES_WRITTEN=270933
14/01/06 20:11:19 INFO mapred.JobClient: HDFS_BYTES_WRITTEN=5707
14/01/06 20:11:19 INFO mapred.JobClient: Map-Reduce Framework
14/01/06 20:11:19 INFO mapred.JobClient: Map output materialized bytes=5633
14/01/06 20:11:19 INFO mapred.JobClient: Map input records=5
14/01/06 20:11:19 INFO mapred.JobClient: Reduce shuffle bytes=0
14/01/06 20:11:19 INFO mapred.JobClient: Spilled Records=10
14/01/06 20:11:19 INFO mapred.JobClient: Map output bytes=5583
14/01/06 20:11:19 INFO mapred.JobClient: Total committed heap usage (bytes)=991539200
14/01/06 20:11:19 INFO mapred.JobClient: CPU time spent (ms)=0
14/01/06 20:11:19 INFO mapred.JobClient: Map input bytes=289074
14/01/06 20:11:19 INFO mapred.JobClient: SPLIT_RAW_BYTES=627
14/01/06 20:11:19 INFO mapred.JobClient: Combine input records=0
14/01/06 20:11:19 INFO mapred.JobClient: Reduce input records=5
14/01/06 20:11:19 INFO mapred.JobClient: Reduce input groups=3
14/01/06 20:11:19 INFO mapred.JobClient: Combine output records=0
14/01/06 20:11:19 INFO mapred.JobClient: Physical memory (bytes) snapshot=0
14/01/06 20:11:19 INFO mapred.JobClient: Reduce output records=7
14/01/06 20:11:19 INFO mapred.JobClient: Virtual memory (bytes) snapshot=0
14/01/06 20:11:19 INFO mapred.JobClient: Map output records=5

The key 1234529857009 appears twice, which is unexpected. Any idea why this is happening?

Thanks

Best answer

Because of Hadoop's speculative execution, there is no guarantee that each key is handed to a reducer only once while the job is running. What matters is the committed output, not the in-flight state. So if you swap in an identity reducer and still see duplicate rows in the final output, then you have something to worry about. Otherwise it is probably working as intended: Hadoop may launch several attempts of the same reduce task but only keeps the output of one of them (usually the first to finish).
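If you want to try that check, a minimal sketch of an identity reducer for the old mapred API might look like the following (the class name IdentityCheckReducer is made up for illustration; the old API also ships org.apache.hadoop.mapred.lib.IdentityReducer, which you can set directly with JobConf.setReducerClass):

import java.io.IOException;
import java.util.Iterator;

import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.MapReduceBase;
import org.apache.hadoop.mapred.OutputCollector;
import org.apache.hadoop.mapred.Reducer;
import org.apache.hadoop.mapred.Reporter;

// Hypothetical identity reducer: emits every value with its original key,
// so the committed output can be inspected for genuinely duplicated keys.
public class IdentityCheckReducer extends MapReduceBase
        implements Reducer<Text, Text, Text, Text> {

    @Override
    public void reduce(Text key, Iterator<Text> values,
                       OutputCollector<Text, Text> output, Reporter reporter)
            throws IOException {
        while (values.hasNext()) {
            output.collect(key, values.next());
        }
    }
}

If you also want to rule out speculative execution while testing, it can be switched off on the old-API JobConf with conf.setBoolean("mapred.reduce.tasks.speculative.execution", false) before submitting the job.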

Regarding hadoop - same key in different reducers in Hadoop, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/20932285/
