
java - Hadoop MapReduce WordCount random error: Exceeded MAX_FAILED_UNIQUE_FETCHES; bailing-out


I installed and configured Hadoop as a single-node cluster following the tutorial at the site below.

http://www.michael-noll.com/tutorials/running-hadoop-on-ubuntu-linux-single-node-cluster/#running-a-mapreduce-job
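
For reference, that tutorial's single-node setup uses a Hadoop 1.x layout with fs.default.name set to hdfs://localhost:54310, the same URI that appears in the job output below. A quick sanity-check sketch, assuming that default layout:

# All five Hadoop 1.x daemons should be running on a single-node install:
jps
# expected: NameNode, DataNode, SecondaryNameNode, JobTracker, TaskTracker

# The input directory should be readable over the tutorial's HDFS URI:
hadoop fs -ls hdfs://localhost:54310/user/hduser/input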

I compiled the WordCount example and ran it, but it takes a very long time and eventually fails with Shuffle Error: Exceeded MAX_FAILED_UNIQUE_FETCHES; bailing-out.

hduser@aptunix0043:/usr/local/hadoop/src$ hadoop jar WordCount.jar org/apache/hadoop/examples/WordCount input  ot

hdfs://localhost:54310/user/hduser/input
12/07/03 02:52:35 INFO input.FileInputFormat: Total input paths to process : 1
12/07/03 02:52:36 INFO mapred.JobClient: Running job: job_201207030248_0002
12/07/03 02:52:37 INFO mapred.JobClient: map 0% reduce 0%
12/07/03 02:52:52 INFO mapred.JobClient: map 100% reduce 0%
12/07/03 03:21:26 INFO mapred.JobClient: Task Id :attempt_201207030248_0002_r_000000_0, Status : FAILED
Shuffle Error: Exceeded MAX_FAILED_UNIQUE_FETCHES; bailing-out.

12/07/03 03:21:47 WARN mapred.JobClient: Error reading task outputConnection timed out
12/07/03 03:22:08 WARN mapred.JobClient: Error reading task outputConnection timed out
12/07/03 03:50:01 INFO mapred.JobClient: Task Id : attempt_201207030248_0002_r_000000_1, Status : FAILED
Shuffle Error: Exceeded MAX_FAILED_UNIQUE_FETCHES; bailing-out.
12/07/03 03:50:22 WARN mapred.JobClient: Error reading task outputConnection timed out
12/07/03 03:50:43 WARN mapred.JobClient: Error reading task outputConnection timed out
12/07/03 04:18:35 INFO mapred.JobClient: Task Id : attempt_201207030248_0002_r_000000_2, Status : FAILED
Shuffle Error: Exceeded MAX_FAILED_UNIQUE_FETCHES; bailing-out.
12/07/03 04:18:56 WARN mapred.JobClient: Error reading task outputConnection timed out
12/07/03 04:19:17 WARN mapred.JobClient: Error reading task outputConnection timed out
12/07/03 04:47:15 INFO mapred.JobClient: Job complete: job_201207030248_0002
12/07/03 04:47:15 INFO mapred.JobClient: Counters: 23
12/07/03 04:47:15 INFO mapred.JobClient: Job Counters
12/07/03 04:47:15 INFO mapred.JobClient: Launched reduce tasks=4
12/07/03 04:47:15 INFO mapred.JobClient: SLOTS_MILLIS_MAPS=12948
12/07/03 04:47:15 INFO mapred.JobClient: Total time spent by all reduces waiting after reserving slots (ms)=0
12/07/03 04:47:15 INFO mapred.JobClient: Total time spent by all maps waiting after reserving slots (ms)=0
12/07/03 04:47:15 INFO mapred.JobClient: Launched map tasks=1
12/07/03 04:47:15 INFO mapred.JobClient: Data-local map tasks=1
12/07/03 04:47:15 INFO mapred.JobClient: Failed reduce tasks=1
12/07/03 04:47:15 INFO mapred.JobClient: SLOTS_MILLIS_REDUCES=16469
12/07/03 04:47:15 INFO mapred.JobClient: FileSystemCounters
12/07/03 04:47:15 INFO mapred.JobClient: HDFS_BYTES_READ=661744
12/07/03 04:47:15 INFO mapred.JobClient: FILE_BYTES_WRITTEN=288616
12/07/03 04:47:15 INFO mapred.JobClient: File Input Format Counters
12/07/03 04:47:15 INFO mapred.JobClient: Bytes Read=661630
12/07/03 04:47:15 INFO mapred.JobClient: Map-Reduce Framework
12/07/03 04:47:15 INFO mapred.JobClient: Map output materialized bytes=267085
12/07/03 04:47:15 INFO mapred.JobClient: Combine output records=18040
12/07/03 04:47:15 INFO mapred.JobClient: Map input records=12761
12/07/03 04:47:15 INFO mapred.JobClient: Physical memory (bytes) snapshot=183209984
12/07/03 04:47:15 INFO mapred.JobClient: Spilled Records=18040
12/07/03 04:47:15 INFO mapred.JobClient: Map output bytes=1086716
12/07/03 04:47:15 INFO mapred.JobClient: CPU time spent (ms)=1940
12/07/03 04:47:15 INFO mapred.JobClient: Total committed heap usage (bytes)=162856960
12/07/03 04:47:15 INFO mapred.JobClient: Virtual memory (bytes) snapshot=393482240
12/07/03 04:47:15 INFO mapred.JobClient: Combine input records=109844
12/07/03 04:47:15 INFO mapred.JobClient: Map output records=109844
12/07/03 04:47:15 INFO mapred.JobClient: SPLIT_RAW_BYTES=114

Any clues?

Best Answer

For the benefit of anyone who, like me, searched the internet and ended up on this page: there are two problems you can hit here (quick check sketches for both follow the list).

  1. DNS resolution - make sure you use a fully qualified domain name for every host when installing Hadoop (see the first sketch below)

  2. Firewall - a firewall may be blocking ports 50060, 50030, and more, depending on your Hadoop distribution (7182 and 7180 for Cloudera) (see the second sketch below)
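
For the DNS point, a minimal check sketch, assuming an Ubuntu-style /etc/hosts like the one the single-node tutorial produces (the IP and domain in the example mapping are hypothetical placeholders):

# The hostname should resolve to a usable address, not the Ubuntu
# loopback alias 127.0.1.1, which is a common cause of failed fetches:
hostname -f
getent hosts $(hostname -f)

# If /etc/hosts maps the hostname to 127.0.1.1, point it at the real
# interface instead, e.g.:
#   127.0.0.1     localhost
#   192.168.1.10  aptunix0043.example.com  aptunix0043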
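
And for the firewall point, a sketch of checking and opening the relevant ports (ufw is shown as an assumption; substitute your distribution's firewall tool, and for Cloudera-managed clusters also open 7180 and 7182):

# Confirm the JobTracker (50030) and TaskTracker (50060) ports are
# listening, and that they are reachable from other processes:
netstat -tlnp | grep -E ':(50030|50060)'
telnet aptunix0043 50060

# Open the ports if the firewall is blocking them (ufw example):
sudo ufw allow 50030/tcp
sudo ufw allow 50060/tcp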

Regarding java - Hadoop MapReduce WordCount random error: Exceeded MAX_FAILED_UNIQUE_FETCHES; bailing-out, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/11308845/
