
Fetcher: Exceeded MAX_FAILED_UNIQUE_FETCHES Hadoop error in shuffle

I am new to Hadoop. I set up a Hadoop cluster (one master and one slave) on virtual machines with Kerberos security enabled. I am trying to run the "pi" job from the Hadoop examples. The job terminates with the error Exceeded MAX_FAILED_UNIQUE_FETCHES. I searched for this error, but none of the solutions available online seem to work for me; maybe I am missing something obvious. I even tried removing the slave from the etc/hadoop/slaves file to see whether the job could run on the master alone, but it fails with the same error. The log is below. I am running this on a 64-bit Ubuntu 14.04 virtual machine. Any help is appreciated.

montauk@montauk-vmaster:/usr/local/hadoop$ sudo -u yarn bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.4.0.jar pi 2 10
Number of Maps = 2
Samples per Map = 10
OpenJDK 64-Bit Server VM warning: You have loaded library /usr/local/hadoop/lib/native/libhadoop.so.1.0.0 which might have disabled stack guard. The VM will try to fix the stack guard now.
It's highly recommended that you fix the library with 'execstack -c <libfile>', or link it with '-z noexecstack'.
14/06/05 12:04:43 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Wrote input for Map #0
Wrote input for Map #1
Starting Job
14/06/05 12:04:49 INFO client.RMProxy: Connecting to ResourceManager at /192.168.0.29:8040
14/06/05 12:04:50 INFO hdfs.DFSClient: Created HDFS_DELEGATION_TOKEN token 17 for yarn on 192.168.0.29:54310
14/06/05 12:04:50 INFO security.TokenCache: Got dt for hdfs://192.168.0.29:54310; Kind: HDFS_DELEGATION_TOKEN, Service: 192.168.0.29:54310, Ident: (HDFS_DELEGATION_TOKEN token 17 for yarn)
14/06/05 12:04:50 INFO input.FileInputFormat: Total input paths to process : 2
14/06/05 12:04:51 INFO mapreduce.JobSubmitter: number of splits:2
14/06/05 12:04:51 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1401975262053_0007
14/06/05 12:04:51 INFO mapreduce.JobSubmitter: Kind: HDFS_DELEGATION_TOKEN, Service: 192.168.0.29:54310, Ident: (HDFS_DELEGATION_TOKEN token 17 for yarn)
14/06/05 12:04:53 INFO impl.YarnClientImpl: Submitted application application_1401975262053_0007
14/06/05 12:04:53 INFO mapreduce.Job: The url to track the job: http://montauk-vmaster:8088/proxy/application_1401975262053_0007/
14/06/05 12:04:53 INFO mapreduce.Job: Running job: job_1401975262053_0007
14/06/05 12:05:29 INFO mapreduce.Job: Job job_1401975262053_0007 running in uber mode : false
14/06/05 12:05:29 INFO mapreduce.Job: map 0% reduce 0%
14/06/05 12:06:04 INFO mapreduce.Job: map 50% reduce 0%
14/06/05 12:06:06 INFO mapreduce.Job: map 100% reduce 0%
14/06/05 12:06:34 INFO mapreduce.Job: map 100% reduce 100%
14/06/05 12:06:34 INFO mapreduce.Job: Task Id : attempt_1401975262053_0007_r_000000_0, Status : FAILED
Error: org.apache.hadoop.mapreduce.task.reduce.Shuffle$ShuffleError: error in shuffle in fetcher#4
    at org.apache.hadoop.mapreduce.task.reduce.Shuffle.run(Shuffle.java:134)
    at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:376)
    at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:167)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:415)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1548)
    at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:162)
Caused by: java.io.IOException: Exceeded MAX_FAILED_UNIQUE_FETCHES; bailing-out.
    at org.apache.hadoop.mapreduce.task.reduce.ShuffleSchedulerImpl.checkReducerHealth(ShuffleSchedulerImpl.java:323)
    at org.apache.hadoop.mapreduce.task.reduce.ShuffleSchedulerImpl.copyFailed(ShuffleSchedulerImpl.java:245)
    at org.apache.hadoop.mapreduce.task.reduce.Fetcher.copyFromHost(Fetcher.java:347)
    at org.apache.hadoop.mapreduce.task.reduce.Fetcher.run(Fetcher.java:165)

Best Answer

I ran into the same problem as you when I installed CDH 5.1.0 from a tarball with Kerberos security enabled. The solutions Google turned up said it was caused by insufficient memory, but I don't think that was my case, since my input was very small (52K).

After several days of digging, I found the root cause in this link.

The solution from that link can be summarized as:

  1. Add the following property to yarn-site.xml, even though it is already the default in yarn-default.xml (a sketch combining it with its companion aux-services property follows this list):


    <property>
      <name>yarn.nodemanager.aux-services.mapreduce_shuffle.class</name>
      <value>org.apache.hadoop.mapred.ShuffleHandler</value>
    </property>

  2. Remove the yarn.nodemanager.local-dirs property so that the default location under /tmp is used, then run the following commands (see also the restart note after this list):


    mkdir -p /tmp/hadoop-yarn/nm-local-dir
    chown yarn:yarn /tmp/hadoop-yarn/nm-local-dir
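
For context, the shuffle handler class property from step 1 only takes effect when the shuffle auxiliary service itself is enabled. A minimal yarn-site.xml sketch showing both standard Hadoop 2.x properties together (for illustration only; your file may already contain the first one):

    <property>
      <name>yarn.nodemanager.aux-services</name>
      <value>mapreduce_shuffle</value>
    </property>
    <property>
      <name>yarn.nodemanager.aux-services.mapreduce_shuffle.class</name>
      <value>org.apache.hadoop.mapred.ShuffleHandler</value>
    </property>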
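
Also note that NodeManager configuration changes only take effect after a restart. A hedged sketch using the standard Hadoop 2.x daemon scripts, assuming the usual sbin layout and that it is run on every node:

    sbin/yarn-daemon.sh stop nodemanager
    sbin/yarn-daemon.sh start nodemanager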

The problem can be summarized as:

After the yarn.nodemanager.local-dirs property is set, the yarn.nodemanager.aux-services.mapreduce_shuffle.class default from yarn-default.xml no longer takes effect.
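
For reference, the property that step 2 removes would look like the sketch below; the path is a hypothetical example, not taken from the original post:

    <!-- hypothetical example of the setting removed from yarn-site.xml -->
    <property>
      <name>yarn.nodemanager.local-dirs</name>
      <value>/data/yarn/nm-local-dir</value>
    </property>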

I have not yet found the underlying reason for this behavior.

Regarding the Fetcher: Exceeded MAX_FAILED_UNIQUE_FETCHES Hadoop error in shuffle, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/24066128/
