Hadoop 2.7.0 - MapReduce job not running - fails with AM container error

I am running Hadoop 2.7.0 in pseudo-distributed mode on a Fedora 22 virtual machine. MapReduce jobs were running fine until a few days ago, when I installed Oozie and made some changes to yarn-site.xml. Since then, the Pi example job fails with the error below, and I have not been able to debug it.

EDITED - I am running the job from the command line, not through the Oozie workflow engine. Command - hadoop jar 10 100
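For reference, a typical invocation of the Pi example on Hadoop 2.7.0 looks roughly like the following; the jar path below is an assumption based on a default tarball install, since the jar argument in the command above appears to have been lost:

# Pi estimator from the bundled examples jar; path assumed for a stock 2.7.0 install
hadoop jar $HADOOP_HOME/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.0.jar pi 10 100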

Starting Job
15/12/17 15:22:05 INFO client.RMProxy: Connecting to ResourceManager at /192.168.122.1:8032
15/12/17 15:22:06 INFO input.FileInputFormat: Total input paths to process : 10
15/12/17 15:22:06 INFO mapreduce.JobSubmitter: number of splits:10
15/12/17 15:22:06 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1450326099697_0001
15/12/17 15:22:07 INFO impl.YarnClientImpl: Submitted application application_1450326099697_0001
15/12/17 15:22:07 INFO mapreduce.Job: The url to track the job: http://hadoop:8088/proxy/application_1450326099697_0001/
15/12/17 15:22:07 INFO mapreduce.Job: Running job: job_1450326099697_0001
15/12/17 15:22:17 INFO mapreduce.Job: Job job_1450326099697_0001 running in uber mode : false
15/12/17 15:22:17 INFO mapreduce.Job: map 0% reduce 0%
15/12/17 15:22:24 INFO mapreduce.Job: map 10% reduce 0%
15/12/17 15:22:30 INFO mapreduce.Job: map 20% reduce 0%
15/12/17 15:22:36 INFO mapreduce.Job: map 30% reduce 0%
15/12/17 15:22:42 INFO mapreduce.Job: map 40% reduce 0%
15/12/17 15:22:46 INFO mapreduce.Job: map 50% reduce 0%
15/12/17 15:22:51 INFO mapreduce.Job: map 60% reduce 0%
15/12/17 15:22:56 INFO mapreduce.Job: map 70% reduce 0%
15/12/17 15:23:01 INFO mapreduce.Job: map 80% reduce 0%
15/12/17 15:23:07 INFO mapreduce.Job: map 90% reduce 0%
15/12/17 15:23:13 INFO mapreduce.Job: map 100% reduce 0%
15/12/17 15:23:18 INFO mapreduce.Job: map 100% reduce 100%
15/12/17 15:23:23 INFO ipc.Client: Retrying connect to server: vlan722-rsvd-router.ddr.priv/192.168.122.1:34460. Already tried 0 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=3, sleepTime=1000 MILLISECONDS)
15/12/17 15:23:24 INFO ipc.Client: Retrying connect to server: vlan722-rsvd-router.ddr.priv/192.168.122.1:34460. Already tried 1 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=3, sleepTime=1000 MILLISECONDS)
15/12/17 15:23:25 INFO ipc.Client: Retrying connect to server: vlan722-rsvd-router.ddr.priv/192.168.122.1:34460. Already tried 2 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=3, sleepTime=1000 MILLISECONDS)
15/12/17 15:23:28 INFO mapreduce.Job: map 0% reduce 0%
15/12/17 15:23:28 INFO mapreduce.Job: Job job_1450326099697_0001 failed with state FAILED due to: Application application_1450326099697_0001 failed 2 times due to AM Container for appattempt_1450326099697_0001_000002 exited with exitCode: 1
For more detailed output, check application tracking page:http://hadoop:8088/cluster/app/application_1450326099697_0001Then, click on links to logs of each attempt.
Diagnostics: Exception from container-launch.
Container id: container_1450326099697_0001_02_000001
Exit code: 1
Stack trace: ExitCodeException exitCode=1:
at org.apache.hadoop.util.Shell.runCommand(Shell.java:545)
at org.apache.hadoop.util.Shell.run(Shell.java:456)
at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:722)
at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:211)
at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:302)
at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:82)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)

Container exited with a non-zero exit code 1
Failing this attempt. Failing the application.
15/12/17 15:23:28 INFO mapreduce.Job: Counters: 0
Job Finished in 82.924 seconds
Estimated value of Pi is 3.14800000000000000000
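The diagnostics above only say that the AM container exited with code 1; the real error is in the container's own logs. Assuming YARN log aggregation is enabled (yarn.log-aggregation-enable set to true), the logs of the failed attempt can usually be pulled with the yarn CLI; otherwise they sit under the NodeManager's local userlogs directory:

# Fetch the aggregated container logs for the failed application (requires log aggregation)
yarn logs -applicationId application_1450326099697_0001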

yarn-site.xml

<property>
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce_shuffle</value>
</property>

<property>
<name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
<value>org.apache.hadoop.mapred.ShuffleHandler</value>
</property>

<property>
<name>yarn.application.classpath</name>
<value>
/home/osboxes/hadoop/etc/hadoop,
/home/osboxes/hadoop/share/hadoop/common/*,
/home/osboxes/hadoop/share/hadoop/common/lib/*,
/home/osboxes/hadoop/share/hadoop/hdfs/*,
/home/osboxes/hadoop/share/hadoop/hdfs/lib/*,
/home/osboxes/hadoop/share/hadoop/yarn/*,
/home/osboxes/hadoop/share/hadoop/yarn/lib/*,
/home/osboxes/hadoop/share/hadoop/mapreduce/*,
/home/osboxes/hadoop/share/hadoop/mapreduce/lib/*
</value>
</property>

<property>
<name>yarn.nodemanager.resource.memory-mb</name>
<value>5120</value>
</property>

<property>
<name>yarn.scheduler.minimum-allocation-mb</name>
<value>1024</value>
</property>

<property>
<name>yarn.resourcemanager.address</name>
<value>http://192.168.122.1:8032</value>
</property>

<property>
<name>yarn.resourcemanager.scheduler.address</name>
<value>http://192.168.122.1:8030</value>
</property>

<property>
<name>yarn.resourcemanager.resource-tracker.address</name>
<value>http://192.168.122.1:8031</value>
</property>

<property>
<name>yarn.nodemanager.address</name>
<value>http://192.168.122.1:8041</value>
</property>

Any help would be greatly appreciated.

EDIT - yarn-site.xml before the changes:

<property>
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce_shuffle</value>
</property>

<property>
<name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
<value>org.apache.hadoop.mapred.ShuffleHandler</value>
</property>

Best answer

In the end I resolved the issue by making the following change to mapred-site.xml:

<property>
<name>mapreduce.jobhistory.webapp.address</name>
<value>http://localhost:19888</value>
</property>

After that, the job ran perfectly fine.
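A likely reading of the log above: once the maps and reduces reach 100%, the client tries to reconnect to the finished MR ApplicationMaster (the retries against 192.168.122.1:34460) and, failing that, falls back to the JobHistory Server to fetch the final job status. That fallback only works if the history server address is configured and the JobHistory Server is actually running. A minimal sketch of checking this, assuming a default sbin layout and that the daemon was not already started:

# Start the MapReduce JobHistory Server so clients can retrieve completed-job status
$HADOOP_HOME/sbin/mr-jobhistory-daemon.sh start historyserver
jps   # should now list a JobHistoryServer process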

The original question, "Hadoop 2.7.0 - MapReduce job not running - fails with AM container error", can be found on Stack Overflow: https://stackoverflow.com/questions/34326686/
