
hadoop - Yarn MapReduce approximate-pi example fails with exit code 1 when run as a non-hadoop user

I'm running a small private cluster of Linux machines with Hadoop 2.6.2 and YARN. I launch YARN jobs from a Linux edge node. The canned YARN example that approximates the value of pi works perfectly when run by the hadoop user (the superuser and owner of the cluster), but fails when run from my personal account on the edge node. In both cases (hadoop, me) I run the job exactly like this:

clott@edge: /home/hadoop/hadoop-2.6.2/bin/yarn jar /home/hadoop/hadoop-2.6.2/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.6.2.jar pi 2 5

It fails; the full output is below. I think the file-not-found exception is completely bogus. I think something caused the launch of the container to fail, so no output was found. What can cause container launches to fail, and how can this be debugged?

Because this exact same command works fine when run by the hadoop user but not when run by a different account on the same edge node, I suspect a permission or other YARN configuration problem; I don't suspect a missing-jar problem. My personal account uses the same environment variables as the hadoop account, for what it's worth.

These questions are similar, but I didn't find a solution in them:

https://issues.cloudera.org/browse/DISTRO-577

Running a map reduce job as a different user

Yarn MapReduce Job Issue - AM Container launch error in Hadoop 2.3.0

I have tried these remedies, without any success:

  1. In core-site.xml, set the value of hadoop.tmp.dir to /tmp/temp-${user.name} (a sketch of the property appears after this list)

  2. Add my personal user account to every node in the cluster

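For reference, remedy 1 amounted to adding a property like this to core-site.xml (a sketch; ${user.name} is expanded by Hadoop from the Java system property of the submitting process, so each user gets a separate temp directory):

    <property>
        <name>hadoop.tmp.dir</name>
        <value>/tmp/temp-${user.name}</value>
    </property>
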
I'd guess that many installations are run by a single user, but I'm trying to let two people work together on the cluster without clobbering each other's intermediate results. Am I totally crazy?

Full output:

Number of Maps  = 2
Samples per Map = 5
Wrote input for Map #0
Wrote input for Map #1
Starting Job
15/12/22 15:29:18 INFO client.RMProxy: Connecting to ResourceManager at ac1.mycompany.com/1.2.3.4:8032
15/12/22 15:29:18 INFO input.FileInputFormat: Total input paths to process : 2
15/12/22 15:29:19 INFO mapreduce.JobSubmitter: number of splits:2
15/12/22 15:29:19 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1450815437271_0002
15/12/22 15:29:19 INFO impl.YarnClientImpl: Submitted application application_1450815437271_0002
15/12/22 15:29:19 INFO mapreduce.Job: The url to track the job: http://ac1.mycompany.com:8088/proxy/application_1450815437271_0002/
15/12/22 15:29:19 INFO mapreduce.Job: Running job: job_1450815437271_0002
15/12/22 15:29:31 INFO mapreduce.Job: Job job_1450815437271_0002 running in uber mode : false
15/12/22 15:29:31 INFO mapreduce.Job: map 0% reduce 0%
15/12/22 15:29:31 INFO mapreduce.Job: Job job_1450815437271_0002 failed with state FAILED due to: Application application_1450815437271_0002 failed 2 times due to AM Container for appattempt_1450815437271_0002_000002 exited with exitCode: 1
For more detailed output, check application tracking page:http://ac1.mycompany.com:8088/proxy/application_1450815437271_0002/Then, click on links to logs of each attempt.
Diagnostics: Exception from container-launch.
Container id: container_1450815437271_0002_02_000001
Exit code: 1
Stack trace: ExitCodeException exitCode=1:
at org.apache.hadoop.util.Shell.runCommand(Shell.java:538)
at org.apache.hadoop.util.Shell.run(Shell.java:455)
at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:715)
at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:211)
at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:302)
at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:82)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)

Container exited with a non-zero exit code 1
Failing this attempt. Failing the application.
15/12/22 15:29:31 INFO mapreduce.Job: Counters: 0
Job Finished in 13.489 seconds
java.io.FileNotFoundException: File does not exist: hdfs://ac1.mycompany.com/user/clott/QuasiMonteCarlo_1450816156703_163431099/out/reduce-out
at org.apache.hadoop.hdfs.DistributedFileSystem$18.doCall(DistributedFileSystem.java:1122)
at org.apache.hadoop.hdfs.DistributedFileSystem$18.doCall(DistributedFileSystem.java:1114)
at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1114)
at org.apache.hadoop.io.SequenceFile$Reader.<init>(SequenceFile.java:1817)
at org.apache.hadoop.io.SequenceFile$Reader.<init>(SequenceFile.java:1841)
at org.apache.hadoop.examples.QuasiMonteCarlo.estimatePi(QuasiMonteCarlo.java:314)
at org.apache.hadoop.examples.QuasiMonteCarlo.run(QuasiMonteCarlo.java:354)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
at org.apache.hadoop.examples.QuasiMonteCarlo.main(QuasiMonteCarlo.java:363)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.hadoop.util.ProgramDriver$ProgramDescription.invoke(ProgramDriver.java:71)
at org.apache.hadoop.util.ProgramDriver.run(ProgramDriver.java:144)
at org.apache.hadoop.examples.ExampleDriver.main(ExampleDriver.java:74)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
at org.apache.hadoop.util.RunJar.main(RunJar.java:136)

Best Answer

Yes, Manjunath Ballur, you were right, it was a permissions problem! I finally learned how to preserve the yarn application logs, which clearly revealed the problem. Here are the steps:

  1. Edit yarn-site.xml and add a property to delay the deletion of yarn logs:

    <property>
        <name>yarn.nodemanager.delete.debug-delay-sec</name>
        <value>600</value>
    </property>
  2. Push yarn-site.xml out to all the nodes (ARGH, I had forgotten about this for ages) and restart the cluster; a sketch of the push follows.
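A hedged sketch of that push, assuming hypothetical hostnames node1..node3 and the install layout used above; adjust to your cluster:

    # copy the edited config to every node, then bounce YARN so the
    # NodeManagers pick up the new property
    for host in node1 node2 node3; do    # hypothetical hostnames
        scp yarn-site.xml "$host":/home/hadoop/hadoop-2.6.2/etc/hadoop/
    done
    /home/hadoop/hadoop-2.6.2/sbin/stop-yarn.sh
    /home/hadoop/hadoop-2.6.2/sbin/start-yarn.sh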

  3. Run the yarn example to estimate pi as shown above; it fails. Look at http://namenode:8088/cluster/apps/FAILED to see the failed applications, click the link for the most recent failure, and look at the bottom of the page to see which nodes in the cluster were used. (The same list is also available from the command line; see the sketch below.)
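As an aside, the same list of failed applications should also be available from the command line:

    yarn application -list -appStates FAILED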

  4. Open a window on one of the cluster nodes where the application failed. Find the working directory, which in my case was:

    ~hadoop/hadoop-2.6.2/logs/userlogs/application_1450815437271_0004/container_1450815437271_0004_01_000001/
  5. Looked around: I saw the files stdout (only log4j complaints), stderr (almost empty), and syslog (winner winner chicken dinner). In the syslog file I found this gem:

    2015-12-23 08:31:42,376 INFO [main] org.apache.hadoop.service.AbstractService: Service JobHistoryEventHandler failed in state INITED; cause: org.apache.hadoop.yarn.exceptions.YarnRuntimeException: org.apache.hadoop.security.AccessControlException: Permission denied: user=clott, access=EXECUTE, inode="/tmp/hadoop-yarn/staging/history":hadoop:supergroup:drwxrwx---

So the problem was the permissions on hdfs:///tmp/hadoop-yarn/staging/history: that inode is owned by hadoop:supergroup with mode drwxrwx---, so my account had no access at all. A simple chmod 777 set me right, and I'm not fighting the group perms anymore (see the sketch below). Now a non-hadoop, non-superuser account can run a yarn job.
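A minimal sketch of that fix, run as the hadoop superuser (777 is the blunt instrument that worked for me; a tighter mode or group membership would also do):

    # open up the job-history staging directory in HDFS
    hdfs dfs -chmod 777 /tmp/hadoop-yarn/staging/history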

Regarding "hadoop - Yarn MapReduce approximate-pi example fails with exit code 1 when run as a non-hadoop user", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/34424307/
