
hadoop - Running Hadoop from Maven-generated sources


I am trying to make some changes to the Hadoop framework, but I am having trouble setting up my development environment. I cloned Hadoop from git and generated all the Java projects with Maven for import into Eclipse, as described in EclipseEnvironment. After importing all the projects into Eclipse, I created a plain Java project that is supposed to run a job on Hadoop, and I added two project dependencies to its build path, hadoop-common and hadoop-mapreduce-client-core; all dependencies resolve.
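The driver is a WordCount-style job. Stripped down to the calls that appear in the trace below (the mapper/reducer wiring and real paths are left out of this sketch, and the line numbers will not match), it does roughly this:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCount {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();

        // Touching the local file system is what triggers the static initializers that
        // log "Failed to detect a valid hadoop home directory"
        // (the FileSystem.getLocal call in the stack trace).
        FileSystem localFs = FileSystem.getLocal(conf);
        Path out = new Path("/tmp/wordcount-out");   // placeholder output path
        localFs.delete(out, true);                   // clean up output from earlier runs

        Job job = Job.getInstance(conf, "word count");
        job.setJarByClass(WordCount.class);
        // job.setMapperClass(...) / job.setReducerClass(...) omitted from this sketch
        FileInputFormat.addInputPath(job, new Path("/tmp/wordcount-in")); // placeholder input path
        FileOutputFormat.setOutputPath(job, out);

        // The submission below is where "Cannot initialize Cluster" is thrown
        // (the Job.waitForCompletion call in the stack trace).
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}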

When I run the project, I get the following error:

2013-05-23 12:58:01,531 ERROR util.Shell (Shell.java:checkHadoopHome(230)) - Failed to     detect a valid hadoop home directory
java.io.IOException: HADOOP_HOME or hadoop.home.dir are not set.
at org.apache.hadoop.util.Shell.checkHadoopHome(Shell.java:213)
at org.apache.hadoop.util.Shell.<clinit>(Shell.java:236)
at org.apache.hadoop.util.PlatformName.<clinit>(PlatformName.java:36)
at org.apache.hadoop.security.UserGroupInformation.getOSLoginModuleName(UserGroupInformation.java:314)
at org.apache.hadoop.security.UserGroupInformation.<clinit>(UserGroupInformation.java:359)
at org.apache.hadoop.fs.FileSystem$Cache$Key.<init>(FileSystem.java:2512)
at org.apache.hadoop.fs.FileSystem$Cache$Key.<init>(FileSystem.java:2504)
at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2371)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:352)
at org.apache.hadoop.fs.FileSystem.getLocal(FileSystem.java:323)
at WordCount.main(WordCount.java:86)
2013-05-23 12:58:01,546 INFO util.Shell (Shell.java:isSetsidSupported(311)) - setsid exited with exit code 0
2013-05-23 12:58:01,730 WARN util.NativeCodeLoader (NativeCodeLoader.java:<clinit>(62)) - Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
2013-05-23 12:58:02,065 ERROR security.UserGroupInformation (UserGroupInformation.java:doAs(1492)) - PriviledgedActionException as:elma (auth:SIMPLE) cause:java.io.IOException: Cannot initialize Cluster. Please check your configuration for mapreduce.framework.name and the correspond server addresses.
Exception in thread "main" java.io.IOException: Cannot initialize Cluster. Please check your configuration for mapreduce.framework.name and the correspond server addresses.
at org.apache.hadoop.mapreduce.Cluster.initialize(Cluster.java:119)
at org.apache.hadoop.mapreduce.Cluster.<init>(Cluster.java:81)
at org.apache.hadoop.mapreduce.Cluster.<init>(Cluster.java:74)
at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1229)
at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:416)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1489)
at org.apache.hadoop.mapreduce.Job.connect(Job.java:1225)
at org.apache.hadoop.mapreduce.Job.submit(Job.java:1253)
at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1277)
at WordCount.main(WordCount.java:100)

So how can I get my new Java project to run against the Hadoop source projects I have in Eclipse?

Best answer

Since your question is "how can I run Hadoop from the Maven-generated sources?", I assume you are already able to run vanilla Hadoop successfully. If so, you only need to take the jars you build yourself (through Eclipse, or with Maven on the command line) and swap them in for the jars of a vanilla Hadoop distribution of the same version. That should do the trick and save you the time of dealing with configuration issues.
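As a side note, if you would rather launch the job straight from Eclipse against the local runner instead, the two errors in your log point at settings you can supply from the driver itself. The following is only a sketch under assumptions: the property names come from the error messages, the path is a placeholder, and the local runner is typically only found when hadoop-mapreduce-client-common and hadoop-mapreduce-client-jobclient are also on the build path, not just hadoop-mapreduce-client-core.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapreduce.Job;

public class LocalJobSetup {
    // Builds a Job configured to run in a single JVM, without contacting a cluster.
    public static Job newLocalJob() throws Exception {
        // Silences "HADOOP_HOME or hadoop.home.dir are not set";
        // point it at any Hadoop install or build tree on this machine (placeholder path).
        System.setProperty("hadoop.home.dir", "/path/to/hadoop");

        Configuration conf = new Configuration();
        // Use the local job runner and the local file system, so no
        // cluster address has to be resolved ("Cannot initialize Cluster").
        conf.set("mapreduce.framework.name", "local");
        conf.set("fs.defaultFS", "file:///");

        return Job.getInstance(conf, "word count (local)");
    }
}

The returned Job can then be given its mapper, reducer, and input/output paths as usual.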

For "hadoop - Running Hadoop from Maven-generated sources", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/16712665/
