I followed this tutorial to build Apache Hadoop in a Windows 7 environment. Long story short: I can compile Hadoop with mvn compile
and package it with mvn package -DskipTests,
but mvn package -Pdist,native-win -DskipTests -Dtar fails.
I get I/O exceptions that I cannot resolve. When I build Hadoop without the -Dtar
argument, I do not get these exceptions.
Can someone help me resolve them?
[INFO] Executing tasks
main:
[get] Destination already exists (skipping): C:\hadoop\hadoop-hdfs-project\hadoop-hdfs-httpfs\downloads\tomcat.tar.gz
[mkdir] Created dir: C:\hadoop\hadoop-hdfs-project\hadoop-hdfs-httpfs\target\tomcat.exp
[exec] tar (child): C\:hadoophadoop-hdfs-projecthadoop-hdfs-httpfs/downloads/tomcat.tar.gz: Cannot open: I/O error
[exec] tar (child): Error is not recoverable: exiting now
[exec]
[exec] gzip: stdin: unexpected end of file
[exec] tar: Child returned status 2
[exec] tar: Error exit delayed from previous errors
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO]
[INFO] Apache Hadoop Main ................................ SUCCESS [ 1.018 s]
[INFO] Apache Hadoop Project POM ......................... SUCCESS [ 1.653 s]
[INFO] Apache Hadoop Annotations ......................... SUCCESS [ 2.181 s]
[INFO] Apache Hadoop Assemblies .......................... SUCCESS [ 0.200 s]
[INFO] Apache Hadoop Project Dist POM .................... SUCCESS [ 2.889 s]
[INFO] Apache Hadoop Auth ................................ SUCCESS [ 1.957 s]
[INFO] Apache Hadoop Auth Examples ....................... SUCCESS [ 1.570 s]
[INFO] Apache Hadoop Common .............................. SUCCESS [ 50.085 s]
[INFO] Apache Hadoop Common Project ...................... SUCCESS [ 0.090 s]
[INFO] Apache Hadoop HDFS ................................ SUCCESS [ 35.510 s]
[INFO] Apache Hadoop HttpFS .............................. FAILURE [ 5.155 s]
[INFO] Apache Hadoop HDFS Project ........................ SKIPPED
[INFO] hadoop-yarn ....................................... SKIPPED
[INFO] hadoop-yarn-api ................................... SKIPPED
[INFO] hadoop-yarn-common ................................ SKIPPED
[INFO] hadoop-yarn-server ................................ SKIPPED
[INFO] hadoop-yarn-server-common ......................... SKIPPED
[INFO] hadoop-yarn-server-nodemanager .................... SKIPPED
[INFO] hadoop-yarn-server-web-proxy ...................... SKIPPED
[INFO] hadoop-yarn-server-resourcemanager ................ SKIPPED
[INFO] hadoop-yarn-server-tests .......................... SKIPPED
[INFO] hadoop-yarn-client ................................ SKIPPED
[INFO] hadoop-mapreduce-client ........................... SKIPPED
[INFO] hadoop-mapreduce-client-core ...................... SKIPPED
[INFO] hadoop-yarn-applications .......................... SKIPPED
[INFO] hadoop-yarn-applications-distributedshell ......... SKIPPED
[INFO] hadoop-yarn-site .................................. SKIPPED
[INFO] hadoop-yarn-project ............................... SKIPPED
[INFO] hadoop-mapreduce-client-common .................... SKIPPED
[INFO] hadoop-mapreduce-client-shuffle ................... SKIPPED
[INFO] hadoop-mapreduce-client-app ....................... SKIPPED
[INFO] hadoop-mapreduce-client-hs ........................ SKIPPED
[INFO] hadoop-mapreduce-client-jobclient ................. SKIPPED
[INFO] hadoop-mapreduce-client-hs-plugins ................ SKIPPED
[INFO] Apache Hadoop MapReduce Examples .................. SKIPPED
[INFO] hadoop-mapreduce .................................. SKIPPED
[INFO] Apache Hadoop MapReduce Streaming ................. SKIPPED
[INFO] Apache Hadoop Distributed Copy .................... SKIPPED
[INFO] Apache Hadoop Archives ............................ SKIPPED
[INFO] Apache Hadoop Rumen ............................... SKIPPED
[INFO] Apache Hadoop Gridmix ............................. SKIPPED
[INFO] Apache Hadoop Data Join ........................... SKIPPED
[INFO] Apache Hadoop Extras .............................. SKIPPED
[INFO] Apache Hadoop Pipes ............................... SKIPPED
[INFO] Apache Hadoop Tools Dist .......................... SKIPPED
[INFO] Apache Hadoop Tools ............................... SKIPPED
[INFO] Apache Hadoop Distribution ........................ SKIPPED
[INFO] Apache Hadoop Client .............................. SKIPPED
[INFO] Apache Hadoop Mini-Cluster ........................ SKIPPED
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 01:43 min
[INFO] Finished at: 2014-05-19T11:24:25+00:00
[INFO] Final Memory: 49M/179M
[INFO] ------------------------------------------------------------------------
[WARNING] The requested profile "native-win" could not be activated because it does not exist.
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-antrun-plugin:1.6:run (dist) on project hadoop-hdfs-httpfs: An Ant BuildException has occured: exec returned: 2 -> [Help 1]
[ERROR]
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR]
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoExecutionException
[ERROR]
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR] mvn <goals> -rf :hadoop-hdfs-httpfs
c:\hadoop>
Best Answer
If you are using a more recent version of Hadoop, i.e. Hadoop 2.6, 2.7, or 2.8, you do not need to build Hadoop-src to get a native Windows Hadoop. Here is a GitHub link that contains winutils for the latest versions of Hadoop.
I ran into similar problems while building Hadoop-src with Maven, and these steps worked for me.
Download & install Java in c:/java/
(make sure the path is like this; if Java is installed in Program Files, hadoop-env.cmd will not recognize the Java path)
Download the Hadoop binary distribution.
(I am using binary distribution Hadoop-2.8.1)
Set the environment variables:
JAVA_HOME = "c:\Java"
HADOOP_HOME = "<your hadoop home>"
Path = %Path%;%JAVA_HOME%\bin;%HADOOP_HOME%\bin
Hadoop will work on Windows if Hadoop-src is built with Maven on your Windows machine. Building Hadoop-src (the distribution) creates a Hadoop binary distribution that works as a native Windows version.
But if you don't want to do that, download the pre-built winutils for your Hadoop distribution.
Here is a GitHub link that contains winutils for certain versions of Hadoop.
(if the version you are using is not in the list, follow the conventional method for setting up Hadoop on Windows - link)
If you find your version, copy and paste all the contents of the folder into the path <HADOOP_HOME>/bin/, for example as sketched below.
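A minimal sketch of that copy step from cmd (C:\winutils\hadoop-2.8.1 is a hypothetical download location; use whichever version folder you actually downloaded):
:: /E copies all subdirectories, /Y suppresses overwrite prompts
xcopy /E /Y C:\winutils\hadoop-2.8.1\bin\* %HADOOP_HOME%\bin\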
Set all the .xml configuration files - Link - and set the JAVA_HOME path in the hadoop-env.cmd file; a rough configuration sketch follows.
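As an illustration only (not the linked guide's exact settings), a single-node setup usually needs at least a default filesystem URI in core-site.xml and storage directories in hdfs-site.xml; the port and local paths below are placeholder assumptions:
<!-- core-site.xml: default filesystem URI -->
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://localhost:9000</value>
  </property>
</configuration>
<!-- hdfs-site.xml: single-node replication and local storage dirs -->
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
  <property>
    <name>dfs.namenode.name.dir</name>
    <value>file:///C:/hadoop/data/namenode</value>
  </property>
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>file:///C:/hadoop/data/datanode</value>
  </property>
</configuration>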
From cmd, run:
<HADOOP_HOME>/bin/> hdfs namenode -format
<HADOOP_HOME>/sbin> start-all.cmd
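If the daemons start cleanly, the running Java processes can be checked with jps (which ships with the JDK); on a typical single-node setup it should list NameNode, DataNode, ResourceManager, and NodeManager:
<HADOOP_HOME>/sbin> jps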
Hope this helps.
Regarding hadoop - Building Hadoop on Windows 7, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/23735504/