I am building apache/hadoop 3.0.0 with the `mvn clean compile` command from the Windows SDK 7.1 command prompt, but I run into the following problem partway through the build. How can I fix it? Thank you very much.
[INFO] ------------------------------------------------------------------------
[INFO] Building Apache Hadoop Common 3.0.0-SNAPSHOT
[INFO] ------------------------------------------------------------------------
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO]
[INFO] Apache Hadoop Main ................................. SUCCESS [ 0.504 s]
[INFO] Apache Hadoop Build Tools .......................... SUCCESS [ 0.584 s]
[INFO] Apache Hadoop Project POM .......................... SUCCESS [ 0.627 s]
[INFO] Apache Hadoop Annotations .......................... SUCCESS [ 2.683 s]
[INFO] Apache Hadoop Project Dist POM ..................... SUCCESS [ 0.112 s]
[INFO] Apache Hadoop Assemblies ........................... SUCCESS [ 0.188 s]
[INFO] Apache Hadoop Maven Plugins ........................ SUCCESS [ 1.092 s]
[INFO] Apache Hadoop MiniKDC .............................. SUCCESS [ 1.873 s]
[INFO] Apache Hadoop Auth ................................. SUCCESS [ 1.554 s]
[INFO] Apache Hadoop Auth Examples ........................ SUCCESS [ 0.400 s]
[INFO] Apache Hadoop Common ............................... FAILURE [ 0.021 s]
[INFO] Apache Hadoop NFS .................................. SKIPPED
[INFO] Apache Hadoop KMS .................................. SKIPPED
[INFO] Apache Hadoop Common Project ....................... SKIPPED
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 12.969 s
[INFO] Finished at: 2015-07-14T10:50:53+08:00
[INFO] Final Memory: 56M/269M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to parse plugin descriptor for org.apache.hadoop:hadoop-maven-plugins:3.0.0-SNAPSHOT (C:\Users\mingleiz\mygit\hadoop\hadoop-maven-plugins\target\classes): No plugin descriptor found at META-INF/maven/plugin.xml -> [Help 1]
[ERROR]
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR]
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/PluginDescriptorParsingException
Accepted answer
I have now solved this problem. Here is my solution:
1: set Platform=x64 (when building on a 64-bit system)
2: VS 2010 Professional with SP1 must be installed.
OK, here is the end of my build output.
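The two steps above can be sketched as commands in the Windows SDK 7.1 command prompt. This is only a sketch: the `-Pdist,native-win`, `-DskipTests`, and `-Dtar` options are the ones commonly used for Hadoop's Windows native build, not necessarily what was run here, so adjust them to your setup.

```shell
:: Run inside the Windows SDK 7.1 command prompt.

:: Step 1: tell MSBuild to produce 64-bit native binaries.
set Platform=x64

:: Step 2 is an install-time prerequisite: VS 2010 Professional with SP1
:: must already be installed on the machine; there is nothing to script.

:: Rebuild from the source root (options are assumptions, see above).
mvn clean package -Pdist,native-win -DskipTests -Dtar
```

Note that `Platform` is case-sensitive for MSBuild, and it must be set in the same command-prompt session from which Maven is launched.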
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO]
[INFO] Apache Hadoop Main ................................. SUCCESS [ 2.794 s]
[INFO] Apache Hadoop Build Tools .......................... SUCCESS [ 1.582 s]
[INFO] Apache Hadoop Project POM .......................... SUCCESS [ 1.160 s]
[INFO] Apache Hadoop Annotations .......................... SUCCESS [ 2.250 s]
[INFO] Apache Hadoop Project Dist POM ..................... SUCCESS [ 0.629 s]
[INFO] Apache Hadoop Assemblies ........................... SUCCESS [ 0.349 s]
[INFO] Apache Hadoop Maven Plugins ........................ SUCCESS [ 2.902 s]
[INFO] Apache Hadoop MiniKDC .............................. SUCCESS [ 2.578 s]
[INFO] Apache Hadoop Auth ................................. SUCCESS [ 5.007 s]
[INFO] Apache Hadoop Auth Examples ........................ SUCCESS [ 2.580 s]
[INFO] Apache Hadoop Common ............................... SUCCESS [01:44 min]
[INFO] Apache Hadoop NFS .................................. SUCCESS [ 3.074 s]
[INFO] Apache Hadoop KMS .................................. SUCCESS [ 22.619 s]
[INFO] Apache Hadoop Common Project ....................... SUCCESS [ 0.071 s]
[INFO] Apache Hadoop HDFS Client .......................... SUCCESS [ 18.156 s]
[INFO] Apache Hadoop HDFS ................................. SUCCESS [01:50 min]
[INFO] Apache Hadoop HttpFS ............................... SUCCESS [01:09 min]
[INFO] Apache Hadoop HDFS BookKeeper Journal .............. SUCCESS [ 37.003 s]
[INFO] Apache Hadoop HDFS-NFS ............................. SUCCESS [ 1.932 s]
[INFO] Apache Hadoop HDFS Project ......................... SUCCESS [ 0.071 s]
[INFO] Apache Hadoop YARN ................................. SUCCESS [ 0.076 s]
[INFO] Apache Hadoop YARN API ............................. SUCCESS [ 9.896 s]
[INFO] Apache Hadoop YARN Common .......................... SUCCESS [01:24 min]
[INFO] Apache Hadoop YARN Server .......................... SUCCESS [ 0.088 s]
[INFO] Apache Hadoop YARN Server Common ................... SUCCESS [ 4.608 s]
[INFO] Apache Hadoop YARN NodeManager ..................... SUCCESS [ 8.123 s]
[INFO] Apache Hadoop YARN Web Proxy ....................... SUCCESS [ 1.975 s]
[INFO] Apache Hadoop YARN ApplicationHistoryService ....... SUCCESS [ 14.526 s]
[INFO] Apache Hadoop YARN ResourceManager ................. SUCCESS [ 15.525 s]
[INFO] Apache Hadoop YARN Server Tests .................... SUCCESS [ 1.688 s]
[INFO] Apache Hadoop YARN Client .......................... SUCCESS [ 3.273 s]
[INFO] Apache Hadoop YARN SharedCacheManager .............. SUCCESS [ 1.601 s]
[INFO] Apache Hadoop YARN Applications .................... SUCCESS [ 0.058 s]
[INFO] Apache Hadoop YARN DistributedShell ................ SUCCESS [ 1.453 s]
[INFO] Apache Hadoop YARN Unmanaged Am Launcher ........... SUCCESS [ 0.789 s]
[INFO] Apache Hadoop YARN Site ............................ SUCCESS [ 0.074 s]
[INFO] Apache Hadoop YARN Registry ........................ SUCCESS [ 2.915 s]
[INFO] Apache Hadoop YARN Project ......................... SUCCESS [ 0.170 s]
[INFO] Apache Hadoop MapReduce Client ..................... SUCCESS [ 0.117 s]
[INFO] Apache Hadoop MapReduce Core ....................... SUCCESS [ 11.576 s]
[INFO] Apache Hadoop MapReduce Common ..................... SUCCESS [ 6.131 s]
[INFO] Apache Hadoop MapReduce Shuffle .................... SUCCESS [ 1.574 s]
[INFO] Apache Hadoop MapReduce App ........................ SUCCESS [ 8.106 s]
[INFO] Apache Hadoop MapReduce HistoryServer .............. SUCCESS [ 3.868 s]
[INFO] Apache Hadoop MapReduce JobClient .................. SUCCESS [ 16.826 s]
[INFO] Apache Hadoop MapReduce HistoryServer Plugins ...... SUCCESS [ 0.649 s]
[INFO] Apache Hadoop MapReduce NativeTask ................. SUCCESS [ 2.308 s]
[INFO] Apache Hadoop MapReduce Examples ................... SUCCESS [ 3.706 s]
[INFO] Apache Hadoop MapReduce ............................ SUCCESS [ 0.145 s]
[INFO] Apache Hadoop MapReduce Streaming .................. SUCCESS [ 7.313 s]
[INFO] Apache Hadoop Distributed Copy ..................... SUCCESS [ 30.467 s]
[INFO] Apache Hadoop Archives ............................. SUCCESS [ 0.925 s]
[INFO] Apache Hadoop Rumen ................................ SUCCESS [ 4.658 s]
[INFO] Apache Hadoop Gridmix .............................. SUCCESS [ 3.211 s]
[INFO] Apache Hadoop Data Join ............................ SUCCESS [ 1.279 s]
[INFO] Apache Hadoop Ant Tasks ............................ SUCCESS [ 0.441 s]
[INFO] Apache Hadoop Extras ............................... SUCCESS [ 1.121 s]
[INFO] Apache Hadoop Pipes ................................ SUCCESS [ 0.063 s]
[INFO] Apache Hadoop OpenStack support .................... SUCCESS [ 1.489 s]
[INFO] Apache Hadoop Amazon Web Services support .......... SUCCESS [ 41.175 s]
[INFO] Apache Hadoop Azure support ........................ SUCCESS [ 10.494 s]
[INFO] Apache Hadoop Client ............................... SUCCESS [ 0.321 s]
[INFO] Apache Hadoop Mini-Cluster ......................... SUCCESS [ 0.165 s]
[INFO] Apache Hadoop Scheduler Load Simulator ............. SUCCESS [ 2.467 s]
[INFO] Apache Hadoop Tools Dist ........................... SUCCESS [ 0.356 s]
[INFO] Apache Hadoop Tools ................................ SUCCESS [ 0.051 s]
[INFO] Apache Hadoop Distribution ......................... SUCCESS [ 0.109 s]
[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 11:47 min
[INFO] Finished at: 2015-07-14T18:09:24+08:00
[INFO] Final Memory: 227M/583M
[INFO] ------------------------------------------------------------------------
Regarding "java - Problems building hadoop 3.0.0-SNAPSHOT on Windows 7 64-bit with Java 8 and Maven 3.3.3", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/31396991/