
hadoop - Command to run the built-in "compute pi" Hadoop job

Reposted · Author: 行者123 · Updated: 2023-12-02 20:36:31

I am trying to install Open edX Insights on an Azure instance. The LMS and Insights run on the same box. As part of the installation I have installed Hadoop, Hive, etc. via the yml script. The next step is to test the Hadoop installation, and the documentation asks me to compute the value of pi. For that it gives the following command:

hadoop jar hadoop*/share/hadoop/mapreduce/hadoop-mapreduce-examples*.jar pi 2 100

But when I run this command, it fails with the following error:
hadoop@MillionEdx:~$ hadoop jar hadoop*/share/hadoop/mapreduce/hadoop-mapreduce-examples*.jar pi 2 100
Unknown program 'hadoop/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.2.jar' chosen.

Valid program names are:
aggregatewordcount: An Aggregate based map/reduce program that counts the words in the input files.
aggregatewordhist: An Aggregate based map/reduce program that computes the histogram of the words in the input files.
bbp: A map/reduce program that uses Bailey-Borwein-Plouffe to compute exact digits of Pi.
dbcount: An example job that count the pageview counts from a database.
distbbp: A map/reduce program that uses a BBP-type formula to compute exact bits of Pi.
grep: A map/reduce program that counts the matches of a regex in the input.
join: A job that effects a join over sorted, equally partitioned datasets
multifilewc: A job that counts words from several files.
pentomino: A map/reduce tile laying program to find solutions to pentomino problems.
pi: A map/reduce program that estimates Pi using a quasi-Monte Carlo method.
randomtextwriter: A map/reduce program that writes 10GB of random textual data per node.
randomwriter: A map/reduce program that writes 10GB of random data per node.
secondarysort: An example defining a secondary sort to the reduce.
sort: A map/reduce program that sorts the data written by the random writer.
sudoku: A sudoku solver.
teragen: Generate data for the terasort
terasort: Run the terasort
teravalidate: Checking results of terasort
wordcount: A map/reduce program that counts the words in the input files.
wordmean: A map/reduce program that counts the average length of the words in the input files.
wordmedian: A map/reduce program that counts the median length of the words in the input files.
wordstandarddeviation: A map/reduce program that counts the standard deviation of the length of the words in the input files.
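For context on what the `pi` program in the list above computes: it estimates π by sampling points in the unit square and counting how many land inside the quarter circle. The sketch below is a plain pseudo-random, single-machine version of that idea in awk; note the actual Hadoop example uses a quasi-Monte Carlo sequence and distributes the sampling across map tasks.

```shell
# Single-machine Monte Carlo estimate of pi, conceptually what the example
# job distributes across mappers: sample random points in the unit square
# and count how many fall inside the quarter circle of radius 1.
awk 'BEGIN {
  srand()
  n = 100000; inside = 0
  for (i = 0; i < n; i++) {
    x = rand(); y = rand()
    if (x*x + y*y <= 1) inside++
  }
  printf "pi ~ %.4f\n", 4 * inside / n
}'
```

With 100,000 samples the estimate typically lands within a few hundredths of 3.1416; the Hadoop job's `pi 2 100` arguments mean 2 map tasks with 100 samples each, which is why its estimate is coarse.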

I have tried many things, such as giving the fully qualified program name for pi, but I never got the value of pi. Please suggest a workaround.
Thanks in advance.

Best Answer

You need to check which Java version is compatible with Hadoop MapReduce 2.7.2.

This may be because the wrong jar file is being picked up for your Java version.
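As an aside beyond the answer above (my own reading of the error text, not something the answer states): the message `Unknown program '...hadoop-mapreduce-examples-2.7.2.jar' chosen` shows the driver received a jar *path* where the program name ("pi") should be. That happens when the shell glob `hadoop-mapreduce-examples*.jar` expands to more than one file, e.g. when a `-sources.jar` sits next to the main jar. The scratch-directory demonstration below reproduces the expansion, with `echo` standing in for `hadoop jar` (the directory layout and file names are assumptions for illustration):

```shell
# Reproduce the multi-match glob in a throwaway directory.
tmp=$(mktemp -d)
mkdir -p "$tmp/hadoop/share/hadoop/mapreduce"
touch "$tmp/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.2-sources.jar" \
      "$tmp/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.2.jar"
cd "$tmp"

# The pattern now expands to TWO arguments before hadoop ever runs:
# the first match is taken as the jar, and the second match lands in the
# program-name slot where "pi" was expected.
echo hadoop jar hadoop*/share/hadoop/mapreduce/hadoop-mapreduce-examples*.jar pi 2 100
```

If `ls hadoop*/share/hadoop/mapreduce/hadoop-mapreduce-examples*.jar` lists more than one file on your box, passing the one explicit jar path instead of the glob should avoid the error.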

Regarding "hadoop - Command to run the built-in 'compute pi' Hadoop job", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/50965931/
