
java - getCredentials method error when trying to run the "Word Count" program in Eclipse with all JAR files imported

Reposted · Author: 可可西里 · Updated: 2023-11-01 16:15:50

Error:

Exception in thread "main" java.lang.NoSuchMethodError: org.apache.hadoop.security.UserGroupInformation.getCredentials()Lorg/apache/hadoop/security/Credentials;
    at org.apache.hadoop.mapreduce.Job.<init>(Job.java:135)
    at org.apache.hadoop.mapreduce.Job.getInstance(Job.java:176)
    at org.apache.hadoop.mapreduce.Job.getInstance(Job.java:195)
    at WordCount.main(WordCount.java:20)

Hadoop version: 2.2.0

WordCount.java

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.input.TextInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.mapreduce.lib.output.TextOutputFormat;

public class WordCount {
    public static void main(String[] args) throws Exception {
        if (args.length != 2) {
            System.out.println("usage: [input] [output]");
            System.exit(-1);
        }

        Job job = Job.getInstance(new Configuration(), "word count");
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);

        job.setMapperClass(WordMapper.class);
        job.setReducerClass(SumReducer.class);

        job.setInputFormatClass(TextInputFormat.class);
        job.setOutputFormatClass(TextOutputFormat.class);

        FileInputFormat.setInputPaths(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));

        job.setJarByClass(WordCount.class);
        job.setJobName("WordCount");

        job.submit();
    }
}

WordMapper.java

import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

public class WordMapper extends Mapper<Object, Text, Text, IntWritable> {
    private Text word = new Text();
    private final static IntWritable one = new IntWritable(1);

    @Override
    public void map(Object key, Text value, Context context)
            throws IOException, InterruptedException {
        // Break line into words for processing
        StringTokenizer wordList = new StringTokenizer(value.toString());
        while (wordList.hasMoreTokens()) {
            word.set(wordList.nextToken());
            context.write(word, one);
        }
    }
}
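The mapper's per-line logic can be exercised outside Hadoop. A minimal plain-Java sketch (illustrative only, with the Hadoop `Text`/`IntWritable` types dropped) of what one input line would emit:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.StringTokenizer;

// Sketch of the mapper's behavior for one line: tokenize on
// whitespace and emit each word paired with the count 1.
public class MapSketch {
    static List<String> mapLine(String line) {
        List<String> emitted = new ArrayList<>();
        StringTokenizer wordList = new StringTokenizer(line);
        while (wordList.hasMoreTokens()) {
            emitted.add(wordList.nextToken() + "\t1");
        }
        return emitted;
    }

    public static void main(String[] args) {
        System.out.println(mapLine("hello hadoop hello"));
    }
}
```

Note that `StringTokenizer` splits on runs of whitespace and emits no tokens for an empty line, which matches the mapper above.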

SumReducer.java

import java.io.IOException;
import java.util.Iterator;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;

public class SumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {

    private IntWritable totalWordCount = new IntWritable();

    @Override
    public void reduce(Text key, Iterable<IntWritable> values, Context context)
            throws IOException, InterruptedException {
        int wordCount = 0;
        Iterator<IntWritable> it = values.iterator();
        while (it.hasNext()) {
            wordCount += it.next().get();
        }
        totalWordCount.set(wordCount);
        context.write(key, totalWordCount);
    }
}
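Likewise, the reduce step is just a sum over the counts grouped under one key; a plain-Java sketch (illustrative, Hadoop types dropped):

```java
import java.util.List;

// Sketch of the reducer's behavior: sum all counts emitted
// by the mappers for a single key.
public class ReduceSketch {
    static int sumCounts(Iterable<Integer> values) {
        int wordCount = 0;
        for (int v : values) {
            wordCount += v;
        }
        return wordCount;
    }

    public static void main(String[] args) {
        // Three mappers emitted ("hello", 1): the reducer sees [1, 1, 1].
        System.out.println(sumCounts(List.of(1, 1, 1)));
    }
}
```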

Please tell me what I can do. The program uses the new MapReduce API, and all of the jars shipped with Hadoop 2.2.0 have been imported into Eclipse.

Thanks :)

Best Answer

Are you using Hadoop's Eclipse plugin? If not, that is the problem. Without the plugin, Eclipse simply runs the WordCount class, and Hadoop cannot find the necessary jars. Bundle all the jars, including WordCount, and run it on the cluster.
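As a side note on the NoSuchMethodError itself: it typically means the JVM resolved a class at runtime from a different (often older) jar than the one compiled against. One way to check which jar a class actually came from is a small plain-Java diagnostic (a sketch, not part of the original answer; the class name is a placeholder):

```java
import java.security.CodeSource;

// Sketch: print which jar (or directory) a class was loaded from,
// to spot a stale Hadoop jar on the classpath.
public class WhichJar {
    static String locationOf(Class<?> cls) {
        CodeSource src = cls.getProtectionDomain().getCodeSource();
        // Core JDK classes come from the bootstrap loader and have no CodeSource.
        return src == null ? "(bootstrap classloader)" : src.getLocation().toString();
    }

    public static void main(String[] args) throws Exception {
        // Substitute org.apache.hadoop.security.UserGroupInformation
        // when running with the Hadoop jars on the classpath.
        String name = args.length > 0 ? args[0] : "java.lang.String";
        System.out.println(name + " -> " + locationOf(Class.forName(name)));
    }
}
```

If the printed location turns out to be a pre-2.x Hadoop jar, removing it from the Eclipse build path should let the 2.2.0 `getCredentials()` method resolve.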

If you want to run it from Eclipse, you need the Eclipse plugin. If you don't have it, you can build the plugin by following these instructions.
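The "bundle all jars and run on the cluster" route might look like this from a shell (a command sketch, not from the original answer; the HDFS paths are illustrative assumptions, and `hadoop classpath` is assumed to be on the PATH of a Hadoop 2.2.0 install):

```shell
# Illustrative only: compile against the installed Hadoop jars,
# package the classes, and submit the job to the cluster.
mkdir -p classes
javac -classpath "$(hadoop classpath)" -d classes \
    WordCount.java WordMapper.java SumReducer.java
jar cf wordcount.jar -C classes .
hadoop jar wordcount.jar WordCount /user/hduser/input /user/hduser/output
```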

Regarding java - getCredentials method error when trying to run the "Word Count" program in Eclipse with all JAR files imported, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/21365929/
