
java - Hadoop MapReduce newbie, getting NoSuchMethodException on mapred.Reducer.<init>()

Reposted. Author: 可可西里 · Updated: 2023-11-01 14:25:08

Solution: use a better tutorial - http://hadoop.apache.org/mapreduce/docs/r0.22.0/mapred_tutorial.html

I just started using MapReduce and I've hit a strange error that I haven't been able to resolve through Google. I'm writing a basic WordCount program, but when I run it I get the following error during the Reduce phase:

java.lang.RuntimeException: java.lang.NoSuchMethodException: org.apache.hadoop.mapred.Reducer.<init>()
at org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:115)
at org.apache.hadoop.mapred.ReduceTask.runOldReducer(ReduceTask.java:485)
at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:420)
at org.apache.hadoop.mapred.Child$4.run(Child.java:255)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:396)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1121)
at org.apache.hadoop.mapred.Child.main(Child.java:249)

The WordCount program is the one from the Apache MapReduce tutorial. I'm running Hadoop 1.0.3 in pseudo-distributed mode on Mountain Lion, and I believe that side of things is working fine, since the bundled examples all run normally. Any ideas?

Edit: Here is my code:

package mrt;

import java.io.IOException;
import java.util.*;

import org.apache.hadoop.fs.Path;
import org.apache.hadoop.conf.*;
import org.apache.hadoop.io.*;
import org.apache.hadoop.mapred.*;
import org.apache.hadoop.util.*;

public class WordCount {

    public static class Map extends MapReduceBase implements Mapper<LongWritable, Text, Text, IntWritable> {

        private final static IntWritable one = new IntWritable(1);
        private Text word = new Text();

        public void map(LongWritable key, Text value, OutputCollector<Text, IntWritable> output, Reporter reporter)
                throws IOException {

            String line = value.toString();
            StringTokenizer tokenizer = new StringTokenizer(line);
            while (tokenizer.hasMoreTokens()) {
                word.set(tokenizer.nextToken());
                output.collect(word, one);
            }
        }
    }

    public static class Reduce extends MapReduceBase implements Reducer<Text, IntWritable, Text, IntWritable> {

        public void reduce(Text key, Iterator<IntWritable> values, OutputCollector<Text, IntWritable> output, Reporter reporter)
                throws IOException {

            int sum = 0;

            while (values.hasNext()) {
                sum += values.next().get();
            }

            output.collect(key, new IntWritable(sum));
        }
    }

    public static void main(String[] args) throws Exception {

        JobConf conf = new JobConf(WordCount.class);
        conf.setJobName("Wordcount");

        conf.setOutputKeyClass(Text.class);
        conf.setOutputValueClass(IntWritable.class);

        conf.setMapperClass(Map.class);
        conf.setCombinerClass(Reduce.class);
        conf.setReducerClass(Reducer.class);

        conf.setInputFormat(TextInputFormat.class);
        conf.setOutputFormat(TextOutputFormat.class);

        FileInputFormat.setInputPaths(conf, new Path(args[0]));
        FileOutputFormat.setOutputPath(conf, new Path(args[1]));

        JobClient.runJob(conf);
    }
}

Best Answer

The problem isn't your choice of API. Both the stable (mapred.*) and the evolving (mapreduce.*) APIs are fully supported, and the framework itself is tested against both to ensure there are no regressions/breakage across releases.

The problem is this line:

conf.setReducerClass(Reducer.class);

You are passing the Reducer interface itself as the reducer, when you should be passing your implementation of that interface. Changing it to:

conf.setReducerClass(Reduce.class);

will fix it.
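The exception in the stack trace comes from Hadoop instantiating the configured reducer class via reflection (ReflectionUtils.newInstance): an interface declares no constructors at all, so looking up its no-argument constructor <init>() throws NoSuchMethodException. A minimal sketch of that failure mode, using hypothetical stand-in types (MyReducer, MyReduceImpl) instead of the real Hadoop classes:

```java
import java.lang.reflect.Constructor;

public class ReflectionDemo {

    // Stand-in for the org.apache.hadoop.mapred.Reducer interface (hypothetical).
    interface MyReducer {}

    // Stand-in for a concrete Reduce implementation (hypothetical).
    static class MyReduceImpl implements MyReducer {
        public MyReduceImpl() {}
    }

    // Mirrors what Hadoop's ReflectionUtils.newInstance does: look up the
    // no-argument constructor and invoke it.
    static boolean canInstantiate(Class<?> clazz) {
        try {
            Constructor<?> c = clazz.getDeclaredConstructor(); // interfaces have no constructors
            c.newInstance();
            return true;
        } catch (ReflectiveOperationException e) {
            return false; // NoSuchMethodException for an interface
        }
    }

    public static void main(String[] args) {
        // Passing the interface fails, just like conf.setReducerClass(Reducer.class):
        System.out.println("interface: " + canInstantiate(MyReducer.class));    // false
        System.out.println("class:     " + canInstantiate(MyReduceImpl.class)); // true
    }
}
```

The same reasoning explains why conf.setCombinerClass(Reduce.class) in the question worked fine: Reduce is a concrete class with a default constructor, so reflection can instantiate it.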

Regarding "java - Hadoop MapReduce newbie, getting NoSuchMethodException on mapred.Reducer.<init>()", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/11961517/
