
java - Error - Hadoop word count program in MapReduce

Reposted. Author: 行者123. Updated: 2023-12-02 21:40:46

Forgive me if this seems like a silly question; I am new to Hadoop.

I am running the MapReduce program below and getting the following error:

java.lang.Exception: java.io.IOException: Type mismatch in key from map: expected org.apache.hadoop.io.Text, received org.apache.hadoop.io.LongWritable
    at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:354)
Caused by: java.io.IOException: Type mismatch in key from map: expected org.apache.hadoop.io.Text, received org.apache.hadoop.io.LongWritable
    at org.apache.hadoop.mapred.MapTask$MapOutputBuffer.collect(MapTask.java:1019)

Any help is appreciated.

public class WordCount {

    // Mapper class
    public static class MapperClass extends Mapper<Object, Text, Text, IntWritable> {

        private final static IntWritable one = new IntWritable(1);
        private Text word = new Text();

        // Mapper method defined
        public void mapperMethod(Object key, Text lineContent, Context context) {
            try {
                StringTokenizer strToken = new StringTokenizer(lineContent.toString());
                // Iterating through the line
                while (strToken.hasMoreTokens()) {
                    word.set(strToken.nextToken());
                    try {
                        context.write(word, one);
                    } catch (Exception e) {
                        System.err.println(new Date() + " ---> Cannot write data to hadoop in Mapper.");
                        e.printStackTrace();
                    }
                }
            } catch (Exception ex) {
                ex.printStackTrace();
            }
        }
    }
    // Reducer class
    public static class ReducerClass extends Reducer<Text, IntWritable, Text, IntWritable> {

        private IntWritable result = new IntWritable();

        // Reducer method
        public void reduce(Text key, Iterable<IntWritable> values, Context context) {
            try {
                int sum = 0;
                for (IntWritable itr : values) {
                    sum += itr.get();
                }
                result.set(sum);
                try {
                    context.write(key, result);
                } catch (Exception e) {
                    System.err.println(new Date() + " ---> Error while sending data to Hadoop in Reducer");
                    e.printStackTrace();
                }
            } catch (Exception err) {
                err.printStackTrace();
            }
        }
    }


    public static void main(String[] args) throws IOException, ClassNotFoundException, InterruptedException {
        try {
            Configuration conf = new Configuration();
            String[] arguments = new GenericOptionsParser(conf, args).getRemainingArgs();
            if (arguments.length != 2) {
                System.err.println("Enter both an input and an output location.");
                System.exit(1);
            }
            Job job = new Job(conf, "Simple Word Count");

            job.setJarByClass(WordCount.class);
            job.setMapperClass(MapperClass.class);
            job.setReducerClass(ReducerClass.class);

            job.setOutputKeyClass(Text.class);
            job.setOutputValueClass(IntWritable.class);

            FileInputFormat.addInputPath(job, new Path(arguments[0]));
            FileOutputFormat.setOutputPath(job, new Path(arguments[1]));

            System.exit(job.waitForCompletion(true) ? 0 : 1);
        } catch (Exception e) {
        }
    }

}

Best Answer

You need to override the map method in the Mapper class rather than defining a new method.
You are getting this error because your program never overrides map, so the default identity mapper runs and the job effectively becomes reduce-only. The reducer then receives LongWritable, Text as input (the identity mapper's output), but you declared Text, IntWritable as its input types.

Hope this explains it.

public static class Map extends MapReduceBase implements Mapper<LongWritable, Text, Text, IntWritable> {
    private final static IntWritable one = new IntWritable(1);
    private Text word = new Text();

    public void map(LongWritable key, Text value, OutputCollector<Text, IntWritable> output, Reporter reporter) throws IOException {
        String line = value.toString();
        StringTokenizer tokenizer = new StringTokenizer(line);
        while (tokenizer.hasMoreTokens()) {
            word.set(tokenizer.nextToken());
            output.collect(word, one);
        }
    }
}
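
The snippet above uses the older org.apache.hadoop.mapred API (MapReduceBase, OutputCollector, Reporter), while the question's code uses the newer org.apache.hadoop.mapreduce API. For completeness, here is a minimal sketch of the same fix in the newer API, keeping the asker's class and field names and assuming the same imports as the question's code; the only real change is that the method is named map so it actually overrides Mapper.map:

// A minimal sketch of the fix in the new org.apache.hadoop.mapreduce API
// (assumes the question's imports: Mapper, Text, IntWritable, StringTokenizer, IOException).
public static class MapperClass extends Mapper<Object, Text, Text, IntWritable> {

    private final static IntWritable one = new IntWritable(1);
    private Text word = new Text();

    // Named map (not mapperMethod) so it overrides Mapper.map; @Override makes the
    // compiler reject any accidental signature mismatch.
    @Override
    public void map(Object key, Text lineContent, Context context)
            throws IOException, InterruptedException {
        StringTokenizer strToken = new StringTokenizer(lineContent.toString());
        while (strToken.hasMoreTokens()) {
            word.set(strToken.nextToken());
            context.write(word, one); // emits (word, 1) with the declared Text/IntWritable types
        }
    }
}

With this change the driver in main can stay as-is: the mapper now emits the Text/IntWritable pairs declared via setOutputKeyClass and setOutputValueClass, instead of falling back to the identity mapper's LongWritable keys.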

For java - Error - Hadoop word count program in MapReduce, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/28932315/
