
Hadoop WordCount gives a count of 0 for every word


I'm running into a problem with the WordCount program in Hadoop. The word counts are wrong: every word shows a count of 0, yet all of the distinct words do appear in the output.

Here is my sample data, already loaded into HDFS:

# filename: file01.txt
Hello World Bye World

# filename: file02.txt
Hello Hadoop Bye Hadoop
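
For reference, a correct WordCount run over these two files should produce a count of 2 for each word:

Bye 2
Hadoop 2
Hello 2
World 2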

Here is the source:

import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapred.*;
import java.io.IOException;
import java.util.*;
import org.apache.hadoop.io.*;

public class WordCount {

    public static class Map
            extends MapReduceBase
            implements Mapper<LongWritable, Text, Text, IntWritable> {

        private final static IntWritable one = new IntWritable();
        private Text word = new Text();

        public void map(LongWritable longWritable, Text value,
                        OutputCollector<Text, IntWritable> output,
                        Reporter reporter) throws IOException {

            String line = value.toString();
            StringTokenizer tokenizer = new StringTokenizer(line);
            while (tokenizer.hasMoreTokens()) {
                word.set(tokenizer.nextToken());
                output.collect(word, one);
            }
        }
    }

    public static class Reduce
            extends MapReduceBase
            implements Reducer<Text, IntWritable, Text, IntWritable> {

        public void reduce(Text key, Iterator<IntWritable> values,
                           OutputCollector<Text, IntWritable> output,
                           Reporter reporter) throws IOException {

            int sum = 0;
            while (values.hasNext()) {
                sum += values.next().get();
            }
            output.collect(key, new IntWritable(sum));
        }
    }

    public static void main(String[] args) throws IOException {

        JobConf jobConf = new JobConf(WordCount.class);
        jobConf.setJobName("wordcount");

        jobConf.setOutputKeyClass(Text.class);
        jobConf.setOutputValueClass(IntWritable.class);

        jobConf.setCombinerClass(WordCount.Reduce.class);
        jobConf.setReducerClass(WordCount.Reduce.class);
        jobConf.setMapperClass(WordCount.Map.class);

        jobConf.setInputFormat(TextInputFormat.class);
        jobConf.setOutputFormat(TextOutputFormat.class);

        FileInputFormat.setInputPaths(jobConf, new Path(args[0]));
        FileOutputFormat.setOutputPath(jobConf, new Path(args[1]));

        JobClient.runJob(jobConf);
    }
}

When I run the jar, an output file is generated in the output folder, but it shows this:

$ bin/hdfs dfs -cat ./output/part-00000
17/11/09 02:50:39 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Bye 0
Hadoop 0
Hello 0
World 0

As you can see, all counts are zero, but I can't find where my implementation went wrong.

Best Answer

Yes, I debugged your code, and the error is in your Map class:

public static class Map
        extends MapReduceBase
        implements Mapper<LongWritable, Text, Text, IntWritable> {

    // BUG: the no-argument constructor leaves the value at its default of 0
    private final static IntWritable one = new IntWritable();
    private Text word = new Text();

    public void map(LongWritable longWritable, Text value,
                    OutputCollector<Text, IntWritable> output,
                    Reporter reporter) throws IOException {

        String line = value.toString();
        StringTokenizer tokenizer = new StringTokenizer(line);
        while (tokenizer.hasMoreTokens()) {
            word.set(tokenizer.nextToken());
            output.collect(word, one);    // emits (word, 0) for every token
        }
    }
}

Your Mapper class was emitting 0 as the value for every word, because new IntWritable() leaves the value at its default of 0, so the reducer had nothing but zeros to sum.

  • So initialize the value to 1, so that the mapper emits a count of 1 for each word; the short sketch after this list makes the difference concrete.
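
To see why the default matters, here is a minimal standalone sketch (a hypothetical demo class, not part of the job above) showing what an uninitialized IntWritable holds:

import org.apache.hadoop.io.IntWritable;

public class IntWritableDefaultDemo {
    public static void main(String[] args) {
        IntWritable one = new IntWritable(); // no initial value supplied
        System.out.println(one.get());       // prints 0 -- this is what every word was counted as

        one.set(1);                          // initialize the value to 1
        System.out.println(one.get());       // prints 1 -- each word now contributes 1 to its sum
    }
}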

Here is the corrected code:

public static class Map
        extends MapReduceBase
        implements Mapper<LongWritable, Text, Text, IntWritable> {

    private final static IntWritable one = new IntWritable();
    private Text word = new Text();

    public void map(LongWritable longWritable, Text value,
                    OutputCollector<Text, IntWritable> output,
                    Reporter reporter) throws IOException {

        String line = value.toString();
        StringTokenizer tokenizer = new StringTokenizer(line);
        while (tokenizer.hasMoreTokens()) {
            word.set(tokenizer.nextToken());
            one.set(1);    // initialize the count to 1 before emitting

            output.collect(word, one);
        }
    }
}
This will work.
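
An equivalent fix, which avoids calling set(1) on every token, is to give the IntWritable its value once at construction time; a one-line sketch of that variant:

// Alternative: construct the counter with its value, no per-token set(1) needed.
private final static IntWritable one = new IntWritable(1);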

Regarding Hadoop WordCount giving a count of 0 for every word, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/47196791/
