
Hadoop - WordCount results are not written to the output file


I am trying to run a program that counts words and their frequencies, following the steps given at this link: http://developer.yahoo.com/hadoop/tutorial/module3.html

I loaded a directory named input containing three text files.

I was able to configure everything correctly. Now, when running WordCount.java, I see nothing in the part-00000 file inside the output directory.

The Java code for the Mapper is:

import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.io.Writable;
import org.apache.hadoop.io.WritableComparable;
import org.apache.hadoop.mapred.MapReduceBase;
import org.apache.hadoop.mapred.Mapper;
import org.apache.hadoop.mapred.OutputCollector;
import org.apache.hadoop.mapred.Reporter;

public class WordCountMapper extends MapReduceBase
        implements Mapper<LongWritable, Text, Text, IntWritable> {

    private final IntWritable one = new IntWritable(1);
    private Text word = new Text();

    public void map(WritableComparable key, Writable value,
            OutputCollector output, Reporter reporter) throws IOException {

        String line = value.toString();
        StringTokenizer itr = new StringTokenizer(line.toLowerCase());
        while (itr.hasMoreTokens()) {
            word.set(itr.nextToken());
            output.collect(word, one);
        }
    }

    @Override
    public void map(LongWritable arg0, Text arg1,
            OutputCollector<Text, IntWritable> arg2, Reporter arg3)
            throws IOException {
        // TODO Auto-generated method stub

    }

}

The Reducer code is:

import java.io.IOException;
import java.util.Iterator;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.MapReduceBase;
import org.apache.hadoop.mapred.OutputCollector;
import org.apache.hadoop.mapred.Reducer;
import org.apache.hadoop.mapred.Reporter;

public class WordCountReducer extends MapReduceBase
        implements Reducer<Text, IntWritable, Text, IntWritable> {

    public void reduce(Text key, Iterator values,
            OutputCollector output, Reporter reporter) throws IOException {

        int sum = 0;
        while (values.hasNext()) {
            //System.out.println(values.next());
            IntWritable value = (IntWritable) values.next();
            sum += value.get(); // process value
        }

        output.collect(key, new IntWritable(sum));
    }
}

The code for the Counter driver is:

import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.JobClient;
import org.apache.hadoop.mapred.JobConf;

public class Counter {

    public static void main(String[] args) {
        JobClient client = new JobClient();
        JobConf conf = new JobConf(com.example.Counter.class);

        // TODO: specify output types
        conf.setOutputKeyClass(Text.class);
        conf.setOutputValueClass(IntWritable.class);

        // TODO: specify input and output DIRECTORIES (not files)
        conf.setInputPath(new Path("src"));
        conf.setOutputPath(new Path("out"));

        // TODO: specify a mapper
        conf.setMapperClass(org.apache.hadoop.mapred.lib.IdentityMapper.class);

        // TODO: specify a reducer
        conf.setReducerClass(org.apache.hadoop.mapred.lib.IdentityReducer.class);

        client.setConf(conf);
        try {
            JobClient.runJob(conf);
        } catch (Exception e) {
            e.printStackTrace();
        }
    }

}

In the console I get these logs:

13/09/10 10:09:20 WARN mapred.JobClient: Use GenericOptionsParser for parsing the arguments. Applications should implement Tool for the same.
13/09/10 10:09:20 INFO mapred.FileInputFormat: Total input paths to process : 3
13/09/10 10:09:20 INFO mapred.FileInputFormat: Total input paths to process : 3
13/09/10 10:09:20 INFO mapred.JobClient: Running job: job_201309100855_0012
13/09/10 10:09:21 INFO mapred.JobClient: map 0% reduce 0%
13/09/10 10:09:25 INFO mapred.JobClient: map 25% reduce 0%
13/09/10 10:09:26 INFO mapred.JobClient: map 75% reduce 0%
13/09/10 10:09:27 INFO mapred.JobClient: map 100% reduce 0%
13/09/10 10:09:35 INFO mapred.JobClient: Job complete: job_201309100855_0012
13/09/10 10:09:35 INFO mapred.JobClient: Counters: 15
13/09/10 10:09:35 INFO mapred.JobClient: File Systems
13/09/10 10:09:35 INFO mapred.JobClient: HDFS bytes read=54049
13/09/10 10:09:35 INFO mapred.JobClient: Local bytes read=14
13/09/10 10:09:35 INFO mapred.JobClient: Local bytes written=214
13/09/10 10:09:35 INFO mapred.JobClient: Job Counters
13/09/10 10:09:35 INFO mapred.JobClient: Launched reduce tasks=1
13/09/10 10:09:35 INFO mapred.JobClient: Launched map tasks=4
13/09/10 10:09:35 INFO mapred.JobClient: Data-local map tasks=4
13/09/10 10:09:35 INFO mapred.JobClient: Map-Reduce Framework
13/09/10 10:09:35 INFO mapred.JobClient: Reduce input groups=0
13/09/10 10:09:35 INFO mapred.JobClient: Combine output records=0
13/09/10 10:09:35 INFO mapred.JobClient: Map input records=326
13/09/10 10:09:35 INFO mapred.JobClient: Reduce output records=0
13/09/10 10:09:35 INFO mapred.JobClient: Map output bytes=0
13/09/10 10:09:35 INFO mapred.JobClient: Map input bytes=50752
13/09/10 10:09:35 INFO mapred.JobClient: Combine input records=0
13/09/10 10:09:35 INFO mapred.JobClient: Map output records=0
13/09/10 10:09:35 INFO mapred.JobClient: Reduce input records=0

I am new to Hadoop.

Please reply with an appropriate answer.

Thanks.

Best Answer

You have two map methods in your Mapper class. The one tagged with the @Override annotation is the method that actually overrides the interface, and that method does nothing. So nothing comes out of your mapper, nothing goes into your reducer, and therefore there is no output. The job counters confirm this: Map output records=0 and Reduce input records=0.

Delete the map method marked with the @Override annotation, and mark the first map method with @Override instead. Then fix its signature so that it matches the Mapper<LongWritable, Text, Text, IntWritable> interface, and it should work; see the sketch below.
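
For reference, here is a minimal sketch of what the corrected Mapper could look like, assuming the same old-style org.apache.hadoop.mapred API used in the question:

import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.MapReduceBase;
import org.apache.hadoop.mapred.Mapper;
import org.apache.hadoop.mapred.OutputCollector;
import org.apache.hadoop.mapred.Reporter;

public class WordCountMapper extends MapReduceBase
        implements Mapper<LongWritable, Text, Text, IntWritable> {

    private final IntWritable one = new IntWritable(1);
    private final Text word = new Text();

    // This is now the only map method. Its signature matches the
    // Mapper<LongWritable, Text, Text, IntWritable> interface, so
    // Hadoop actually invokes it for every input record.
    @Override
    public void map(LongWritable key, Text value,
            OutputCollector<Text, IntWritable> output, Reporter reporter)
            throws IOException {
        String line = value.toString();
        StringTokenizer itr = new StringTokenizer(line.toLowerCase());
        while (itr.hasMoreTokens()) {
            word.set(itr.nextToken());
            output.collect(word, one); // emit (word, 1) for each token
        }
    }
}

Note also that the driver shown in the question still wires in IdentityMapper and IdentityReducer from the generated TODO stubs; presumably it should call conf.setMapperClass(WordCountMapper.class) and conf.setReducerClass(WordCountReducer.class) so that these classes actually run.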

Regarding "Hadoop - WordCount results are not written to the output file", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/18710977/
