
java - The method setPartitionerClass(Class<? extends Partitioner>) in the type Job is not applicable for the arguments (Class<WordCountPartitioner>)

Reposted. Author: 可可西里. Updated: 2023-11-01 14:49:06

My driver code:

import org.apache.hadoop.conf.Configured;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCountDriver extends Configured {

    public static void main(String[] args) throws Exception {
        Job job = new Job();
        job.setJarByClass(WordCountDriver.class);
        job.setJobName("wordcountdriver");

        FileInputFormat.setInputPaths(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));

        job.setMapperClass(WordCountMapper.class);
        job.setReducerClass(WordCountReducer.class);

        job.setPartitionerClass(WordCountPartitioner.class);
        job.setNumReduceTasks(4);

        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);

        System.exit(job.waitForCompletion(true) ? 0 : -1);
    }
}

My mapper code:

import java.io.IOException;
import java.util.StringTokenizer;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

public class WordCountMapper extends Mapper<LongWritable, Text, Text, IntWritable> {

    private final static IntWritable one = new IntWritable(1);
    private Text word = new Text();

    public void map(LongWritable key, Text value, Context context) throws IOException, InterruptedException {
        String line = value.toString();
        StringTokenizer tokenizer = new StringTokenizer(line);
        while (tokenizer.hasMoreTokens()) {
            word.set(tokenizer.nextToken());
            context.write(word, one);
        }
    }
}

My reducer code:

import java.io.IOException;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;

public class WordCountReducer extends Reducer<Text, IntWritable, Text, IntWritable> {

    public void reduce(Text key, Iterable<IntWritable> values, Context context)
            throws IOException, InterruptedException {
        int sum = 0;
        for (IntWritable value : values) {
            sum += value.get();
        }
        context.write(key, new IntWritable(sum));
    }
}

My partitioner code:

import org.apache.hadoop.io.IntWritable; 
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.Partitioner;

public class WordCountPartitioner implements Partitioner<Text, IntWritable> {

    @Override
    public void configure(JobConf arg0) {
        // TODO Auto-generated method stub
    }

    @Override
    public int getPartition(Text key, IntWritable value, int setNumRedTasks) {
        String line = value.toString();

        if (line.length() == 1) {
            return 0;
        }
        if (line.length() == 2) {
            return 1;
        }
        if (line.length() == 3) {
            return 2;
        } else {
            return 3;
        }
    }
}

Why does this error occur?

Best Answer

You are mixing the old (org.apache.hadoop.mapred) and new (org.apache.hadoop.mapreduce) APIs. Job.setPartitionerClass expects a Class<? extends org.apache.hadoop.mapreduce.Partitioner>, but your WordCountPartitioner implements the old org.apache.hadoop.mapred.Partitioner interface. It should extend the org.apache.hadoop.mapreduce.Partitioner class instead.
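
As a minimal sketch, the partitioner rewritten against the new API could look like this, keeping the length-based routing. The new-API Partitioner is an abstract class with a single getPartition(KEY, VALUE, int) method and no configure(JobConf) hook. Note that the original code calls value.toString(), which stringifies the IntWritable count rather than the word; partitioning by word length presumably means using the key, so that swap is an assumption here:

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Partitioner;

public class WordCountPartitioner extends Partitioner<Text, IntWritable> {

    @Override
    public int getPartition(Text key, IntWritable value, int numReduceTasks) {
        // Route records to reducers by word length. Using the key here is an
        // assumed fix: the original used value.toString(), i.e. the count "1".
        int length = key.toString().length();
        if (length == 1) {
            return 0;
        }
        if (length == 2) {
            return 1;
        }
        if (length == 3) {
            return 2;
        }
        return 3;
    }
}

With this change, job.setPartitionerClass(WordCountPartitioner.class) type-checks, because the class now extends the Partitioner type that the new-API Job expects.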

Regarding java - The method setPartitionerClass(Class<? extends Partitioner>) in the type Job is not applicable for the arguments (Class<WordCountPartitioner>), we found a similar question on Stack Overflow: https://stackoverflow.com/questions/32928301/
