
hadoop - Partitioner doesn't seem to work on a single node?


I have written MapReduce code with a custom partitioner. The custom partitioner sorts keys according to certain conditions. I set setNumReduceTasks=6 in the driver class, but I am testing this code on a single machine and I get only one reducer output file instead of six. Does the partitioner not work on a single machine? Is a multi-node cluster required to see the effect of a custom partitioner? Any insight on this would be appreciated.

Best Answer

The Partitioner always runs when you set the number of reducers to more than 1, even on a single-node cluster. Conversely, with a single reducer (the default) every record lands in that one reducer, so a custom partitioner has no visible effect and you get a single output file.

I have tested the following code on a single-node cluster and it works as expected:

import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.conf.Configured;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.input.KeyValueTextInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.util.Tool;
import org.apache.hadoop.util.ToolRunner;

public final class SortMapReduce extends Configured implements Tool {

    public static void main(final String[] args) throws Exception {
        int res = ToolRunner.run(new Configuration(), new SortMapReduce(), args);
        System.exit(res);
    }

    public int run(final String[] args) throws Exception {

        Path inputPath = new Path(args[0]);
        Path outputPath = new Path(args[1]);

        Configuration conf = super.getConf();

        Job job = Job.getInstance(conf);

        job.setJarByClass(SortMapReduce.class);
        job.setMapperClass(Map.class);
        job.setReducerClass(Reduce.class);

        job.setInputFormatClass(KeyValueTextInputFormat.class);

        // Person is a custom composite key used as the map output key
        // (a sketch of it follows the partitioner class below).
        job.setMapOutputKeyClass(Person.class);
        job.setMapOutputValueClass(Text.class);

        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(Text.class);

        job.setPartitionerClass(PersonNamePartitioner.class);

        // More than one reducer, so the custom partitioner decides
        // which reducer receives each key.
        job.setNumReduceTasks(5);

        FileInputFormat.setInputPaths(job, inputPath);
        FileOutputFormat.setOutputPath(job, outputPath);

        if (job.waitForCompletion(true)) {
            return 0;
        }
        return 1;
    }

    public static class Map extends Mapper<Text, Text, Person, Text> {

        private Person outputKey = new Person();

        @Override
        protected void map(Text pointID, Text firstName, Context context)
                throws IOException, InterruptedException {
            outputKey.set(pointID.toString(), firstName.toString());
            context.write(outputKey, firstName);
        }
    }

    public static class Reduce extends Reducer<Person, Text, Text, Text> {

        private Text pointID = new Text();

        @Override
        public void reduce(Person key, Iterable<Text> values, Context context)
                throws IOException, InterruptedException {
            pointID.set(key.getpointID());
            for (Text firstName : values) {
                context.write(pointID, firstName);
            }
        }
    }
}

The partitioner class:

import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Partitioner;

public class PersonNamePartitioner extends Partitioner<Person, Text> {

    @Override
    public int getPartition(Person key, Text value, int numPartitions) {
        // Mask the sign bit rather than calling Math.abs(): Math.abs() still
        // returns a negative value when hashCode() * 127 overflows to
        // Integer.MIN_VALUE, which would yield an invalid partition index.
        return (key.getpointID().hashCode() * 127 & Integer.MAX_VALUE) % numPartitions;
    }
}
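
The Person class itself is not shown in the answer. For reference, here is a minimal sketch of what such a composite key might look like, assuming Person is a WritableComparable that compares by pointID first and then by firstName; only set(...) and getpointID() appear in the code above, so the field layout and sort order here are assumptions, not the original author's code:

import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;

import org.apache.hadoop.io.WritableComparable;

public class Person implements WritableComparable<Person> {

    private String pointID = "";
    private String firstName = "";

    public void set(String pointID, String firstName) {
        this.pointID = pointID;
        this.firstName = firstName;
    }

    public String getpointID() {
        return pointID;
    }

    @Override
    public void write(DataOutput out) throws IOException {
        out.writeUTF(pointID);
        out.writeUTF(firstName);
    }

    @Override
    public void readFields(DataInput in) throws IOException {
        pointID = in.readUTF();
        firstName = in.readUTF();
    }

    @Override
    public int compareTo(Person other) {
        // Assumed secondary-sort order: pointID first, then firstName.
        int cmp = pointID.compareTo(other.pointID);
        return cmp != 0 ? cmp : firstName.compareTo(other.firstName);
    }

    @Override
    public int hashCode() {
        return pointID.hashCode();
    }

    @Override
    public boolean equals(Object o) {
        return o instanceof Person
                && pointID.equals(((Person) o).pointID)
                && firstName.equals(((Person) o).firstName);
    }
}

Note the division of labor: compareTo() controls the order in which keys reach each reducer, while the partitioner above controls which reducer a key goes to.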

The run command:

hadoop jar /home/hdfs/SecondarySort.jar org.test.SortMapReduce /demo/data/Customer/acct.txt /demo/data/Customer/output2
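
With job.setNumReduceTasks(5), the output directory should contain five reducer output files, part-r-00000 through part-r-00004 (any of them may be empty if no keys hash to that partition); listing /demo/data/Customer/output2 with hdfs dfs -ls is a quick way to confirm the partitioner is taking effect.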

Thanks,

Regarding "hadoop - Partitioner doesn't seem to work on a single node?", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/35888692/
