
hadoop - Error building a MapReduce program in Cloudera Hadoop

Reprinted · Author: 行者123 · Updated: 2023-12-02 21:50:24

I am getting the following error while building a MapReduce program in Hadoop.
I am using the Cloudera Hadoop distribution.
testmr_classes is a folder, and TestMR.java is the MapReduce source file.

[cloudera@localhost ~]$ echo `hadoop classpath`
/etc/hadoop/conf:/usr/lib/hadoop/lib/*:/usr/lib/hadoop/.//*:/usr/lib/hadoop-hdfs/./:/usr/lib/hadoop-hdfs/lib/*:/usr/lib/hadoop-hdfs/.//*:/usr/lib/hadoop-yarn/lib/*:/usr/lib/hadoop-yarn/.//*:/usr/lib/hadoop-0.20-mapreduce/./:/usr/lib/hadoop-0.20-mapreduce/lib/*:/usr/lib/hadoop-0.20-mapreduce/.//*
[cloudera@localhost ~]$

[cloudera@localhost ~]$ javac -classpath `hadoop classpath`:. -d testmr_classes TestMR.java
TestMR.java:32: TestMR.Reduce is not abstract and does not override abstract method reduce(org.apache.hadoop.io.IntWritable,java.util.Iterator<org.apache.hadoop.io.Text>,org.apache.hadoop.mapred.OutputCollector<org.apache.hadoop.io.IntWritable,org.apache.hadoop.io.DoubleWritable>,org.apache.hadoop.mapred.Reporter) in org.apache.hadoop.mapred.Reducer
public static class Reduce extends MapReduceBase implements Reducer<IntWritable,Text,IntWritable,DoubleWritable>
^
1 error
[cloudera@localhost ~]$

Here is the content of TestMR.java:
import java.io.IOException;
import java.util.*;

import org.apache.hadoop.fs.Path;
import org.apache.hadoop.conf.*;
import org.apache.hadoop.io.*;
import org.apache.hadoop.mapred.*;
import org.apache.hadoop.mapred.Reducer;
import org.apache.hadoop.util.*;

public class TestMR
{
    public static class Map extends MapReduceBase implements Mapper<IntWritable,Text,IntWritable,Text>
    {
        private IntWritable key = new IntWritable();
        private Text value = new Text();

        public void map(IntWritable key, Text line, OutputCollector<IntWritable, Text> output, Reporter reporter) throws IOException
        {
            String [] split = line.toString().split(",");
            key.set(Integer.parseInt(split[0]));

            if(split[2] == "Test")
            {
                value.set(split[4] + "," + split[7]);
                output.collect(key, value);
            }
        }
    }

    public static class Reduce extends MapReduceBase implements Reducer<IntWritable,Text,IntWritable,DoubleWritable>
    {
        public void reduce(IntWritable key, Iterable<Text> v, OutputCollector<IntWritable, DoubleWritable> output, Reporter reporter) throws IOException
        {
            Iterator values = v.iterator();
            while(values.hasNext())
            {
                String [] tmp_buf_1 = values.next().toString().split(",");
                String V1 = tmp_buf_1[0];
                String T1 = tmp_buf_1[1];

                if(!values.hasNext())
                    break;

                String [] tmp_buf_2 = values.next().toString().split(",");
                String V2 = tmp_buf_2[0];
                String T2 = tmp_buf_2[1];

                double dResult = (Double.parseDouble(V2) - Double.parseDouble(V1)) / (Double.parseDouble(T2) - Double.parseDouble(T1));

                output.collect(key, new DoubleWritable(dResult));
            }
        }
    }

    public static void main(String[] args) throws Exception
    {
        JobConf conf = new JobConf(TestMR.class);
        conf.setJobName("TestMapReduce");

        conf.setOutputKeyClass(IntWritable.class);
        conf.setOutputValueClass(DoubleWritable.class);

        conf.setMapperClass(Map.class);
        conf.setCombinerClass(Reduce.class);
        conf.setReducerClass(Reduce.class);

        conf.setInputFormat(TextInputFormat.class);
        conf.setOutputFormat(TextOutputFormat.class);

        FileInputFormat.setInputPaths(conf, new Path(args[0]));
        FileOutputFormat.setOutputPath(conf, new Path(args[1]));

        JobClient.runJob(conf);
    }
}

This is my first attempt at MapReduce, so I'd be glad to know whether I'm missing anything here.

Best Answer

Look closely at the second parameter of reduce() and at the error message: you wrote Iterable, but the interface gives you an Iterator. Because the signature doesn't match, your method doesn't override the abstract reduce() declared by org.apache.hadoop.mapred.Reducer, which is exactly what the compiler is complaining about.
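A minimal sketch of the fix, assuming the rest of TestMR.java stays as posted (old `org.apache.hadoop.mapred` API): change the second parameter to `Iterator<Text>` so the method actually overrides the interface's `reduce()`.

```java
// Sketch only: the single change from the posted code is the parameter type
// Iterator<Text> (what org.apache.hadoop.mapred.Reducer declares) instead of
// Iterable<Text>. The pairwise (V,T) rate computation is kept as in the question.
public static class Reduce extends MapReduceBase
        implements Reducer<IntWritable, Text, IntWritable, DoubleWritable>
{
    public void reduce(IntWritable key, Iterator<Text> values,
                       OutputCollector<IntWritable, DoubleWritable> output,
                       Reporter reporter) throws IOException
    {
        while (values.hasNext())
        {
            // First record of the pair: value V1 at time T1
            String[] first = values.next().toString().split(",");
            String V1 = first[0];
            String T1 = first[1];

            // Odd record count: no partner left to pair with
            if (!values.hasNext())
                break;

            // Second record of the pair: value V2 at time T2
            String[] second = values.next().toString().split(",");
            String V2 = second[0];
            String T2 = second[1];

            // Rate of change between the two records
            double dResult = (Double.parseDouble(V2) - Double.parseDouble(V1))
                           / (Double.parseDouble(T2) - Double.parseDouble(T1));
            output.collect(key, new DoubleWritable(dResult));
        }
    }
}
```

Two unrelated issues in the posted code are also worth noting: `split[2] == "Test"` in the mapper compares object references, so a content comparison needs `"Test".equals(split[2])`; and the `key` parameter of `map()` shadows the `IntWritable` field of the same name, so that field is never used.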

Regarding "hadoop - Error building a MapReduce program in Cloudera Hadoop", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/21538712/
