
java - How to fix NoSuchMethodError: org.apache.hadoop.mapred.InputSplit.write


I'm writing a project on Hadoop. I have a one-dimensional String array named "words".

I want to send it to the reducer, but I get this error:

Exception in thread "main" java.lang.NoSuchMethodError: org.apache.hadoop.mapred.InputSplit.write(Ljava/io/DataOutput;)V

What should I do? Can anyone help me?

Here is my mapper:
public abstract class Mapn implements Mapper<LongWritable, Text, Text, Text> {

    @SuppressWarnings("unchecked")
    public void map(LongWritable key, Text value, Context con)
            throws IOException, InterruptedException {
        String line = value.toString();
        String[] words = line.split(",");
        for (String word : words) {
            Text outputKey = new Text(word.toUpperCase().trim());
            con.write(outputKey, words);
        }
    }
}

Best Answer

When I was learning the Hadoop MapReduce framework, besides writing the traditional WordCount program I also wrote a program of my own and exported a jar for it. I'm sharing that program here; it was written against the hadoop-1.2.1 jar dependencies. It converts numbers into words, and it ran over 400,000 (4 lakh) numbers without any errors.

So here is the program:

package com.whodesire.count;

import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.util.GenericOptionsParser;

import com.whodesire.numstats.AmtInWords;

public class CountInWords {

    public static class NumberTokenizerMapper
            extends Mapper<Object, Text, LongWritable, Text> {

        private static final Text theOne = new Text("1");
        private LongWritable longWord = new LongWritable();

        public void map(Object key, Text value, Context context) {
            try {
                StringTokenizer itr = new StringTokenizer(value.toString());
                while (itr.hasMoreTokens()) {
                    longWord.set(Long.parseLong(itr.nextToken()));
                    context.write(longWord, theOne);
                }
            } catch (NumberFormatException nfe) {
                // Long.parseLong throws NumberFormatException on non-numeric tokens
                System.out.println("NumberFormatException raised...");
                System.exit(0);
            } catch (IOException | InterruptedException ioe) {
                ioe.printStackTrace();
                System.out.println("IOException | InterruptedException raised...");
                System.exit(0);
            }
        }
    }

    public static class ModeReducerCumInWordsCounter
            extends Reducer<LongWritable, Text, LongWritable, Text> {

        private Text result = new Text();

        // This is the user-defined reduce function, invoked once for each unique key
        public void reduce(LongWritable key, Iterable<Text> values,
                Context context) throws IOException, InterruptedException {

            // The key, a LongWritable, is passed to the AmtInWords
            // constructor as a String
            AmtInWords aiw = new AmtInWords(key.toString());
            result.set(aiw.getInWords());

            // Finally the number and its wording are written to the job output
            context.write(key, result);
        }
    }

    public static void main(String[] args)
            throws IOException, ClassNotFoundException, InterruptedException {

        /*
         * All random numbers inside the input files were generated
         * using https://andrew.hedges.name/experiments/random/
         */

        // Load the configuration files and add them to the conf object
        Configuration conf = new Configuration();

        String[] otherArgs = new GenericOptionsParser(conf, args).getRemainingArgs();

        Job job = new Job(conf, "CountInWords");

        // Specify the jar which contains the classes required to run the job
        job.setJarByClass(CountInWords.class);

        job.setMapperClass(NumberTokenizerMapper.class);
        job.setCombinerClass(ModeReducerCumInWordsCounter.class);
        job.setReducerClass(ModeReducerCumInWordsCounter.class);

        // Set the map output key and value classes for the job
        job.setMapOutputKeyClass(LongWritable.class);
        job.setMapOutputValueClass(Text.class);

        // Set the input and output locations (taken from the command line)
        FileInputFormat.addInputPath(job, new Path(otherArgs[0]));
        FileOutputFormat.setOutputPath(job, new Path(otherArgs[1]));

        // Write the results to a single target file
        job.setNumReduceTasks(1);

        // Submit the job and wait for it to complete
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
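
To run the exported jar (the jar name and HDFS paths below are just examples), pass the input and output directories on the command line, for instance:

hadoop jar countinwords.jar com.whodesire.count.CountInWords /user/hadoop/numbers /user/hadoop/numbers-out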

I suggest you review the Hadoop jars you have added to your project, especially hadoop-core-x.x.x.jar, because from the error it looks like some of the mapreduce jars are missing, or jars from different Hadoop versions are mixed: a NoSuchMethodError means the InputSplit class found at runtime does not have the write(DataOutput) method your code was compiled against.
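
Also, the mapper in your question mixes the two APIs: it implements Mapper as an interface (the old org.apache.hadoop.mapred API) but takes a Context parameter, which only exists in the new org.apache.hadoop.mapreduce API, and it passes a String[] to con.write(), which expects a Writable. Below is a minimal sketch of the mapper rewritten against the new API only; emitting the whole line as the Text value is an assumption about your intent, so adjust it to whatever you actually want to send to the reducer:

import java.io.IOException;

import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

public class Mapn extends Mapper<LongWritable, Text, Text, Text> {

    @Override
    public void map(LongWritable key, Text value, Context con)
            throws IOException, InterruptedException {
        String line = value.toString();
        for (String word : line.split(",")) {
            Text outputKey = new Text(word.toUpperCase().trim());
            // con.write() needs Writable arguments, so wrap the value in a Text
            // (writing the whole line here is an assumption about the intent)
            con.write(outputKey, new Text(line));
        }
    }
}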

Regarding java - How to fix NoSuchMethodError: org.apache.hadoop.mapred.InputSplit.write, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/48043029/
