
java - HADOOP - Unable to initialize MapOutputCollector org.apache.hadoop.mapred.MapTask$MapOutputBuffer java.lang.ClassCastException: class java.lang.Double


There is a problem with my code; this is the error I get:

Unable to initialize MapOutputCollector org.apache.hadoop.mapred.MapTask$MapOutputBuffer java.lang.ClassCastException: class java.lang.Double

I don't know where it comes from. Here is the code of the class where I set up the whole job:

        conf.set("stripped", stripped);

/* Creating the job object for the Hadoop processing */
@SuppressWarnings("deprecation")
Job job = new Job(conf, "calculate error map reduce");

/* Creating Filesystem object with the configuration */
FileSystem fs = FileSystem.get(conf);

/* Check if output path (args[1])exist or not */
if (fs.exists(new Path(output))) {
/* If exist delete the output path */
fs.delete(new Path(output), true);
}
// Setting Driver class
job.setJarByClass(StrippedPartition.class);

// Setting the Mapper class
job.setMapperClass(MapperCalculateError.class);

// Setting the Reducer class
job.setReducerClass(ReduceCalculateError.class);

// Setting the output key class for the mapper
job.setOutputKeyClass(Double.class);
// Setting the output value class for the mapper
job.setOutputValueClass(DoubleWritable.class);

This is my Mapper class:

    public static class MapperCalculateError extends Mapper<Object, Text, Double, DoubleWritable>{

private final static DoubleWritable error1 = new DoubleWritable(1.0);
private double error,max;
private ObjectBigArrayBigList<LongBigArrayBigList> Contain = new ObjectBigArrayBigList<LongBigArrayBigList>();
private ObjectBigArrayBigList<LongBigArrayBigList> Stripped = new ObjectBigArrayBigList<LongBigArrayBigList>();


public void map(Object key, Text value, Context context) throws IOException, InterruptedException {

Configuration conf = context.getConfiguration();
String stripped = conf.get("stripped");
Stripped = new Gson().fromJson(stripped.toString(), ObjectBigArrayBigList.class);



StringTokenizer itr = new StringTokenizer(value.toString());
Contain = new Gson().fromJson(value.toString(), ObjectBigArrayBigList.class);

// stuff in the map function, omitted in this example because it is not important

context.write(max, error1);
}
}

This is my Reducer class:


public static class ReduceCalculateError extends Reducer<Double, DoubleWritable, Double, Double>{

private double massimo=0;
private double errore=0;

//public ReduceCalculateError() {}

public void reduce(double max, Iterable<DoubleWritable> error, Context context) throws IOException, InterruptedException {
Configuration conf = context.getConfiguration();
double sum=0;


// other stuff omitted

context.write(this.massimo, sum);
}
}

I don't know where the error is; map and reduce never run, because the job just shows map: 0% reduce: 0%.

Best Answer

Wherever you have Double, you need to use DoubleWritable instead. This is because Hadoop does not know how to serialize a Double, but it does know how to serialize a DoubleWritable.

Any time you call context.write(...), you need to make sure both arguments are Writable. For example, your map output is context.write(max, error1); max is a Double, when it should be a DoubleWritable.
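
As a rough illustration, here is a minimal sketch of the mapper, reducer, and the relevant driver lines with DoubleWritable used everywhere. The class names are taken from the question; the omitted map/reduce logic is only hinted at, so treat the details as assumptions rather than the asker's actual implementation:

import java.io.IOException;

import org.apache.hadoop.io.DoubleWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;

public class StrippedPartition {

    public static class MapperCalculateError
            extends Mapper<Object, Text, DoubleWritable, DoubleWritable> {

        private static final DoubleWritable ERROR_ONE = new DoubleWritable(1.0);

        @Override
        public void map(Object key, Text value, Context context)
                throws IOException, InterruptedException {
            double max = 0.0;
            // ... compute max from the input value (logic omitted in the question) ...
            context.write(new DoubleWritable(max), ERROR_ONE); // both arguments are Writable
        }
    }

    public static class ReduceCalculateError
            extends Reducer<DoubleWritable, DoubleWritable, DoubleWritable, DoubleWritable> {

        @Override
        public void reduce(DoubleWritable max, Iterable<DoubleWritable> errors, Context context)
                throws IOException, InterruptedException {
            double sum = 0.0;
            for (DoubleWritable e : errors) {
                sum += e.get();
            }
            context.write(max, new DoubleWritable(sum)); // again, Writable on both sides
        }
    }

    // In the driver, the declared output types have to match the Writable types as well:
    static void configureOutputTypes(Job job) {
        job.setOutputKeyClass(DoubleWritable.class);   // was Double.class
        job.setOutputValueClass(DoubleWritable.class);
    }
}

Note that changing the key type in the mapper and reducer is not enough on its own: the driver's setOutputKeyClass call must also name DoubleWritable, otherwise the map output buffer will still fail with the same ClassCastException when it tries to serialize the keys.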

Regarding java - HADOOP - Unable to initialize MapOutputCollector org.apache.hadoop.mapred.MapTask$MapOutputBuffer java.lang.ClassCastException: class java.lang.Double, a similar question was found on Stack Overflow: https://stackoverflow.com/questions/58646063/
