
java - Type mismatch in key from map: expected org.apache.hadoop.io.IntWritable, received org.apache.hadoop.io.LongWritable

Reposted. Author: 行者123. Updated: 2023-12-02 20:36:56

I have already checked the code, and I don't understand why I am getting this error.

Mapper

public class movieMapper extends Mapper<LongWritable, Text, IntWritable, Text> {

    public void map(LongWritable key, Text value, Context context) throws IOException, InterruptedException {
        String token[] = value.toString().trim().split("::");
        int movieID = Integer.parseInt(token[0].trim());
        context.write(new IntWritable(movieID), new Text(token[1].trim()));
    }
}

Reducer

public class joinReducer extends Reducer<IntWritable, Text, Text, Text> {

    public void reduce(IntWritable key, Iterable<Text> values, Context context) throws IOException, InterruptedException {
        float avgRating = 0.0f;
        int tokenCount = 0;
        float ratingSum = 0.0f;
        int count = 0;
        String movieName = "";

        for (Text val : values) {
            tokenCount += 1;
        }

        // If we have more than 40 views/ratings
        if (tokenCount - 1 > 40) {

            for (Text val : values) {
                String temp = val.toString();

                if (temp.equals("1") || temp.equals("2") || temp.equals("3") || temp.equals("4") || temp.equals("5")) {
                    float tempRating = Float.parseFloat(temp.trim());
                    ratingSum += tempRating;
                    count++;
                } else {
                    movieName = temp.trim();
                }
            }

            avgRating = ratingSum / (float) count;
            context.write(new Text(Float.toString(avgRating)), new Text(movieName));
        }
    }
}

Driver configuration
Configuration conf = new Configuration();
String parameter[] = new GenericOptionsParser(conf, args).getRemainingArgs();

if (parameter.length != 3) {
    System.err.println("Three arguments needed <File1> <File2> <Out>");
    System.exit(2);
}

// set Driver class
Job job1 = Job.getInstance(conf, "Join");
job1.setJarByClass(MyDriver.class);
job1.setReducerClass(joinReducer.class);

MultipleInputs.addInputPath(job1, new Path(parameter[0]), TextInputFormat.class, movieMapper.class);
MultipleInputs.addInputPath(job1, new Path(parameter[1]), TextInputFormat.class, ratingMapper.class);

job1.setMapOutputKeyClass(IntWritable.class);
job1.setMapOutputValueClass(Text.class);

job1.setOutputKeyClass(Text.class);
job1.setOutputValueClass(Text.class);

FileOutputFormat.setOutputPath(job1, new Path(parameter[2] + "/temp"));

job1.waitForCompletion(true);

18/06/13 09:47:20 INFO mapreduce.Job: Job job_1528823320386_0018 running in uber mode : false
18/06/13 09:47:20 INFO mapreduce.Job:  map 0% reduce 0%
18/06/13 09:47:24 INFO mapreduce.Job: Task Id : attempt_1528823320386_0018_m_000000_0, Status : FAILED
Error: java.io.IOException: Type mismatch in key from map: expected org.apache.hadoop.io.IntWritable, received org.apache.hadoop.io.LongWritable
    at org.apache.hadoop.mapred.MapTask$MapOutputBuffer.collect(MapTask.java:1069)
    at org.apache.hadoop.mapred.MapTask$NewOutputCollector.write(MapTask.java:712)
    at org.apache.hadoop.mapreduce.task.TaskInputOutputContextImpl.write(TaskInputOutputContextImpl.java:89)
    at org.apache.hadoop.mapreduce.lib.map.WrappedMapper$Context.write(WrappedMapper.java:112)
    at org.apache.hadoop.mapreduce.Mapper.map(Mapper.java:124)
    at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:145)
    at org.apache.hadoop.mapreduce.lib.input.DelegatingMapper.run(DelegatingMapper.java:55)
    at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:784)
    at org.apache.hadoop.mapred.MapTask.run(MapTask.java:341)
    at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:168)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1642)
    at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:163)

18/06/13 09:47:25 INFO mapreduce.Job:  map 50% reduce 0%
18/06/13 09:47:29 INFO mapreduce.Job: Task Id : attempt_1528823320386_0018_m_000000_1, Status : FAILED
Error: java.io.IOException: Type mismatch in key from map: expected org.apache.hadoop.io.IntWritable, received org.apache.hadoop.io.LongWritable
    (stack trace identical to the first failed attempt)

Best answer

Two mappers run in this job: movieMapper and ratingMapper. In ratingMapper the method declaration is wrong: the map function, which must be named "map", was mistakenly written as "reduce". Because of this, it never overrides Mapper.map, and Hadoop runs the base class's default identity map instead.

The default identity map emits its input key/value pair unchanged, and TextInputFormat produces keys of type LongWritable (the byte offset of each line) with values of type Text (the line itself). So the map output key is a LongWritable, while the job configuration declares it as IntWritable, hence the error.
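The fix is to declare the method in ratingMapper with the exact name `map` and matching parameter types, and to add `@Override` so the compiler rejects any such typo. Since ratingMapper itself is not shown in the question, the sketch below is hypothetical: it assumes a MovieLens-style `userID::movieID::rating::timestamp` input line and mirrors, as plain runnable Java, the parsing a corrected map method would do (the class and method names `RatingLineParser`/`parse` are illustrative, not from the original post).

```java
// Hypothetical sketch of the parsing a corrected ratingMapper.map() would do.
// In the real mapper the method must be declared as
//     @Override
//     public void map(LongWritable key, Text value, Context context)
// If the name or parameter types differ, Hadoop silently falls back to the
// identity Mapper.map, which re-emits the LongWritable input key and causes
// the "expected IntWritable, received LongWritable" error.
public class RatingLineParser {

    // Parses a "userID::movieID::rating::timestamp" line (assumed format)
    // and returns {movieID, rating} as strings.
    static String[] parse(String line) {
        String[] token = line.trim().split("::");
        int movieID = Integer.parseInt(token[1].trim()); // would become new IntWritable(movieID)
        String rating = token[2].trim();                 // would become new Text(rating)
        return new String[] { Integer.toString(movieID), rating };
    }

    public static void main(String[] args) {
        String[] kv = parse("1::1193::5::978300760");
        System.out.println(kv[0] + "\t" + kv[1]); // prints 1193<TAB>5
    }
}
```

With `@Override` in place, naming the method `reduce` (or misspelling `map`) would fail at compile time instead of silently running the identity mapper at runtime.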

Regarding "java - Type mismatch in key from map: expected org.apache.hadoop.io.IntWritable, received org.apache.hadoop.io.LongWritable", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/50829236/
