
java - org.apache.hadoop.io.Text cannot be cast to org.apache.hadoop.io.NullWritable

Reposted | Author: 行者123 | Updated: 2023-12-02 21:38:17

I want to convert a SequenceFile to an ORC file in MapReduce. The input key/value types are Text/Text.

My program looks like this:

public class ANR extends Configured implements Tool {

    public static void main(String[] args) throws Exception {
        int res = ToolRunner.run(new Configuration(), new ANR(), args);
        System.exit(res);
    }

    public int run(String[] args) throws Exception {
        Logger log = Logger.getLogger(ANRmap.class.getName());
        Configuration conf = getConf();

        String[] otherArgs = new GenericOptionsParser(conf, args).getRemainingArgs();

        conf.set("orc.create.index", "true");

        Job job = Job.getInstance(conf);
        job.setJobName("ORC Output");
        job.setJarByClass(ANR.class);

        job.setInputFormatClass(SequenceFileInputFormat.class);
        SequenceFileInputFormat.addInputPath(job, new Path(args[0]));

        job.setMapperClass(ANRmap.class);
        job.setNumReduceTasks(0);

        job.setOutputFormatClass(OrcNewOutputFormat.class);
        OrcNewOutputFormat.setCompressOutput(job, true);
        OrcNewOutputFormat.setOutputPath(job, new Path(args[1]));

        return job.waitForCompletion(true) ? 0 : 1;
    }

The mapper:
    public class ANRmap extends Mapper<Text, Text, NullWritable, Writable> {
        private final OrcSerde serde = new OrcSerde();

        public void map(Text key, Text value,
                        OutputCollector<NullWritable, Writable> output)
                throws IOException {
            output.collect(NullWritable.get(), serde.serialize(value, null));
        }
    }

And this is the exception:
Error: java.lang.ClassCastException: org.apache.hadoop.io.Text cannot be cast to org.apache.hadoop.io.NullWritable
at org.apache.hadoop.hive.ql.io.orc.OrcNewOutputFormat$OrcRecordWriter.write(OrcNewOutputFormat.java:37)
at org.apache.hadoop.mapred.MapTask$NewDirectOutputCollector.write(MapTask.java:635)
at org.apache.hadoop.mapreduce.task.TaskInputOutputContextImpl.write(TaskInputOutputContextImpl.java:89)
at org.apache.hadoop.mapreduce.lib.map.WrappedMapper$Context.write(WrappedMapper.java:112)
at org.apache.hadoop.mapreduce.Mapper.map(Mapper.java:124)
at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:145)
at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:764)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:340)
at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:168)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1594)
at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:163)

The output key type in OrcNewOutputFormat is NullWritable. How can I convert Text to NullWritable, or otherwise resolve this exception?

Best Answer

Try using Context instead of OutputCollector. Your map method takes an OutputCollector, so its signature does not match Mapper.map(KEYIN, VALUEIN, Context) from the new MapReduce API; it never overrides it, and the framework falls back to the inherited identity mapper, which emits the Text key into an output format expecting NullWritable keys. For example, with Context:

public class ReduceTask extends Reducer<Text, Text, Text, NullWritable> {

    public void reduce(Text key, Iterable<Text> values, Context context) {
        for (Text value : values) {
            try {
                context.write(key, NullWritable.get());
            } catch (IOException | InterruptedException e) {
                e.printStackTrace();
            }
        }
    }
}
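The underlying Java rule is worth seeing in isolation: a method whose parameter list differs from the superclass method is an overload, not an override, so the inherited implementation still runs. Below is a minimal, Hadoop-free sketch of that pitfall (all class names here are illustrative stand-ins, not Hadoop types):

```java
// BaseMapper stands in for org.apache.hadoop.mapreduce.Mapper, whose
// default map() is an identity function: it passes the key straight through.
class BaseMapper {
    public String map(String key) {
        return key; // identity behavior, like Hadoop's default Mapper.map
    }
}

// BrokenMapper mirrors the asker's mistake: the extra parameter means this
// is an OVERLOAD of map, not an override. Callers invoking map(String)
// still get the identity version from BaseMapper.
class BrokenMapper extends BaseMapper {
    public String map(String key, StringBuilder collector) {
        return "serialized:" + key;
    }
}

public class OverridePitfall {
    public static void main(String[] args) {
        BaseMapper m = new BrokenMapper();
        // The "framework" calls map(key) and gets the identity result --
        // which is how a Text key reached the NullWritable-keyed writer.
        System.out.println(m.map("someTextKey")); // prints "someTextKey"
    }
}
```

Annotating the intended method with @Override would have turned this silent overload into a compile-time error, which is why it is worth adding to map in the real mapper once the signature is corrected to take a Context.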

Regarding java - org.apache.hadoop.io.Text cannot be cast to org.apache.hadoop.io.NullWritable, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/30589073/
