
hadoop - Getting a ClassCastException when running a Map Reduce job


I have a map reduce job that fetches data from an Accumulo table, performs an operation on it, and then stores the results in another Accumulo table. I have the following Mapper, Combiner, and Reducer.

import java.io.IOException;

import org.apache.accumulo.core.client.mapreduce.AccumuloInputFormat;
import org.apache.accumulo.core.client.mapreduce.AccumuloOutputFormat;
import org.apache.accumulo.core.data.Key;
import org.apache.accumulo.core.data.Mutation;
import org.apache.accumulo.core.data.Value;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;

class PivotTableMapper extends Mapper<Key, Value, Text, Text> {
    @Override
    public void map(Key k, Value v, Context context)
            throws IOException, InterruptedException {
        // Doing something here...
        context.write(outKey, outValue); // outKey and outValue are Text
    }
}

class PivotTableCombiner extends Reducer<Text, Text, Text, Text> {
    @Override
    public void reduce(Text k, Iterable<Text> v, Context context)
            throws IOException, InterruptedException {
        // Doing something here....
        context.write(outKey, outValue); // outKey and outValue are Text
    }
}

class PivotTableReducer extends Reducer<Text, Text, Text, Mutation> {
    @Override
    public void reduce(Text k, Iterable<Text> v, Context context)
            throws IOException, InterruptedException {
        // Doing something here....
        context.write(null, mutation); // mutation is a Mutation
    }
}

@Override
public int run(String[] args) throws Exception {
    Job job = Job.getInstance(conf);
    job.setInputFormatClass(AccumuloInputFormat.class);
    job.setOutputFormatClass(AccumuloOutputFormat.class);
    // Some additional settings
    return job.waitForCompletion(true) ? 0 : 1;
}

When I run the job, I get a ClassCastException:

Error: java.lang.ClassCastException: org.apache.hadoop.io.Text cannot be cast to org.apache.accumulo.core.data.Mutation
at org.apache.accumulo.core.client.mapreduce.AccumuloOutputFormat$AccumuloRecordWriter.write(AccumuloOutputFormat.java:409)
at org.apache.hadoop.mapred.MapTask$NewDirectOutputCollector.write(MapTask.java:658)
at org.apache.hadoop.mapreduce.task.TaskInputOutputContextImpl.write(TaskInputOutputContextImpl.java:89)
at org.apache.hadoop.mapreduce.lib.map.WrappedMapper$Context.write(WrappedMapper.java:112)
at com.latize.ulysses.accumulo.postprocess.PivotTable$PivotTableMapper.map(PivotTable.java:48)
at com.latize.ulysses.accumulo.postprocess.PivotTable$PivotTableMapper.map(PivotTable.java:1)
at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:146)
at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:787)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:341)
at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:164)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158)

16/01/18 16:16:27 INFO mapreduce.Job: map 33% reduce 0%
16/01/18 16:16:27 INFO mapreduce.Job: Task Id : attempt_1453096833928_0021_m_000001_1, Status : FAILED
Error: java.lang.ClassCastException: org.apache.hadoop.io.Text cannot be cast to org.apache.accumulo.core.data.Mutation
(same stack trace as above)

Can someone tell me what I am doing wrong? Is the combination of classes incorrect?

Best Answer

Typically, you do not need a Reducer when using the AccumuloOutputFormat, since Accumulo itself effectively does the reducing: the Mapper would take <Key, Value> pairs and output <Text, Mutation> pairs directly.

For your specific case, your Mapper needs to write <Text, Text> pairs, which are then sorted and reduced by your actual Reducer; only the Reducer should write Mutations. Change the output key and value parameterization on your Mapper and Job accordingly, and make sure the Reducer is actually registered on the Job: the NewDirectOutputCollector frames in your stack trace show the map output being handed straight to the AccumuloRecordWriter (i.e. the job is running without a reduce phase), which is exactly where the Text value fails the cast to Mutation.
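
Concretely, that means wiring the Mapper, Combiner, and Reducer into the Job and declaring both the intermediate and the final output types. Below is a minimal sketch of what run() could look like, reusing the class names from the question (the Accumulo connection and table settings are still omitted, as they were in the question):

@Override
public int run(String[] args) throws Exception {
    Job job = Job.getInstance(conf);
    job.setJarByClass(PivotTable.class);

    job.setInputFormatClass(AccumuloInputFormat.class);
    job.setOutputFormatClass(AccumuloOutputFormat.class);

    // Register all three stages so a reduce phase actually runs.
    job.setMapperClass(PivotTableMapper.class);
    job.setCombinerClass(PivotTableCombiner.class);
    job.setReducerClass(PivotTableReducer.class);

    // Intermediate <Text, Text> pairs shuffled from the Mapper to the Reducer.
    job.setMapOutputKeyClass(Text.class);
    job.setMapOutputValueClass(Text.class);

    // Final <Text, Mutation> pairs consumed by the AccumuloOutputFormat.
    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(Mutation.class);

    // Accumulo connection and table settings go here (omitted in the question).

    return job.waitForCompletion(true) ? 0 : 1;
}

With the Reducer registered, the map output goes through the shuffle as <Text, Text> instead of being handed directly to the AccumuloRecordWriter, and only the Reducer's <Text, Mutation> output reaches Accumulo.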

Regarding "hadoop - Getting a ClassCastException when running a Map Reduce job", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/34850508/
