
hadoop + Writable interface + readFields throws an exception in the reducer

Reposted · Author: 可可西里 · Updated: 2023-11-01 16:24:16

I have a simple map-reduce program in which my map and reduce primitives look like this:

map (K,V) = (Text, OutputAggregator)
reduce (Text, OutputAggregator) = (Text, Text)
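
For reference, here is a rough sketch of how these signatures look in the old org.apache.hadoop.mapred API that Hadoop 0.18.3 uses. The class name xxxParallelizer comes from the stack trace below; everything inside the method bodies is a hypothetical placeholder, since the actual code was not posted:

// Sketch only -- the real map/reduce bodies and OutputAggregator fields are unknown.
import java.io.IOException;
import java.util.Iterator;

import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.MapReduceBase;
import org.apache.hadoop.mapred.Mapper;
import org.apache.hadoop.mapred.OutputCollector;
import org.apache.hadoop.mapred.Reducer;
import org.apache.hadoop.mapred.Reporter;

public class xxxParallelizer {

    public static class Map extends MapReduceBase
            implements Mapper<LongWritable, Text, Text, OutputAggregator> {
        public void map(LongWritable key, Text value,
                        OutputCollector<Text, OutputAggregator> output,
                        Reporter reporter) throws IOException {
            // ... build an OutputAggregator for this record and emit it
            output.collect(new Text("some-key"), new OutputAggregator());
        }
    }

    public static class Reduce extends MapReduceBase
            implements Reducer<Text, OutputAggregator, Text, Text> {
        public void reduce(Text key, Iterator<OutputAggregator> values,
                           OutputCollector<Text, Text> output,
                           Reporter reporter) throws IOException {
            // Each call to values.next() deserializes an OutputAggregator via
            // readFields() -- which is where the exception below is thrown.
        }
    }
}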

The important point is that from my map function I emit an object of type OutputAggregator, which is my own class that implements the Writable interface. However, my reduce fails with the exception below; more specifically, it is the readFields() function that throws. Any clue why? I am using Hadoop 0.18.3.

10/09/19 04:04:59 INFO jvm.JvmMetrics: Initializing JVM Metrics with processName=JobTracker, sessionId=
10/09/19 04:04:59 WARN mapred.JobClient: Use GenericOptionsParser for parsing the arguments. Applications should implement Tool for the same.
10/09/19 04:04:59 INFO mapred.FileInputFormat: Total input paths to process : 1
10/09/19 04:04:59 INFO mapred.FileInputFormat: Total input paths to process : 1
10/09/19 04:04:59 INFO mapred.FileInputFormat: Total input paths to process : 1
10/09/19 04:04:59 INFO mapred.FileInputFormat: Total input paths to process : 1
10/09/19 04:04:59 INFO mapred.JobClient: Running job: job_local_0001
10/09/19 04:04:59 INFO mapred.MapTask: numReduceTasks: 1
10/09/19 04:04:59 INFO mapred.MapTask: io.sort.mb = 100
10/09/19 04:04:59 INFO mapred.MapTask: data buffer = 79691776/99614720
10/09/19 04:04:59 INFO mapred.MapTask: record buffer = 262144/327680
Length = 10
10
10/09/19 04:04:59 INFO mapred.MapTask: Starting flush of map output
10/09/19 04:04:59 INFO mapred.MapTask: bufstart = 0; bufend = 231; bufvoid = 99614720
10/09/19 04:04:59 INFO mapred.MapTask: kvstart = 0; kvend = 10; length = 327680
gl_books
10/09/19 04:04:59 WARN mapred.LocalJobRunner: job_local_0001
java.lang.NullPointerException
at org.myorg.OutputAggregator.readFields(OutputAggregator.java:46)
at org.apache.hadoop.io.serializer.WritableSerialization$WritableDeserializer.deserialize(WritableSerialization.java:67)
at org.apache.hadoop.io.serializer.WritableSerialization$WritableDeserializer.deserialize(WritableSerialization.java:40)
at org.apache.hadoop.mapred.Task$ValuesIterator.readNextValue(Task.java:751)
at org.apache.hadoop.mapred.Task$ValuesIterator.next(Task.java:691)
at org.apache.hadoop.mapred.Task$CombineValuesIterator.next(Task.java:770)
at org.myorg.xxxParallelizer$Reduce.reduce(xxxParallelizer.java:117)
at org.myorg.xxxParallelizer$Reduce.reduce(xxxParallelizer.java:1)
at org.apache.hadoop.mapred.MapTask$MapOutputBuffer.combineAndSpill(MapTask.java:904)
at org.apache.hadoop.mapred.MapTask$MapOutputBuffer.sortAndSpill(MapTask.java:785)
at org.apache.hadoop.mapred.MapTask$MapOutputBuffer.flush(MapTask.java:698)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:228)
at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:157)
java.io.IOException: Job failed!
at org.apache.hadoop.mapred.JobClient.runJob(JobClient.java:1113)
at org.myorg.xxxParallelizer.main(xxxParallelizer.java:145)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
at java.lang.reflect.Method.invoke(Unknown Source)
at org.apache.hadoop.util.RunJar.main(RunJar.java:155)
at org.apache.hadoop.mapred.JobShell.run(JobShell.java:54)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:79)
at org.apache.hadoop.mapred.JobShell.main(JobShell.java:68)

Best Answer

When posting a question about your own custom code, post the relevant code fragments: what is on line 46 of OutputAggregator.java, plus the few lines before and after it, would really help... :)

That said, this may help:

A pitfall when writing your own Writable class is that Hadoop reuses the same instance of the class over and over. Between calls to readFields you do not get a shiny new instance.

So at the start of the readFields method you must assume the object you are in is full of "garbage" from the previous record, and you must clean it out before continuing.

My suggestion is to implement a clear() method that completely wipes the current instance and resets it to the state it was in right after construction. Then, of course, call that method as the very first thing in readFields, for both your key and your value class.
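
A minimal sketch of that pattern, assuming a hypothetical OutputAggregator with an int counter and a list of strings (the real fields are not shown in the question):

import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

import org.apache.hadoop.io.Writable;

public class OutputAggregator implements Writable {

    private int count = 0;
    private List<String> items = new ArrayList<String>();

    // Reset this instance to its freshly constructed state.
    public void clear() {
        count = 0;
        items.clear();
    }

    public void write(DataOutput out) throws IOException {
        out.writeInt(count);
        out.writeInt(items.size());
        for (String item : items) {
            out.writeUTF(item);
        }
    }

    public void readFields(DataInput in) throws IOException {
        clear(); // first thing: wipe whatever the previous record left behind
        count = in.readInt();
        int size = in.readInt();
        for (int i = 0; i < size; i++) {
            items.add(in.readUTF());
        }
    }
}

Note also that Hadoop instantiates a Writable through its public no-argument constructor; if a field is only initialized in some other constructor, readFields() is a typical place for the resulting NullPointerException to surface.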

HTH

Regarding "hadoop + Writable interface + readFields throws an exception in the reducer", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/3746581/
