
java - Hadoop exception when instantiating the OutputKey


I get an exception when I try to use my own key in a MapReduce job. Hadoop apparently cannot find the default constructor of my key, even though I have defined one. I found a related question ( No such method exception Hadoop <init> ), but its solution did not really help me.

(Note: I am using Hadoop 2.2.0.)

The exception:

java.lang.Exception: java.lang.RuntimeException: java.lang.NoSuchMethodException: org.apache.hadoop.io.WritableComparable.<init>()
at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:403)
Caused by: java.lang.RuntimeException: java.lang.NoSuchMethodException: org.apache.hadoop.io.WritableComparable.<init>()
at org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:131)
at org.apache.hadoop.io.WritableComparator.newKey(WritableComparator.java:115)
at org.apache.hadoop.io.WritableComparator.<init>(WritableComparator.java:101)
at org.apache.hadoop.io.WritableComparator.get(WritableComparator.java:55)
at org.apache.hadoop.mapred.JobConf.getOutputKeyComparator(JobConf.java:885)
at org.apache.hadoop.mapred.MapTask$MapOutputBuffer.init(MapTask.java:982)
at org.apache.hadoop.mapred.MapTask.createSortingCollector(MapTask.java:390)
at org.apache.hadoop.mapred.MapTask.access$100(MapTask.java:79)
at org.apache.hadoop.mapred.MapTask$NewOutputCollector.<init>(MapTask.java:674)
at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:746)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:339)
at org.apache.hadoop.mapred.LocalJobRunner$Job$MapTaskRunnable.run(LocalJobRunner.java:235)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:744)
Caused by: java.lang.NoSuchMethodException: org.apache.hadoop.io.WritableComparable.<init>()
at java.lang.Class.getConstructor0(Class.java:2810)
at java.lang.Class.getDeclaredConstructor(Class.java:2053)
at org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:125)
... 16 more
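Note that the exception names org.apache.hadoop.io.WritableComparable.<init>(), i.e. the framework is reflectively instantiating the WritableComparable interface itself rather than a concrete key class. The following is a minimal illustration (not Hadoop's own code, just the same reflective pattern that ReflectionUtils.newInstance uses; the class name is made up for the demo):

import org.apache.hadoop.io.WritableComparable;

public class InterfaceInstantiationDemo {
    public static void main(String[] args) throws Exception {
        // Interfaces declare no constructors, so asking for the no-arg
        // constructor fails exactly as in the stack trace above.
        Class<?> keyClass = WritableComparable.class;
        keyClass.getDeclaredConstructor().newInstance(); // NoSuchMethodException
    }
}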

The key class:

import org.apache.hadoop.io.Writable;
import org.apache.hadoop.io.WritableComparable;

import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;
import java.util.UUID;

public class WritableUUID
        implements Writable, Cloneable, WritableComparable<WritableUUID> {

    private UUID uuid;

    // No-arg constructor so Hadoop can instantiate the key reflectively.
    public WritableUUID() {
    }

    public WritableUUID(UUID uuid) {
        this.uuid = uuid;
    }

    public UUID getUuid() {
        return uuid;
    }

    @Override
    public int compareTo(WritableUUID o) {
        return uuid.compareTo(o.uuid);
    }

    @Override
    public void write(DataOutput dataOutput) throws IOException {
        dataOutput.writeLong(uuid.getLeastSignificantBits());
        dataOutput.writeLong(uuid.getMostSignificantBits());
    }

    @Override
    public void readFields(DataInput dataInput) throws IOException {
        // Read back in the same order write() emits: least-significant bits first.
        long lsb = dataInput.readLong();
        long msb = dataInput.readLong();

        this.uuid = new UUID(msb, lsb);
    }
}

Thanks for your help!

Best Answer

I found the problem. It was not a Hadoop issue, but some API confusion on my part with a proprietary library.
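For reference, the stack trace is consistent with the job having been configured with the WritableComparable interface as the key class instead of the concrete WritableUUID. A minimal driver sketch using the standard org.apache.hadoop.mapreduce API is shown below; the driver and mapper names are hypothetical placeholders, not the asker's actual code, and the mapper simply tags each input line with a random UUID so the example compiles and runs on its own:

import java.io.IOException;
import java.util.UUID;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WritableUUIDDriver {

    // Hypothetical mapper, included only so the sketch is self-contained.
    public static class UUIDMapper
            extends Mapper<LongWritable, Text, WritableUUID, Text> {
        @Override
        protected void map(LongWritable key, Text value, Context context)
                throws IOException, InterruptedException {
            // Tag each input line with a random UUID key (illustrative only).
            context.write(new WritableUUID(UUID.randomUUID()), value);
        }
    }

    public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration(), "writable-uuid-example");
        job.setJarByClass(WritableUUIDDriver.class);
        job.setMapperClass(UUIDMapper.class);

        // Register the concrete key class. Passing the WritableComparable
        // interface here instead (e.g. through a wrapper API) reproduces the
        // NoSuchMethodException shown in the question.
        job.setMapOutputKeyClass(WritableUUID.class);
        job.setMapOutputValueClass(Text.class);
        job.setOutputKeyClass(WritableUUID.class);
        job.setOutputValueClass(Text.class);

        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}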

Regarding "java - Hadoop exception when instantiating the OutputKey", a similar question can be found on Stack Overflow: https://stackoverflow.com/questions/23399517/
