
java - Custom writable class for multiple double values in Hadoop

Reposted · Author: 可可西里 · Updated: 2023-11-01 15:00:35

I am trying to emit four numeric values as a key. I wrote a custom WritableComparable class for this, but I am stuck on the compare() method. Several solutions are mentioned on the Stack Overflow site, but none of them solved my problem.

My WritableComparable class is:

import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;

import org.apache.hadoop.io.WritableComparable;

public class DimensionWritable implements WritableComparable<DimensionWritable> {

    private double keyRow;
    private double keyCol;

    private double valRow;
    private double valCol;

    public DimensionWritable(double keyRow, double keyCol, double valRow, double valCol) {
        set(keyRow, keyCol, valRow, valCol);
    }

    public void set(double keyRow, double keyCol, double valRow, double valCol) {
        // row dimension
        this.keyRow = keyRow;
        this.keyCol = keyCol;
        // column dimension
        this.valRow = valRow;
        this.valCol = valCol;
    }

    @Override
    public void write(DataOutput out) throws IOException {
        out.writeDouble(keyRow);
        out.writeDouble(keyCol);

        out.writeDouble(valRow);
        out.writeDouble(valCol);
    }

    @Override
    public void readFields(DataInput in) throws IOException {
        keyRow = in.readDouble();
        keyCol = in.readDouble();

        valRow = in.readDouble();
        valCol = in.readDouble();
    }

    /**
     * @return the keyRow
     */
    public double getKeyRow() {
        return keyRow;
    }

    /**
     * @param keyRow the keyRow to set
     */
    public void setKeyRow(double keyRow) {
        this.keyRow = keyRow;
    }

    /**
     * @return the keyCol
     */
    public double getKeyCol() {
        return keyCol;
    }

    /**
     * @param keyCol the keyCol to set
     */
    public void setKeyCol(double keyCol) {
        this.keyCol = keyCol;
    }

    /**
     * @return the valRow
     */
    public double getValRow() {
        return valRow;
    }

    /**
     * @param valRow the valRow to set
     */
    public void setValRow(double valRow) {
        this.valRow = valRow;
    }

    /**
     * @return the valCol
     */
    public double getValCol() {
        return valCol;
    }

    /**
     * @param valCol the valCol to set
     */
    public void setValCol(double valCol) {
        this.valCol = valCol;
    }

    // compare - confusing

}
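Independent of the comparison question, the write/readFields pair above must serialize and deserialize the fields in exactly the same order. A minimal sketch of that round-trip contract, using plain java.io streams with no Hadoop dependency (the four literal values are just illustrative):

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;

public class RoundTripDemo {
    public static void main(String[] args) throws IOException {
        // Write four doubles in the same order write() uses...
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        DataOutputStream out = new DataOutputStream(bos);
        out.writeDouble(1.5);
        out.writeDouble(2.5);
        out.writeDouble(3.5);
        out.writeDouble(4.5);
        out.flush();

        // ...and read them back in exactly that order, as readFields() must.
        DataInputStream in = new DataInputStream(new ByteArrayInputStream(bos.toByteArray()));
        System.out.println(in.readDouble()); // 1.5
        System.out.println(in.readDouble()); // 2.5
        System.out.println(in.readDouble()); // 3.5
        System.out.println(in.readDouble()); // 4.5
    }
}
```

If the read order ever diverges from the write order, the deserialized fields are silently scrambled rather than failing with an error.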

To be precise, what is the logic behind the compare statement — it is what drives the key sorting and shuffling in Hadoop, right?

How should it be implemented for the four double values above?

Update: I edited the code as "isnot2bad" suggested, but it now shows:

java.lang.Exception: java.lang.RuntimeException: java.lang.NoSuchMethodException: edu.am.bigdata.svmmodel.DimensionWritable.<init>()
at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:404)
Caused by: java.lang.RuntimeException: java.lang.NoSuchMethodException: edu.am.bigdata.svmmodel.DimensionWritable.<init>()
at org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:128)
at org.apache.hadoop.io.WritableComparator.newKey(WritableComparator.java:113)
at org.apache.hadoop.io.WritableComparator.<init>(WritableComparator.java:99)
at org.apache.hadoop.io.WritableComparator.get(WritableComparator.java:55)
at org.apache.hadoop.mapred.JobConf.getOutputKeyComparator(JobConf.java:819)
at org.apache.hadoop.mapred.MapTask$MapOutputBuffer.init(MapTask.java:836)
at org.apache.hadoop.mapred.MapTask.createSortingCollector(MapTask.java:376)
at org.apache.hadoop.mapred.MapTask.access$100(MapTask.java:85)
at org.apache.hadoop.mapred.MapTask$NewOutputCollector.<init>(MapTask.java:584)
at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:656)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:330)
at org.apache.hadoop.mapred.LocalJobRunner$Job$MapTaskRunnable.run(LocalJobRunner.java:266)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334)
at java.util.concurrent.FutureTask.run(FutureTask.java:166)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
at java.lang.Thread.run(Thread.java:722)
Caused by: java.lang.NoSuchMethodException: edu.am.bigdata.svmmodel.DimensionWritable.<init>()
at java.lang.Class.getConstructor0(Class.java:2721)
at java.lang.Class.getDeclaredConstructor(Class.java:2002)
at org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:122)

Am I doing something wrong?

Best Answer

If you want to use your type as a key in Hadoop, it must be comparable (your type must be totally ordered): for any two instances a and b of DimensionWritable, either a and b are equal, or a is greater than or less than b (what that means is up to the implementation).

By implementing compareTo you define how instances are naturally compared to each other. This is done by comparing the fields of the two instances, one by one:

@Override
public int compareTo(DimensionWritable o) {
    int c = Double.compare(this.keyRow, o.keyRow);
    if (c != 0) return c;
    c = Double.compare(this.keyCol, o.keyCol);
    if (c != 0) return c;
    c = Double.compare(this.valRow, o.valRow);
    if (c != 0) return c;
    c = Double.compare(this.valCol, o.valCol);
    return c;
}
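The field-by-field pattern above produces a lexicographic order: the first field that differs decides the result, and ties fall through to the next field. A standalone sketch (plain Java, no Hadoop dependency, using a hypothetical two-field stand-in for the key) illustrates the resulting sort order:

```java
import java.util.Arrays;

// Trimmed-down stand-in for DimensionWritable: two fields, same compare pattern.
class Dim implements Comparable<Dim> {
    final double keyRow, keyCol;

    Dim(double keyRow, double keyCol) {
        this.keyRow = keyRow;
        this.keyCol = keyCol;
    }

    @Override
    public int compareTo(Dim o) {
        int c = Double.compare(this.keyRow, o.keyRow); // first differing field decides...
        if (c != 0) return c;
        return Double.compare(this.keyCol, o.keyCol);  // ...ties fall through to the next
    }

    @Override
    public String toString() {
        return "(" + keyRow + "," + keyCol + ")";
    }
}

public class CompareDemo {
    public static void main(String[] args) {
        Dim[] keys = { new Dim(2.0, 1.0), new Dim(1.0, 5.0), new Dim(1.0, 2.0) };
        Arrays.sort(keys); // sorts by keyRow first, then keyCol
        System.out.println(Arrays.toString(keys));
        // prints [(1.0,2.0), (1.0,5.0), (2.0,1.0)]
    }
}
```

Double.compare is used rather than subtraction or `<` so that NaN and -0.0 are handled consistently with Double's total ordering.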

Note that hashCode must also be implemented, because it has to conform to your definition of equality (two instances considered equal by compareTo should have the same hash code), and because Hadoop requires a key's hash code to be constant across different JVMs. So again we compute the hash code from the same fields:

@Override
public int hashCode() {
    final int prime = 31;
    int result = 1;
    result = prime * result + Double.hashCode(keyRow);
    result = prime * result + Double.hashCode(keyCol);
    result = prime * result + Double.hashCode(valRow);
    result = prime * result + Double.hashCode(valCol);
    return result;
}

Regarding java - custom writable class for multiple double values in Hadoop, a similar question was found on Stack Overflow: https://stackoverflow.com/questions/24887716/
