
Java Spark: Spark Bug Workaround for Joining Datasets with Unknown Join Column Names


I am using Spark 2.3.1 with Java.

What I am running into is (I believe) this known bug of Spark.

Here is my code:

public Dataset<Row> compute(Dataset<Row> df1, Dataset<Row> df2, List<String> columns) {
    // Convert the Java List of join column names into the Scala Seq that Dataset.join expects
    Seq<String> columns_seq = JavaConverters.asScalaIteratorConverter(columns.iterator()).asScala().toSeq();

    final Dataset<Row> join = df1.join(df2, columns_seq);

    join.show();

    join.withColumn("newColumn", abs(col("value1").minus(col("value2")))).show();

    return join;
}

I call my code like this:

Dataset<Row> myNewDF = compute(MyDataset1, MyDataset2, Arrays.asList("field1","field2","field3","field4"));

Note: MyDataset1 and MyDataset2 are two datasets derived from the same dataset, MyDataset0, through several different transformations.
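
For context, a minimal sketch of the kind of setup this describes; the actual source and transformations are not shown in the question, so everything in this snippet is hypothetical:

// Assumes: import static org.apache.spark.sql.functions.*;
Dataset<Row> MyDataset0 = spark.read().parquet("/path/to/MyDataset0"); // assumed source
Dataset<Row> MyDataset1 = MyDataset0.withColumn("value1", col("field5").multiply(2));
Dataset<Row> MyDataset2 = MyDataset0
        .groupBy("field1", "field2", "field3", "field4")
        .agg(sum("field6").as("value2"));
// Both sides share MyDataset0's lineage, so joining them on field1..field4 is
// effectively a self-join, which is the situation the known bug applies to.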

On the join.show() line, I get the following error:

2018-08-03 18:48:43 - ERROR main Logging$class -  -  - failed to compile: org.codehaus.commons.compiler.CompileException: File 'generated.java', Line 235, Column 21: Expression "project_isNull_2" is not an rvalue
org.codehaus.commons.compiler.CompileException: File 'generated.java', Line 235, Column 21: Expression "project_isNull_2" is not an rvalue
at org.codehaus.janino.UnitCompiler.compileError(UnitCompiler.java:11821)
at org.codehaus.janino.UnitCompiler.toRvalueOrCompileException(UnitCompiler.java:7170)
at org.codehaus.janino.UnitCompiler.getConstantValue2(UnitCompiler.java:5332)
at org.codehaus.janino.UnitCompiler.access$9400(UnitCompiler.java:212)
at org.codehaus.janino.UnitCompiler$13$1.visitAmbiguousName(UnitCompiler.java:5287)
at org.codehaus.janino.Java$AmbiguousName.accept(Java.java:4053)
...

2018-08-03 18:48:47 - WARN main Logging$class - - - Whole-stage codegen disabled for plan (id=7):

But this does not stop the execution, and the content of the dataset is still displayed.

Then, on the line join.withColumn("newColumn", abs(col("value1").minus(col("value2")))).show();

I get the following error:

Exception in thread "main" org.apache.spark.sql.AnalysisException: Resolved attribute(s) 'value2,'value1 missing from field6#16,field7#3,field8#108,field5#0,field9#4,field10#28,field11#323,value1#298,field12#131,day#52,field3#119,value2#22,field2#35,field1#43,field4#144 in operator 'Project [field1#43, field2#35, field3#119, field4#144, field5#0, field6#16, value2#22, field7#3, field9#4, field10#28, day#52, field8#108, field12#131, value1#298, field11#323, abs(('value1 - 'value2)) AS newColumn#2579]. Attribute(s) with the same name appear in the operation: value2,value1. Please check if the right attribute(s) are used.;;
'Project [field1#43, field2#35, field3#119, field4#144, field5#0, field6#16, value2#22, field7#3, field9#4, field10#28, day#52, field8#108, field12#131, value1#298, field11#323, abs(('value1 - 'value2)) AS newColumn#2579]
+- AnalysisBarrier
...

This error terminates the program.

The workaround proposed by Mijung Kim on the Jira issue is to create a clone of the Dataset with the help of DF(Columns). But in my case, I cannot use this workaround, because the column names used for the join are not known in advance (I only have them as a list at runtime).
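
For what it's worth, my reading of that Jira comment is a clone along these lines, where every column name has to be spelled out by hand (the exact call is my assumption, not quoted from the Jira):

// Hypothetical shape of the Jira workaround: re-alias every column explicitly.
// toDF assigns fresh attribute IDs, but the full column list must be known up front.
Dataset<Row> df1Clone = df1.toDF("field1", "field2", "field3", "field4", "value1" /* ..., one name per column */);

And that explicit list is exactly what I cannot write down when the join columns only arrive as a List.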

Is there any other way to work around this very annoying bug?

Best answer

Try calling this method:

private static Dataset<Row> cloneDataset(Dataset<Row> ds) {
    // Collect a reference to every column of the dataset, together with its name.
    List<Column> filterColumns = new ArrayList<>();
    List<String> filterColumnsNames = new ArrayList<>();
    for (StructField field : ds.schema().fields()) {
        String columnName = field.name();
        filterColumns.add(ds.col(columnName));
        filterColumnsNames.add(columnName);
    }
    // Re-selecting every column and re-aliasing it through toDF assigns fresh
    // attribute IDs, so a subsequent join no longer trips over the shared lineage.
    return ds.select(filterColumns.toArray(new Column[0]))
             .toDF(filterColumnsNames.toArray(new String[0]));
}

On both datasets, right before the join:

df1 = cloneDataset(df1);
df2 = cloneDataset(df2);
final Dataset<Row> join = df1.join(df2, columns_seq);
// or (based on Nakeuh's comment) clone the result of the join instead:
final Dataset<Row> join = cloneDataset(df1.join(df2, columns_seq));
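
For completeness, here is a sketch of how this slots into the compute method from the question; only the clone calls are new:

public Dataset<Row> compute(Dataset<Row> df1, Dataset<Row> df2, List<String> columns) {
    Seq<String> columns_seq = JavaConverters.asScalaIteratorConverter(columns.iterator()).asScala().toSeq();

    // Cloning both inputs gives every column fresh attribute IDs, so the join
    // works even though the column names are only known at runtime as a List.
    final Dataset<Row> join = cloneDataset(df1).join(cloneDataset(df2), columns_seq);

    join.show();
    join.withColumn("newColumn", abs(col("value1").minus(col("value2")))).show();
    return join;
}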

This question, Java Spark: Spark Bug Workaround for Joining Datasets with Unknown Join Column Names, was originally asked on Stack Overflow: https://stackoverflow.com/questions/51676083/
