
scala - How to change the attribute order in an Apache SparkSQL `Project` operator?


This is a Catalyst-specific question.

Before my rule is applied, queryExecution.optimizedPlan looks as follows:

01 Project [x#9, p#10, q#11, if (isnull(q#11)) null else UDF(q#11) AS udfB_10#28, if (isnull(p#10)) null else UDF(p#10) AS udfA_99#93]
02 +- InMemoryRelation [x#9, p#10, q#11], true, 10000, StorageLevel(disk, memory, deserialized, 1 replicas)
03    :  +- *SerializeFromObject [assertnotnull(input[0, eic.R0, true], top level non-flat input object).x AS x#9, unwrapoption(IntegerType, assertnotnull(input[0, eic.R0, true], top level non-flat input object).p) AS p#10, unwrapoption(IntegerType, assertnotnull(input[0, eic.R0, true], top level non-flat input object).q) AS q#11]
04    :     +- *MapElements <function1>, obj#8: eic.R0
05    :        +- *DeserializeToObject newInstance(class java.lang.Long), obj#7: java.lang.Long
06    :           +- *Range (0, 3, step=1, splits=Some(2))

On line 01, I need to swap the positions of udfA and udfB, like this:

01 Project [x#9, p#10, q#11, if (isnull(p#10)) null else UDF(p#10) AS udfA_99#93, if (isnull(q#11)) null else UDF(q#11) AS udfB_10#28]

When I try to change the attribute order of a projection operation in SparkSQL through a Catalyst optimization, the query result is corrupted with invalid values. Perhaps I am not doing everything that is needed. I only change the order of the NamedExpression objects in the fields parameter:

import org.apache.spark.sql.catalyst.expressions.NamedExpression
import org.apache.spark.sql.catalyst.plans.logical.{LogicalPlan, Project}
import org.apache.spark.sql.catalyst.rules.Rule

object ReorderColumnsOnProjectOptimizationRule extends Rule[LogicalPlan] {

  def apply(plan: LogicalPlan): LogicalPlan = plan resolveOperators {
    case Project(fields: Seq[NamedExpression], child) =>
      if (checkCondition(fields)) Project(newFieldsObject(fields), child)
      else Project(fields, child)
  }

  private def newFieldsObject(fields: Seq[NamedExpression]): Seq[NamedExpression] = {
    // compare the UDFs' computation costs and return the reordered NamedExpression list
    . . .
  }

  private def checkCondition(fields: Seq[NamedExpression]): Boolean = {
    // compare the UDFs' computation costs and decide whether to change the field order
    . . .
  }
  . . .
}

Note: I register my rule via the extraOptimizations field of the SparkSQL experimental object:

spark.experimental.extraOptimizations = Seq(ReorderColumnsOnProjectOptimizationRule)

Any suggestion would be a great help.

Edit 1

By the way, I created a notebook on Databricks for testing purposes. See this link for more detail.

If line 60 is commented out, the optimization is invoked and the error occurs:

. . .
58   // Do the cheaper UDF first, so I need to change the field order
59   myPriorityList.size == 2 && myPriorityList(0) > myPriorityList(1)
60   false
61 }

What am I missing?

Edit 2

Consider the following code from the world of compiler optimization, which is almost analogous:

if ( really_slow_test(with,plenty,of,parameters)
     && slower_test(with,some,parameters)
     && fast_test // with no parameters
   )
{
  ...then code...
}

This code evaluates an expensive function first and then, on success, proceeds to evaluate the rest of the expression. But even when the first test fails and evaluation is short-circuited, there is a significant performance penalty, because the fat really_slow_test(...) is always evaluated. While preserving program correctness, the expression can be rearranged as follows:

if ( fast_test
     && slower_test(with,some,parameters)
     && really_slow_test(with,plenty,of,parameters)
   )
{
  ...then code...
}

My goal is to run the fastest UDFs first.

Best Answer

As stefanobaghino said, the analyzer's schema is cached once analysis completes, and the optimizer is not supposed to change it.
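
To make the failure mode concrete, here is a minimal sketch of the problem; this is my illustration rather than code from the thread, and it assumes a SparkSession named spark is in scope:

// a minimal sketch, assuming a SparkSession named `spark` is in scope
val df = spark.range(1).selectExpr("id AS a", "id + 1 AS b")

// df.schema is derived from the *analyzed* plan and cached
println(df.schema.fieldNames.mkString(", "))  // prints: a, b

// if an optimizer rule later swapped the two Project expressions, rows would
// still be decoded against the cached (a, b) schema, so values would surface
// under the wrong column names -- exactly the "invalid values" described above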

If you are on Spark 2.2, you can take advantage of SPARK-18127 and apply the rule in the analyzer instead.
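
The core of the change is the injection point; a minimal sketch, assuming the rule is written as a case class taking a SparkSession (as in the full application below):

// Spark 2.2+: inject the rule into the analyzer, before the schema is cached
val spark = SparkSession
  .builder()
  .withExtensions(_.injectResolutionRule(ReorderColumnsOnProjectOptimizationRule))
  .getOrCreate()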

If you run this dummy application,

package panos.bletsos

import org.apache.spark.sql.catalyst.expressions.NamedExpression
import org.apache.spark.sql.{Dataset, SparkSession}
import org.apache.spark.sql.catalyst.rules._
import org.apache.spark.sql.catalyst.plans.logical._
import org.apache.spark.sql.SparkSessionExtensions


case class ReorderColumnsOnProjectOptimizationRule(spark: SparkSession) extends Rule[LogicalPlan] {
  def apply(plan: LogicalPlan): LogicalPlan = plan transformDown {
    case p: Project => {
      val fields = p.projectList
      if (checkConditions(fields, p.child)) {
        val modifiedFieldsObject = optimizePlan(fields, p.child, plan)
        val projectUpdated = p.copy(modifiedFieldsObject, p.child)
        projectUpdated
      } else {
        p
      }
    }
  }

  private def checkConditions(fields: Seq[NamedExpression], child: LogicalPlan): Boolean = {
    // compare the UDFs' computation costs and return the decision as a Boolean
    val needsOptimization = listHaveTwoUDFsEnabledForOptimization(fields)
    if (needsOptimization) println(fields.mkString(" | "))
    needsOptimization
  }

  private def listHaveTwoUDFsEnabledForOptimization(fields: Seq[NamedExpression]): Boolean = {
    // a simple priority order based on the UDF name suffix
    val myPriorityList = fields.map((e) => {
      if (e.name.toString().startsWith("udf")) {
        Integer.parseInt(e.name.toString().split("_")(1))
      } else {
        0
      }
    }).filter(e => e > 0)

    // Do the cheaper UDF first, so I need to change the field order
    myPriorityList.size == 2 && myPriorityList(0) > myPriorityList(1)
  }

  private def optimizePlan(fields: Seq[NamedExpression],
                           child: LogicalPlan,
                           plan: LogicalPlan): Seq[NamedExpression] = {
    // change the order of the field list and return the modified projection list
    val myListWithUDF = fields.filter((e) => e.name.toString().startsWith("udf"))
    if (myListWithUDF.size != 2) {
      throw new UnsupportedOperationException(
        s"The UDF list has ${myListWithUDF.size} elements, but exactly 2 are expected.")
    }
    val myModifiedList: Seq[NamedExpression] = Seq(myListWithUDF(1), myListWithUDF(0))
    val myListWithoutUDF = fields.filter((e) => !e.name.toString().startsWith("udf"))
    val modifiedFieldsObject = getFieldsReordered(myListWithoutUDF, myModifiedList)
    val msg = "•••• optimizePlan called: " + fields.size + " columns on Project.\n" +
      "•••• fields: " + fields.mkString(" | ") + "\n" +
      "•••• UDFs to reorder:\n" + myListWithUDF.mkString(" | ") + "\n" +
      "•••• field list without UDFs: " + myListWithoutUDF.mkString(" | ") + "\n" +
      "•••• modifiedFieldsObject: " + modifiedFieldsObject.mkString(" | ") + "\n"
    println(msg)
    modifiedFieldsObject
  }

  private def getFieldsReordered(fieldsWithoutUDFs: Seq[NamedExpression],
                                 fieldsWithUDFs: Seq[NamedExpression]): Seq[NamedExpression] = {
    fieldsWithoutUDFs.union(fieldsWithUDFs)
  }
}

case class R0(x: Int,
  p: Option[Int] = Some((new scala.util.Random).nextInt(999)),
  q: Option[Int] = Some((new scala.util.Random).nextInt(999))
)

object App {
  def main(args: Array[String]): Unit = {
    type ExtensionsBuilder = SparkSessionExtensions => Unit
    // inject the rule here
    val f: ExtensionsBuilder = { e =>
      e.injectResolutionRule(ReorderColumnsOnProjectOptimizationRule)
    }

    val spark = SparkSession
      .builder()
      .withExtensions(f)
      .getOrCreate()

    def createDsR0(spark: SparkSession): Dataset[R0] = {
      import spark.implicits._
      val ds = spark.range(3)
      val xdsR0 = ds.map((i) => {
        R0(i.intValue() + 1)
      })
      // IMPORTANT: the cache here is mandatory
      xdsR0.cache()
    }

    val dsR0 = createDsR0(spark)
    val udfA_99 = (p: Int) => Math.cos(p * p)  // higher-cost function
    val udfB_10 = (q: Int) => q + 1            // lower-cost function

    println("*** I am going to register my UDFs ***")
    spark.udf.register("myUdfA", udfA_99)
    spark.udf.register("myUdfB", udfB_10)

    val dsR1 = {
      val ret1DS = dsR0.selectExpr("x", "p", "q", "myUdfA(p) as udfA_99")
      val result = ret1DS.cache()
      dsR0.show()
      result.show()

      result
    }

    val dsR2 = {
      val ret2DS = dsR1.selectExpr("x", "p", "q", "udfA_99", "myUdfB(p) as udfB_10")
      val result = ret2DS.cache()
      dsR0.show()
      dsR1.show()
      result.show()

      result
    }
  }
}

it will print:

+---+---+---+-------+-------------------+
| x| p| q|udfB_10| udfA_99|
+---+---+---+-------+-------------------+
| 1|392|746| 393|-0.7508388993643841|
| 2|778|582| 779| 0.9310990915956336|
| 3|661| 34| 662| 0.6523545972748773|
+---+---+---+-------+-------------------+
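
As an optional sanity check (my addition, not part of the original output), the plans can be inspected to confirm that the Project now lists udfB_10 before udfA_99:

// optional: print the analyzed/optimized/physical plans of the final Dataset
dsR2.explain(true)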

Regarding "scala - How to change the attribute order in an Apache SparkSQL `Project` operator?", a similar question can be found on Stack Overflow: https://stackoverflow.com/questions/48612353/
