
apache-spark - Spark 3.0: Spark aggregate functions give a different expression than expected


/Downloads/spark-3.0.1-bin-hadoop2.7/bin$ ./spark-shell


20/09/23 10:58:45 WARN Utils: Your hostname, byte-nihal resolves to a loopback address: 127.0.1.1; using 192.168.2.103 instead (on interface enp2s0)
20/09/23 10:58:45 WARN Utils: Set SPARK_LOCAL_IP if you need to bind to another address
20/09/23 10:58:49 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
Setting default log level to "WARN".
To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel).
Spark context Web UI available at http://192.168.2.103:4040
Spark context available as 'sc' (master = local[*], app id = local-1600838949311).
Spark session available as 'spark'.
Welcome to
      ____              __
     / __/__  ___ _____/ /__
    _\ \/ _ \/ _ `/ __/ '_/
   /___/ .__/\_,_/_/ /_/\_\   version 3.0.1
      /_/

Using Scala version 2.12.10 (OpenJDK 64-Bit Server VM, Java 1.8.0_265)
Type in expressions to have them evaluated.
Type :help for more information.

scala> import org.apache.spark.sql.functions._
import org.apache.spark.sql.functions._

scala> println(countDistinct("x"))
count(x)

scala> println(sumDistinct("x"))
sum(DISTINCT x)

scala> println(sum("x"))
sum(x)

scala> println(count("x"))
count(x)
Question:
  • For sumDistinct, the expression -> sum(DISTINCT x)
  • But for countDistinct, the expression -> count(x)

  • Is this some kind of bug, or a feature?
    (spark 3.0 doc)

    note: countDistinct gives the correct expression -> count(DISTINCT x) in Spark versions < 3.0

Best Answer

As @Shaido mentioned in the comments section... I have verified a few things and can point out that there is a bug in the toString code of the latest Spark version. (Whether it is a bug or a feature, I am not entirely sure.)
Behavior in Spark code versions < 3.x

    import org.apache.spark.sql.functions._

println(countDistinct("x")) ---> gives output as count(DISTINCT x)
If we look specifically at the source code of countDistinct("x"):
def countDistinct(columnName: String, columnNames: String*): Column =
  countDistinct(Column(columnName), columnNames.map(Column.apply) : _*)

def countDistinct(expr: Column, exprs: Column*): Column = {
  withAggregateFunction(Count.apply((expr +: exprs).map(_.expr)), isDistinct = true)
}
As you can see in the second overloaded method, Count.apply is passed to withAggregateFunction with isDistinct = true, so distinct values are counted:
private def withAggregateFunction(
    func: AggregateFunction,
    isDistinct: Boolean = false): Column = {
  Column(func.toAggregateExpression(isDistinct))
}
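To make the flow concrete, here is a hypothetical check one could run in a Spark 2.x shell (the session is assumed, not from the original post; countDistinct, AggregateExpression, and the isDistinct case-class field all come from the sources quoted above):

// Hypothetical Spark 2.x spark-shell session: the Column returned by
// countDistinct wraps an AggregateExpression whose isDistinct flag is true.
import org.apache.spark.sql.functions._
import org.apache.spark.sql.catalyst.expressions.aggregate.AggregateExpression

val c = countDistinct(col("x"))
val agg = c.expr.asInstanceOf[AggregateExpression]
println(agg.isDistinct) // true
println(c)              // count(DISTINCT x), rendered via toPrettySQL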
If you check the signature of withAggregateFunction, it returns a Column; and if you check Column's toString method, it renders the expression via toPrettySQL:
     def toPrettySQL(e: Expression): String = usePrettyExpression(e).sql
This calls the .sql method on the AggregateExpression. AggregateExpression, per the code below, calls back into the aggregate function's sql method via override def sql: String = aggregateFunction.sql(isDistinct). In our case the AggregateFunction is Count:
def sql(isDistinct: Boolean): String = {
  val distinct = if (isDistinct) "DISTINCT " else ""
  s"$prettyName($distinct${children.map(_.sql).mkString(", ")})"
}
According to the code above, it should return count(DISTINCT x).
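For intuition, that string-building logic can be run standalone; this sketch just copies the two lines out of the sql method, with prettyName and the child SQL passed in by hand (not Spark's actual class):

// Standalone sketch of AggregateFunction.sql(isDistinct) string building.
def sqlSketch(prettyName: String, childrenSql: Seq[String], isDistinct: Boolean): String = {
  val distinct = if (isDistinct) "DISTINCT " else ""
  s"$prettyName($distinct${childrenSql.mkString(", ")})"
}

println(sqlSketch("count", Seq("x"), isDistinct = true)) // count(DISTINCT x)
println(sqlSketch("sum", Seq("x"), isDistinct = true))   // sum(DISTINCT x)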
Now, in Spark versions >= 3.x:
I checked the source code, and the behavior of toString is a little different.
@scala.annotation.varargs
def countDistinct(expr: Column, exprs: Column*): Column =
  // For usage like countDistinct("*"), we should let analyzer expand star and
  // resolve function.
  Column(UnresolvedFunction("count", (expr +: exprs).map(_.expr), isDistinct = true))
It now uses UnresolvedFunction instead of withAggregateFunction.
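Note that isDistinct = true is still passed in, so only the rendering changes, not the aggregation itself. A quick way to confirm this in a 3.x spark-shell (an assumed session, reading the isDistinct case-class field of UnresolvedFunction):

// Hypothetical Spark 3.x spark-shell session: the DISTINCT flag is still set
// on the unresolved expression even though the rendered string drops it.
import org.apache.spark.sql.functions._
import org.apache.spark.sql.catalyst.analysis.UnresolvedFunction

val c = countDistinct(col("x"))
println(c.expr.asInstanceOf[UnresolvedFunction].isDistinct) // true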
UnresolvedFunction's toString method is very simple, as shown below:
    override def toString: String = s"'$name(${children.mkString(", ")})"
which prints count(x).. and that is why you get count(x) as the output.
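That interpolation can also be tried standalone; notice it never consults any DISTINCT flag, and it prefixes the name with a single quote (this method's way of marking the function as unresolved). A sketch mirroring the one-line toString above:

// Standalone sketch of UnresolvedFunction.toString: the DISTINCT flag is
// simply not part of the format string.
def toStringSketch(name: String, children: Seq[String]): String =
  s"'$name(${children.mkString(", ")})"

println(toStringSketch("count", Seq("x"))) // 'count(x) -- no DISTINCT anywhere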

Regarding "apache-spark - Spark 3.0: Spark aggregate functions give a different expression than expected", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/64021660/
