
scala - Selecting columns by name after a pivot with multiple aggregate columns in Spark Scala

Reposted. Author: 行者123. Updated: 2023-12-03 00:37:14

I am trying to aggregate multiple columns after a pivot in Scala Spark 2.0.1:

scala> val df = List((1, 2, 3, None), (1, 3, 4, Some(1))).toDF("a", "b", "c", "d")
df: org.apache.spark.sql.DataFrame = [a: int, b: int ... 2 more fields]

scala> df.show
+---+---+---+----+
| a| b| c| d|
+---+---+---+----+
| 1| 2| 3|null|
| 1| 3| 4| 1|
+---+---+---+----+

scala> val pivoted = df.groupBy("a").pivot("b").agg(max("c"), max("d"))
pivoted: org.apache.spark.sql.DataFrame = [a: int, 2_max(`c`): int ... 3 more fields]

scala> pivoted.show
+---+----------+----------+----------+----------+
| a|2_max(`c`)|2_max(`d`)|3_max(`c`)|3_max(`d`)|
+---+----------+----------+----------+----------+
| 1| 3| null| 4| 1|
+---+----------+----------+----------+----------+

So far, I have not been able to select or rename these columns:

scala> pivoted.select("3_max(`d`)")
org.apache.spark.sql.AnalysisException: syntax error in attribute name: 3_max(`d`);

scala> pivoted.select("`3_max(`d`)`")
org.apache.spark.sql.AnalysisException: syntax error in attribute name: `3_max(`d`)`;

scala> pivoted.select("`3_max(d)`")
org.apache.spark.sql.AnalysisException: cannot resolve '`3_max(d)`' given input columns: [2_max(`c`), 3_max(`d`), a, 2_max(`d`), 3_max(`c`)];

There must be a simple trick here. Any ideas? Thanks.

Best Answer

This looks like a bug: the backticks in the generated column names are what trip up the attribute parser. One fix here is to strip the backticks out of the column names:

val pivotedNewName = pivoted.columns.foldLeft(pivoted)((df, col) =>
  df.withColumnRenamed(col, col.replace("`", "")))
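To see what the rename produces, here is the same transformation applied to the raw name strings (plain Scala, no Spark session needed; the column list is copied from the pivoted schema above):

```scala
// The generated pivot column names, as shown in the schema above.
val cols = Seq("a", "2_max(`c`)", "2_max(`d`)", "3_max(`c`)", "3_max(`d`)")

// Stripping the backticks leaves names that Spark's attribute parser accepts.
val cleaned = cols.map(_.replace("`", ""))
// cleaned: Seq("a", "2_max(c)", "2_max(d)", "3_max(c)", "3_max(d)")
```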

Now you can select by column name as usual:

pivotedNewName.select("2_max(c)").show
+--------+
|2_max(c)|
+--------+
| 3|
+--------+
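If the positional names like `2_max(c)` are still awkward, a small string rewrite can turn them into plain identifiers before renaming. The helper below is a hypothetical plain-string sketch (the `friendly` name and the `value_agg(col)` pattern are assumptions based on the schema shown above); the actual rename would again go through `withColumnRenamed` as in the `foldLeft` above:

```scala
// Hypothetical helper: turn "2_max(c)" into "max_c_2", a valid unquoted identifier.
val Pattern = """(\d+)_(\w+)\((\w+)\)""".r

def friendly(name: String): String = name match {
  case Pattern(value, agg, col) => s"${agg}_${col}_$value"
  case other                    => other // leave non-pivot columns like "a" alone
}

val renamed = Seq("a", "2_max(c)", "3_max(d)").map(friendly)
// renamed: Seq("a", "max_c_2", "max_d_3")
```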

This answer is based on a similar question found on Stack Overflow: https://stackoverflow.com/questions/41797744/
