scala - Spark Scala: moving average for multiple columns

Input:

val customers = sc.parallelize(List(
  ("Alice", "2016-05-01", 50.00, 4),
  ("Alice", "2016-05-03", 45.00, 2),
  ("Alice", "2016-05-04", 55.00, 4),
  ("Bob",   "2016-05-01", 25.00, 6),
  ("Bob",   "2016-05-04", 29.00, 7),
  ("Bob",   "2016-05-06", 27.00, 10)
)).toDF("name", "date", "amountSpent", "NumItems")

Program:

// Import the window functions.
import org.apache.spark.sql.expressions.Window
import org.apache.spark.sql.functions._

// Create a window spec.
val wSpec1 = Window.partitionBy("name").orderBy("date").rowsBetween(-1, 1)

In this window spec, the data is partitioned by customer, and each customer's data is ordered by date. The window frame is defined to start at -1 (one row before the current row) and end at 1 (one row after the current row), giving a sliding window of 3 rows. The task is to compute a window-based sum over a list of columns, in this case "amountSpent" and "NumItems", but the real problem may involve up to hundreds of columns.

Below is a solution that performs the window-based sum for each column separately. However, how can the sums be computed more efficiently, given that the sliding-window rows do not need to be looked up again for every column?
// Calculate the windowed sum of amountSpent
customers.withColumn("sumSpent", sum(customers("amountSpent")).over(wSpec1)).show()

+-----+----------+-----------+--------+--------+
| name| date|amountSpent|NumItems|sumSpent|
+-----+----------+-----------+--------+--------+
|Alice|2016-05-01| 50.0| 4| 95.0|
|Alice|2016-05-03| 45.0| 2| 150.0|
|Alice|2016-05-04| 55.0| 4| 100.0|
| Bob|2016-05-01| 25.0| 6| 54.0|
| Bob|2016-05-04| 29.0| 7| 81.0|
| Bob|2016-05-06| 27.0| 10| 56.0|
+-----+----------+-----------+--------+--------+

// Calculate the windowed sum of NumItems
customers.withColumn("sumItems", sum(customers("NumItems")).over(wSpec1)).show()

+-----+----------+-----------+--------+--------+
| name| date|amountSpent|NumItems|sumItems|
+-----+----------+-----------+--------+--------+
|Alice|2016-05-01| 50.0| 4| 6|
|Alice|2016-05-03| 45.0| 2| 10|
|Alice|2016-05-04| 55.0| 4| 6|
| Bob|2016-05-01| 25.0| 6| 13|
| Bob|2016-05-04| 29.0| 7| 23|
| Bob|2016-05-06| 27.0| 10| 17|
+-----+----------+-----------+--------+--------+

Best Answer

At present, I think it is not possible to update multiple columns with a single Window function call. You can make it behave as if it all happened at once, as shown below:

val customers = sc.parallelize(List(
  ("Alice", "2016-05-01", 50.00, 4),
  ("Alice", "2016-05-03", 45.00, 2),
  ("Alice", "2016-05-04", 55.00, 4),
  ("Bob",   "2016-05-01", 25.00, 6),
  ("Bob",   "2016-05-04", 29.00, 7),
  ("Bob",   "2016-05-06", 27.00, 10)
)).toDF("name", "date", "amountSpent", "NumItems")

import org.apache.spark.sql.expressions.Window
import org.apache.spark.sql.functions._

// Create a window spec.
val wSpec1 = Window.partitionBy("name").orderBy("date").rowsBetween(-1, 1)
var tempdf = customers
val colNames = List("amountSpent", "NumItems")
for (column <- colNames) {
  tempdf = tempdf.withColumn(column + "Sum", sum(tempdf(column)).over(wSpec1))
}
tempdf.show(false)

You should get the following output:
+-----+----------+-----------+--------+--------------+-----------+
|name |date |amountSpent|NumItems|amountSpentSum|NumItemsSum|
+-----+----------+-----------+--------+--------------+-----------+
|Bob |2016-05-01|25.0 |6 |54.0 |13 |
|Bob |2016-05-04|29.0 |7 |81.0 |23 |
|Bob |2016-05-06|27.0 |10 |56.0 |17 |
|Alice|2016-05-01|50.0 |4 |95.0 |6 |
|Alice|2016-05-03|45.0 |2 |150.0 |10 |
|Alice|2016-05-04|55.0 |4 |100.0 |6 |
+-----+----------+-----------+--------+--------------+-----------+
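
As a side note on the efficiency question: the mutable var and the explicit loop can be avoided by folding over the column list, or by adding all of the windowed sums in a single select. The following is a minimal sketch assuming the customers, wSpec1 and colNames values (and the functions import) defined above; the value names summed and summedInOnePass are just illustrative.

// Fold over the column names instead of mutating a var.
val summed = colNames.foldLeft(customers) { (df, c) =>
  df.withColumn(c + "Sum", sum(col(c)).over(wSpec1))
}

// Or build every windowed sum at once in a single select.
val summedInOnePass = customers.select(
  (col("*") +: colNames.map(c => sum(col(c)).over(wSpec1).as(c + "Sum"))): _*
)

summedInOnePass.show(false)

Either way, window expressions that share the same WindowSpec are, to my understanding, evaluated in a single Window operator, so each partition is sorted and scanned once rather than once per column; summedInOnePass.explain() can be used to check the physical plan on a given Spark version.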

Regarding scala - Spark Scala: moving average for multiple columns, a similar question can be found on Stack Overflow: https://stackoverflow.com/questions/44783237/
