
apache-spark - Getting the last value of a group in Spark


I have a SparkR DataFrame as follows:

#Create R data.frame
custId <- c(rep(1001, 5), rep(1002, 3), 1003)
date <- c('2013-08-01','2014-01-01','2014-02-01','2014-03-01','2014-04-01','2014-02-01','2014-03-01','2014-04-01','2014-04-01')
desc <- c('New','New','Good','New', 'Bad','New','Good','Good','New')
newcust <- c(1,1,0,1,0,1,0,0,1)
df <- data.frame(custId, date, desc, newcust)

#Create SparkR DataFrame
df <- createDataFrame(df)
display(df)
custId| date | desc | newcust
--------------------------------------
1001 | 2013-08-01| New | 1
1001 | 2014-01-01| New | 1
1001 | 2014-02-01| Good | 0
1001 | 2014-03-01| New | 1
1001 | 2014-04-01| Bad | 0
1002 | 2014-02-01| New | 1
1002 | 2014-03-01| Good | 0
1002 | 2014-04-01| Good | 0
1003 | 2014-04-01| New | 1
newcust marks a new customer: it is 1 whenever a new custId appears, or whenever desc reverts to 'New' for the same custId. What I want is the last desc value for each grouping of newcust, while keeping the first date of each grouping. Below is the DataFrame I'm trying to get. How can I do this in Spark? Either PySpark or SparkR code would work.
#What I want 
custId| date | newcust | finaldesc
----------------------------------------------
1001 | 2013-08-01| 1 | New
1001 | 2014-01-01| 1 | Good
1001 | 2014-03-01| 1 | Bad
1002 | 2014-02-01| 1 | Good
1003 | 2014-04-01| 1 | New

Best Answer

I don't know SparkR, so I'll answer in PySpark.
You can achieve this with window functions.

First, let's define the "groupings of newcust": you want every row where newcust equals 1 to start a new group, so computing a cumulative sum will do the trick:

from pyspark.sql import Window
import pyspark.sql.functions as psf

w1 = Window.partitionBy("custId").orderBy("date")
df1 = df.withColumn("subgroup", psf.sum("newcust").over(w1))

+------+----------+----+-------+--------+
|custId| date|desc|newcust|subgroup|
+------+----------+----+-------+--------+
| 1001|2013-08-01| New| 1| 1|
| 1001|2014-01-01| New| 1| 2|
| 1001|2014-02-01|Good| 0| 2|
| 1001|2014-03-01| New| 1| 3|
| 1001|2014-04-01| Bad| 0| 3|
| 1002|2014-02-01| New| 1| 1|
| 1002|2014-03-01|Good| 0| 1|
| 1002|2014-04-01|Good| 0| 1|
| 1003|2014-04-01| New| 1| 1|
+------+----------+----+-------+--------+
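As an aside not in the original answer: sum() behaves as a running sum here because w1 has an orderBy, which makes Spark apply its default window frame (unbounded preceding up to the current row). Spelling the frame out explicitly should give the same result; this is an untested sketch that reuses the imports and df from above and assumes Spark 2.1+ for the frame constants:

# Hypothetical explicit-frame version of w1; assumed equivalent to the default
# frame Spark uses when an orderBy is present (RANGE UNBOUNDED PRECEDING .. CURRENT ROW)
w1_explicit = (Window.partitionBy("custId")
               .orderBy("date")
               .rangeBetween(Window.unboundedPreceding, Window.currentRow))
df1_explicit = df.withColumn("subgroup", psf.sum("newcust").over(w1_explicit))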

For each subgroup, we want to keep the first date:

w2 = Window.partitionBy("custId", "subgroup")
df2 = df1.withColumn("first_date", psf.min("date").over(w2))

+------+----------+----+-------+--------+----------+
|custId| date|desc|newcust|subgroup|first_date|
+------+----------+----+-------+--------+----------+
| 1001|2013-08-01| New| 1| 1|2013-08-01|
| 1001|2014-01-01| New| 1| 2|2014-01-01|
| 1001|2014-02-01|Good| 0| 2|2014-01-01|
| 1001|2014-03-01| New| 1| 3|2014-03-01|
| 1001|2014-04-01| Bad| 0| 3|2014-03-01|
| 1002|2014-02-01| New| 1| 1|2014-02-01|
| 1002|2014-03-01|Good| 0| 1|2014-02-01|
| 1002|2014-04-01|Good| 0| 1|2014-02-01|
| 1003|2014-04-01| New| 1| 1|2014-04-01|
+------+----------+----+-------+--------+----------+

Finally, we keep the last row of each subgroup (ordered by date):

w3 = Window.partitionBy("custId", "subgroup").orderBy(psf.desc("date"))
df3 = df2.withColumn(
    "rn",
    psf.row_number().over(w3)
).filter("rn = 1").select(
    "custId",
    psf.col("first_date").alias("date"),
    "desc"
)

+------+----------+----+
|custId| date|desc|
+------+----------+----+
| 1001|2013-08-01| New|
| 1001|2014-01-01|Good|
| 1001|2014-03-01| Bad|
| 1002|2014-02-01|Good|
| 1003|2014-04-01| New|
+------+----------+----+
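If you want the columns to line up exactly with the output requested in the question (newcust and finaldesc), a small tweak to the final select should do it. This is only a sketch building on df2, w3, and psf from the answer above; since every subgroup starts at a row where newcust = 1, that column can be filled with a literal 1:

# Sketch: same rn = 1 filter as df3, with columns renamed/extended
# to match the requested output (finaldesc, constant newcust = 1)
result = df2.withColumn(
    "rn",
    psf.row_number().over(w3)
).filter("rn = 1").select(
    "custId",
    psf.col("first_date").alias("date"),
    psf.lit(1).alias("newcust"),
    psf.col("desc").alias("finaldesc")
)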

Regarding apache-spark - Getting the last value of a group in Spark, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/45739037/
