
scala - Spark : Replicate each row but with change in one column value

Reposted · Author: 行者123 · Updated: 2023-12-04 15:20:17

How can the following be done in Spark?

Initially:
+-----------+-----+------+
|date |col1 | col2 |
+-----------+-----+------+
|2020-08-16 | 2 | abc |
|2020-08-17 | 3 | def |
|2020-08-18 | 4 | ghi |
|2020-08-19 | 5 | jkl |
|2020-08-20 | 6 | mno |
+-----------+-----+------+

Final result:
+-----------+-----+------+
|date |col1 | col2 |
+-----------+-----+------+
|2020-08-16 | 2 | abc |
|2020-08-15 | 2 | abc |
|2020-08-17 | 3 | def |
|2020-08-16 | 3 | def |
|2020-08-18 | 4 | ghi |
|2020-08-17 | 4 | ghi |
|2020-08-19 | 5 | jkl |
|2020-08-18 | 5 | jkl |
|2020-08-20 | 6 | mno |
|2020-08-19 | 6 | mno |
+-----------+-----+------+

So essentially each row needs to be replicated with one column value changed: for each row, add a duplicate whose date column is the current value minus one day.

Best answer

Try the date_add function: build an array containing the date column and date - 1, then explode that array column.

Example:

df.show()

/*
+----------+----+----+
| date|col1|col2|
+----------+----+----+
|2020-08-16| 2| abc|
|2020-08-17| 3| def|
+----------+----+----+
*/

import org.apache.spark.sql.functions._

df.withColumn("new_date", array(col("date"), date_add(col("date"), -1))).
  drop("date").
  selectExpr("explode(new_date) as date", "*").
  drop("new_date").
  show(10, false)

/*
+----------+----+----+
|date |col1|col2|
+----------+----+----+
|2020-08-16|2 |abc |
|2020-08-15|2 |abc |
|2020-08-17|3 |def |
|2020-08-16|3 |def |
+----------+----+----+
*/
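The array-and-explode step is easy to reason about outside Spark: each input row fans out into two output rows, one with the original date and one with the date shifted back a day. A minimal plain-Python sketch of that same logic over an in-memory list of rows (hypothetical sample data, no Spark required):

```python
from datetime import date, timedelta

# Sample rows mirroring the question's (date, col1, col2) schema.
rows = [
    (date(2020, 8, 16), 2, "abc"),
    (date(2020, 8, 17), 3, "def"),
]

# For each row, emit the original date and date - 1 day,
# mirroring array(date, date_add(date, -1)) followed by explode.
result = [
    (d, col1, col2)
    for (orig, col1, col2) in rows
    for d in (orig, orig - timedelta(days=1))
]

for r in result:
    print(r)
```

Each input row yields exactly two output rows, so the result has 2 × N rows, matching the Spark output above.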

Regarding scala - Spark : Replicate each row but with change in one column value, a similar question was found on Stack Overflow: https://stackoverflow.com/questions/63459350/
