
pyspark - Conditionally replace a row's value with another row's value from the same column, based on the value in another column, in PySpark?

Reposted · Author: 行者123 · Updated: 2023-12-04 17:30:37

There are variations of this question on the web, but none matches exactly what I want.
I have a dataframe like this:

+------+-------+------------+---------------+----------------+--------+---------+
|SEQ_ID|TOOL_ID|isfleetlevel|is_golden_limit|use_golden_limit|New_UL  |New_LL   |
+------+-------+------------+---------------+----------------+--------+---------+
|790026|9160   |0           |1              |0               |26.1184 |23.2954  |
|790026|13509  |0           |0              |1               |Infinity|-Infinity|
|790026|9162   |0           |0              |0               |25.03535|23.48585 |
|790026|13510  |0           |0              |1               |Infinity|-Infinity|
|790048|9162   |0           |0              |0               |33.5    |30.5     |
|790048|13509  |0           |0              |1               |Infinity|-Infinity|
|790048|13510  |0           |0              |0               |NaN     |NaN      |
|790048|9160   |0           |1              |0               |33.94075|30.75925 |
+------+-------+------------+---------------+----------------+--------+---------+

I want to replace the New_UL and New_LL values in rows where use_golden_limit is 1 with the values from the row where is_golden_limit is 1, within each SEQ_ID. So in this case the expected result would be:
+------+-------+------------+---------------+----------------+--------+---------+
|SEQ_ID|TOOL_ID|isfleetlevel|is_golden_limit|use_golden_limit|New_UL  |New_LL   |
+------+-------+------------+---------------+----------------+--------+---------+
|790026|9160   |0           |1              |0               |26.1184 |23.2954  |
|790026|13509  |0           |0              |1               |26.1184 |23.2954  |
|790026|9162   |0           |0              |0               |25.03535|23.48585 |
|790026|13510  |0           |0              |1               |26.1184 |23.2954  |
|790048|9162   |0           |0              |0               |33.5    |30.5     |
|790048|13509  |0           |0              |1               |33.94075|30.75925 |
|790048|13510  |0           |0              |0               |NaN     |NaN      |
|790048|9160   |0           |1              |0               |33.94075|30.75925 |
+------+-------+------------+---------------+----------------+--------+---------+

Is this possible?

Best Answer

As required, this takes only the first row where is_golden_limit is 1 for each SEQ_ID.

Create your dataframe

from pyspark.sql.window import Window
import pyspark.sql.functions as F
import numpy as np

# Named `data` to avoid shadowing the built-in `list`
data = [[790026, 9160, 0, 1, 0, 26.1184, 23.2954],
        [790026, 13509, 0, 0, 1, np.inf, -np.inf],
        [790026, 9162, 0, 0, 0, 25.03535, 23.48585],
        [790026, 13510, 0, 0, 1, np.inf, -np.inf],
        [790048, 9162, 0, 0, 0, 33.5, 30.5],
        [790048, 13509, 0, 0, 1, np.inf, -np.inf],
        [790048, 13510, 0, 0, 0, np.nan, np.nan],
        [790048, 9160, 0, 1, 0, 33.94075, 30.75925]]

df = spark.createDataFrame(data, ['SEQ_ID', 'TOOL_ID', 'isfleetlevel',
                                  'is_golden_limit', 'use_golden_limit',
                                  'New_UL', 'New_LL'])

+------+-------+------------+---------------+----------------+--------+---------+
|SEQ_ID|TOOL_ID|isfleetlevel|is_golden_limit|use_golden_limit| New_UL| New_LL|
+------+-------+------------+---------------+----------------+--------+---------+
|790026| 9160| 0| 1| 0| 26.1184| 23.2954|
|790026| 13509| 0| 0| 1|Infinity|-Infinity|
|790026| 9162| 0| 0| 0|25.03535| 23.48585|
|790026| 13510| 0| 0| 1|Infinity|-Infinity|
|790048| 9162| 0| 0| 0| 33.5| 30.5|
|790048| 13509| 0| 0| 1|Infinity|-Infinity|
|790048| 13510| 0| 0| 0| NaN| NaN|
|790048| 9160| 0| 1| 0|33.94075| 30.75925|
+------+-------+------------+---------------+----------------+--------+---------+

Select a new dataframe for the self-join

Take the first occurrence of the is_golden_limit row for each SEQ_ID:
# Order within each partition by TOOL_ID so "first" is deterministic
# (ordering by the partition key itself would leave row_number arbitrary)
w = Window().partitionBy("SEQ_ID").orderBy("TOOL_ID")
df1 = (df.filter(F.col("is_golden_limit") == 1)
         .select(F.col("is_golden_limit").alias("use_golden_limit"),
                 F.col("New_UL").alias("New_UL1"),
                 F.col("New_LL").alias("New_LL1"),
                 "SEQ_ID")
         .withColumn("row_num", F.row_number().over(w))
         .filter(F.col("row_num") == 1)
         .drop("row_num"))

+----------------+--------+--------+------+
|use_golden_limit| New_UL1| New_LL1|SEQ_ID|
+----------------+--------+--------+------+
| 1| 26.1184| 23.2954|790026|
| 1|33.94075|30.75925|790048|
+----------------+--------+--------+------+

Join conditionally and create the new columns

df1 is naturally a much smaller dataframe, so best practice is to use a broadcast join (the small dataframe is broadcast to all nodes for better co-location during the join).
df2 = df.join(df1.hint("broadcast"), on=['use_golden_limit', 'SEQ_ID'], how='left')
df3 = (df2.withColumn("New_UL_Final",
                      F.when(F.col("use_golden_limit") == 1, F.col("New_UL1"))
                       .otherwise(F.col("New_UL")))
          .withColumn("New_LL_Final",
                      F.when(F.col("use_golden_limit") == 1, F.col("New_LL1"))
                       .otherwise(F.col("New_LL")))
          .orderBy("SEQ_ID")
          .drop("New_UL", "New_LL", "New_LL1", "New_UL1"))

Select the final dataframe and call .show()
df4 = df3.select("SEQ_ID", "TOOL_ID", "isfleetlevel", "is_golden_limit",
                 "use_golden_limit",
                 F.col("New_UL_Final").alias("New_UL"),
                 F.col("New_LL_Final").alias("New_LL"))
df4.show()

Final dataframe:
+------+-------+------------+---------------+----------------+--------+--------+
|SEQ_ID|TOOL_ID|isfleetlevel|is_golden_limit|use_golden_limit| New_UL| New_LL|
+------+-------+------------+---------------+----------------+--------+--------+
|790026| 13510| 0| 0| 1| 26.1184| 23.2954|
|790026| 9162| 0| 0| 0|25.03535|23.48585|
|790026| 13509| 0| 0| 1| 26.1184| 23.2954|
|790026| 9160| 0| 1| 0| 26.1184| 23.2954|
|790048| 13509| 0| 0| 1|33.94075|30.75925|
|790048| 9160| 0| 1| 0|33.94075|30.75925|
|790048| 9162| 0| 0| 0| 33.5| 30.5|
|790048| 13510| 0| 0| 0| NaN| NaN|
+------+-------+------------+---------------+----------------+--------+--------+

Regarding "pyspark - Conditionally replace a row's value with another row's value from the same column, based on the value in another column, in PySpark?", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/60066879/
