
apache-spark - Spark DataFrame filter operation


I have a Spark DataFrame to which I apply a filter string. The filter selects only some rows, but I would like to know the reason why the other rows were not selected.
Example:

DataFrame columns: customer_id|col_a|col_b|col_c|col_d
Filter string: col_a > 0 & col_b > 4 & col_c < 0 & col_d=0
(screenshot: expected output showing each excluded row annotated with a reason_for_exclusion column)

and so on...
reason_for_exclusion can be any string or letter, as long as it states why a particular row was excluded.

I could split the filter string and apply each filter separately, but my filter strings are very large and that would be inefficient, so I'm just checking whether there is a better way to do this. A sketch of that split-and-check approach is shown below.
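For illustration, a minimal sketch of the split-and-check approach (assuming the conditions are only ever joined by & and that df is the DataFrame above) could look like this:

from pyspark.sql.functions import expr

filter_str = "col_a > 0 & col_b > 4 & col_c < 0 & col_d=0"

# evaluate each condition on its own; every condition triggers a separate
# pass over the data, which gets slow for long filter strings
for condition in filter_str.split("&"):
    condition = condition.strip()
    failing = df.filter(~expr(condition))
    print(condition, failing.count())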

Thanks

Best answer

You would have to check every condition within the filter expression, which can be expensive compared to the simple filter operation itself.
I would suggest showing the same reason for all filtered rows, since each of them fails at least one condition in that expression. It's not pretty, but I'd prefer it because it is efficient, especially when you have to handle very large DataFrames.

from pyspark.sql.functions import expr, when, lit

data = [(1, 1, 5, -3, 0), (2, 0, 10, -1, 0), (3, 0, 10, -4, 1)]
df = spark.createDataFrame(data, ["customer_id", "col_a", "col_b", "col_c", "col_d"])

filter_expr = "col_a > 0 AND col_b > 4 AND col_c < 0 AND col_d=0"

# rows that fail the filter get the whole expression as the reason,
# rows that pass it get null
filtered_df = df.withColumn(
    "reason_for_exclusion",
    when(~expr(filter_expr), lit(filter_expr)).otherwise(lit(None))
)
filtered_df.show(truncate=False)

Output:
+-----------+-----+-----+-----+-----+-------------------------------------------------+
|customer_id|col_a|col_b|col_c|col_d|reason_for_exclusion |
+-----------+-----+-----+-----+-----+-------------------------------------------------+
|1 |1 |5 |-3 |0 |null |
|2 |0 |10 |-1 |0 |col_a > 0 AND col_b > 4 AND col_c < 0 AND col_d=0|
|3 |0 |10 |-4 |1 |col_a > 0 AND col_b > 4 AND col_c < 0 AND col_d=0|
+-----------+-----+-----+-----+-----+-------------------------------------------------+
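Note that expr expects Spark SQL syntax, so a filter string written with & separators, as in the question, would first need to be rewritten with AND. A naive conversion (assuming & is only ever used as a top-level separator and never appears inside string literals) might be:

raw_filter = "col_a > 0 & col_b > 4 & col_c < 0 & col_d=0"
# assumption: "&" only separates conditions in the original filter string
filter_expr = raw_filter.replace("&", "AND")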

EDIT:

Now, if you really want to display only the failing conditions, you can turn each condition into a separate column and evaluate them with a DataFrame select. You then have to check which columns evaluate to False to know which condition failed.

You could name these columns <PREFIX>_<condition> so that you can easily identify them later. Here is a complete example:
from pyspark import StorageLevel
from pyspark.sql.functions import col, expr, when, lit, array, array_except

filter_expr = "col_a > 0 AND col_b > 4 AND col_c < 0 AND col_d=0"
COLUMN_FILTER_PREFIX = "filter_validation_"
original_columns = [col(c) for c in df.columns]

# create one boolean column per condition in the filter expression
condition_columns = [expr(f).alias(COLUMN_FILTER_PREFIX + f) for f in filter_expr.split("AND")]

# evaluate the conditions to True/False and persist the DF with the calculated columns
filtered_df = df.select(original_columns + condition_columns)
filtered_df = filtered_df.persist(StorageLevel.MEMORY_AND_DISK)

# collect the columns we calculated for the filter
filter_col_names = [c for c in filtered_df.columns if COLUMN_FILTER_PREFIX in c]
filter_columns = []
for c in filter_col_names:
    # keep the condition text when it evaluates to False, null otherwise
    filter_columns.append(
        when(~col(f"`{c}`"), lit(c.replace(COLUMN_FILTER_PREFIX, "")))
    )

# gather the failed conditions and drop the nulls from the array
array_reason_filter = array_except(array(*filter_columns), array(lit(None)))
df_with_filter_reason = filtered_df.withColumn("reason_for_exclusion", array_reason_filter)

df_with_filter_reason.select(*original_columns, col("reason_for_exclusion")).show(truncate=False)

# output
+-----------+-----+-----+-----+-----+----------------------+
|customer_id|col_a|col_b|col_c|col_d|reason_for_exclusion |
+-----------+-----+-----+-----+-----+----------------------+
|1 |1 |5 |-3 |0 |[] |
|2 |0 |10 |-1 |0 |[col_a > 0 ] |
|3 |0 |10 |-4 |1 |[col_a > 0 , col_d=0]|
+-----------+-----+-----+-----+-----+----------------------+
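If you only care about the rows that were filtered out, a small follow-up (assuming the df_with_filter_reason, original_columns and filtered_df from the example above) could keep just the rows whose reason array is non-empty and release the cached intermediate DataFrame afterwards:

from pyspark.sql.functions import col, size

excluded = (df_with_filter_reason
            .filter(size(col("reason_for_exclusion")) > 0)
            .select(*original_columns, col("reason_for_exclusion")))
excluded.show(truncate=False)

# release the persisted intermediate DataFrame once it is no longer needed
filtered_df.unpersist()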

Regarding apache-spark - Spark DataFrame filter operation, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/59229793/
