
sql - GROUP BY with overlapping rows in PySpark SQL


The table below was created with Parquet/PySpark. The goal is to aggregate the rows where 1 < count < 5 and, separately, the rows where 2 < count < 6. Note that the row where count is 4.1 falls into both ranges.

+-----+-----+
|count|value|
+-----+-----+
| 1.1| 1|
| 1.2| 2|
| 4.1| 3|
| 5.5| 4|
| 5.6| 5|
| 5.7| 6|
+-----+-----+
Here is the code that creates the table above and reads it back as a PySpark DataFrame.
import pandas as pd
import pyarrow.parquet as pq
import pyarrow as pa
from pyspark import SparkContext, SQLContext


# create Parquet DataFrame
pdf = pd.DataFrame({
    'count': [1.1, 1.2, 4.1, 5.5, 5.6, 5.7],
    'value': [1, 2, 3, 4, 5, 6]})
table = pa.Table.from_pandas(pdf)
pq.write_to_dataset(table, r'c:/data/data.parquet')

# read Parquet DataFrame and create view
sc = SparkContext()
sql = SQLContext(sc)
df = sql.read.parquet(r'c:/data/data.parquet')
df.createTempView('data')
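A quick sanity check that the view is queryable (a minimal sketch, reusing the `sql` SQLContext and the `data` view defined above):

# Display the contents of the `data` view.
sql.sql("SELECT * FROM data ORDER BY count").show()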
The operation can be done with two separate queries.
q1 = sql.sql("""
SELECT AVG(value) AS va
FROM data
WHERE count > 1
  AND count < 5
""")
q1.show()
+---+
| va|
+---+
|2.0|
+---+
And, similarly:
q2 = sql.sql("""
SELECT AVG(value) AS va
FROM data
WHERE count > 2
  AND count < 6
""")
q2.show()
+---+
| va|
+---+
|4.5|
+---+
But I would like to do this in one efficient query.
Here is an approach that does not work: a CASE expression returns the result of the first WHEN branch that matches, so the row where count is 4.1 is assigned only to group 1 and never contributes to group 2.
qc = sql.sql("""
SELECT AVG(value) AS va,
       (CASE WHEN count > 1 AND count < 5 THEN 1
             WHEN count > 2 AND count < 6 THEN 2
             ELSE 0 END) AS id
FROM data
GROUP BY id
""")
qc.show()
The query above produces:
+---+---+
| va| id|
+---+---+
|2.0| 1|
|5.0| 2|
+---+---+
To be clear, the desired result is:
+---+---+
| va| id|
+---+---+
|2.0| 1|
|4.5| 2|
+---+---+

Best Answer

The simplest method is probably UNION ALL:

SELECT 1 AS id, AVG(value) AS va
FROM data
WHERE count > 1 AND count < 5
UNION ALL
SELECT 2 AS id, AVG(value) AS va
FROM data
WHERE count > 2 AND count < 6;
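Run through the question's setup, this produces the desired result. A minimal sketch, assuming the `sql` SQLContext and the `data` temp view created above (`qu` is just an illustrative name):

qu = sql.sql("""
SELECT 1 AS id, AVG(value) AS va
FROM data
WHERE count > 1 AND count < 5
UNION ALL
SELECT 2 AS id, AVG(value) AS va
FROM data
WHERE count > 2 AND count < 6
""")
qu.show()
# Should show id 1 with va 2.0 and id 2 with va 4.5.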
You can also phrase it as a join against a derived table of ranges:
SELECT r.id, AVG(d.value) AS va
FROM data d JOIN
     (SELECT 1 AS lo, 5 AS hi, 1 AS id
      UNION ALL
      SELECT 2 AS lo, 6 AS hi, 2 AS id) r
     ON d.count > r.lo AND d.count < r.hi
GROUP BY r.id;
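The same range join can be expressed with the DataFrame API. A minimal sketch, assuming the `df` DataFrame and `sql` SQLContext from the question; the `ranges` DataFrame and its (lo, hi, id) columns are illustrative names mirroring the derived table above:

from pyspark.sql import functions as F

# Build a small DataFrame of (lo, hi, id) ranges and join it to `df` on the
# open interval lo < count < hi. Bracket notation df['count'] is used because
# count is also the name of a DataFrame method.
ranges = sql.createDataFrame(
    [(1.0, 5.0, 1), (2.0, 6.0, 2)], ['lo', 'hi', 'id'])
result = (df.join(ranges, (df['count'] > ranges['lo']) &
                          (df['count'] < ranges['hi']))
            .groupBy('id')
            .agg(F.avg('value').alias('va'))
            .orderBy('id'))
result.show()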

Regarding sql - GROUP BY with overlapping rows in PySpark SQL, a similar question can be found on Stack Overflow: https://stackoverflow.com/questions/64939188/
