Joining two Spark DataFrames on time (TimestampType) in Python

I have two DataFrames that I want to join based on a column, with the caveat that this column is a timestamp, and the timestamps must be within a certain offset (5 seconds) of each other for the records to be joined. More specifically, a record in dates_df with date=1/3/2015:00:00:00 should be joined with the events_df record with time=1/3/2015:00:00:01, because the two timestamps are within 5 seconds of each other.

I have been trying to get this logic working with Python Spark, and it is extremely painful. How do people do joins like this in Spark?

My approach is to add two extra columns to dates_df, lower_timestamp and upper_timestamp, bounded by a 5-second offset, and then perform a conditional join. This is where it fails; more specifically:

joined_df = dates_df.join(events_df,
    dates_df.lower_timestamp < events_df.time < dates_df.upper_timestamp)

joined_df.explain()

captures only the last part of the query:
Filter (time#6 < upper_timestamp#4)
CartesianProduct
....

and it gives me incorrect results.

Do I really have to do a full cartesian join for each inequality and drop duplicates as I go along?

Here is the full code:
from datetime import datetime, timedelta

from pyspark import SparkContext, SparkConf
from pyspark.sql import SQLContext
from pyspark.sql.types import *
from pyspark.sql.functions import udf


master = 'local[*]'
app_name = 'stackoverflow_join'

conf = SparkConf().setAppName(app_name).setMaster(master)
sc = SparkContext(conf=conf)

sqlContext = SQLContext(sc)

def lower_range_func(x, offset=5):
    return x - timedelta(seconds=offset)

def upper_range_func(x, offset=5):
    return x + timedelta(seconds=offset)


lower_range = udf(lower_range_func, TimestampType())
upper_range = udf(upper_range_func, TimestampType())

dates_fields = [StructField("name", StringType(), True), StructField("date", TimestampType(), True)]
dates_schema = StructType(dates_fields)

dates = [('day_%s' % x, datetime(year=2015, day=x, month=1)) for x in range(1,5)]
dates_df = sqlContext.createDataFrame(dates, dates_schema)

dates_df.show()

# extend dates_df with time ranges
dates_df = dates_df.withColumn('lower_timestamp', lower_range(dates_df['date'])).\
    withColumn('upper_timestamp', upper_range(dates_df['date']))


event_fields = [StructField("time", TimestampType(), True), StructField("event", StringType(), True)]
event_schema = StructType(event_fields)

events = [(datetime(year=2015, day=3, month=1, second=3), 'meeting')]
events_df = sqlContext.createDataFrame(events, event_schema)

events_df.show()

# finally, join the data
joined_df = dates_df.join(events_df,
    dates_df.lower_timestamp < events_df.time < dates_df.upper_timestamp)

joined_df.show()

I get the following output:
+-----+--------------------+
| name| date|
+-----+--------------------+
|day_1|2015-01-01 00:00:...|
|day_2|2015-01-02 00:00:...|
|day_3|2015-01-03 00:00:...|
|day_4|2015-01-04 00:00:...|
+-----+--------------------+

+--------------------+-------+
| time| event|
+--------------------+-------+
|2015-01-03 00:00:...|meeting|
+--------------------+-------+


+-----+--------------------+--------------------+--------------------+--------------------+-------+
| name| date| lower_timestamp| upper_timestamp| time| event|
+-----+--------------------+--------------------+--------------------+--------------------+-------+
|day_3|2015-01-03 00:00:...|2015-01-02 23:59:...|2015-01-03 00:00:...|2015-01-03 00:00:...|meeting|
|day_4|2015-01-04 00:00:...|2015-01-03 23:59:...|2015-01-04 00:00:...|2015-01-03 00:00:...|meeting|
+-----+--------------------+--------------------+--------------------+--------------------+-------+

Best Answer

I fired up the SQL query with explain() to see how it is done, and then replicated the same behaviour in Python. First, here is how to do the same thing with Spark SQL:

dates_df.registerTempTable("dates")
events_df.registerTempTable("events")
results = sqlContext.sql("SELECT * FROM dates INNER JOIN events ON dates.lower_timestamp < events.time and events.time < dates.upper_timestamp")
results.explain()

This works, but the question was how to do it in Python, so the solution appears to be just a plain join followed by two filters:
joined_df = dates_df.join(events_df).filter(dates_df.lower_timestamp < events_df.time).filter(events_df.time < dates_df.upper_timestamp)
joined_df.explain() produces the same query plan as the Spark SQL results.explain(), so I assume this is how it is done.
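
As a side note, the same plan can also be expressed as a single join condition instead of two chained filters. The sketch below assumes the dates_df and events_df defined above; the two bounds are combined with the bitwise & operator, because Python's chained comparison a < b < c implicitly uses and, which Column objects cannot overload, and that is why the explain() output in the question kept only the upper-bound filter.

# Combine both bounds with & (not Python's `and`), since chained
# comparisons like a < b < c are not supported on Column objects.
condition = (dates_df.lower_timestamp < events_df.time) & \
            (events_df.time < dates_df.upper_timestamp)

joined_df = dates_df.join(events_df, condition)
joined_df.explain()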

This question on joining two Spark DataFrames on time (TimestampType) in Python is based on a similar question on Stack Overflow: https://stackoverflow.com/questions/30630296/
