
python - Get local time in pyspark based on a column

Reposted. Author: 太空宇宙. Updated: 2023-11-04 04:34:48

In pyspark, local time can be obtained from UTC time by passing a timestamp and a timezone to the function from_utc_timestamp:

>>> from pyspark.sql.functions import from_utc_timestamp
>>> df = spark.createDataFrame([('1997-02-28 10:30:00',)], ['t'])
>>> df.select(from_utc_timestamp(df.t, "PST").alias('t')).collect()
[Row(t=datetime.datetime(1997, 2, 28, 2, 30))]

Here the timezone is provided as a string literal ("PST"). But what if one has the following data structure:

+-------------------------+---------+
|utc_time                 |timezone |
+-------------------------+---------+
|2018-08-03T23:27:30.000Z |PST      |
|2018-08-03T23:27:30.000Z |GMT      |
|2018-08-03T23:27:30.000Z |SGT      |
+-------------------------+---------+

How can the following new column be computed (preferably without a UDF)?

+-------------------------+---------+------------------------+
|utc_time                 |timezone |local_time              |
+-------------------------+---------+------------------------+
|2018-08-03T23:27:30.000Z |PST      |2018-08-03T15:27:30.000 |
|2018-08-03T23:27:30.000Z |GMT      |2018-08-04T00:27:30.000 |
|2018-08-03T23:27:30.000Z |SGT      |2018-08-04T07:27:30.000 |
+-------------------------+---------+------------------------+

Best answer

Using pyspark.sql.functions.expr() rather than the DataFrame API, this can be achieved as follows:

import pyspark.sql.functions as F

df = df.select(
    '*',
    F.expr('from_utc_timestamp(utc_time, timezone)').alias("timestamp_local")
)
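As a quick sanity check without a Spark session, the same conversion can be sketched with the standard library, treating each abbreviation as a fixed UTC offset. This is an assumption for illustration only: DST-aware zone IDs (which is how Java resolves "PST") would shift the Pacific result by one hour in August.

```python
from datetime import datetime, timedelta, timezone

# The UTC instant from the example table above
utc_time = datetime(2018, 8, 3, 23, 27, 30, tzinfo=timezone.utc)

# Fixed UTC offsets assumed for illustration (real zones may observe DST)
offsets = {"PST": -8, "GMT": 0, "SGT": +8}

local_times = {
    abbr: utc_time.astimezone(timezone(timedelta(hours=h))).strftime("%Y-%m-%dT%H:%M:%S")
    for abbr, h in offsets.items()
}
print(local_times["SGT"])  # 2018-08-04T07:27:30
```

Under the fixed-offset assumption, the PST and SGT rows reproduce the expected table; this is only a cross-check on the arithmetic, not a substitute for from_utc_timestamp.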

However, using 3-letter timezone IDs is not recommended. According to the Java docs:

For compatibility with JDK 1.1.x, some other three-letter time zone IDs (such as "PST", "CTT", "AST") are also supported. However, their use is deprecated because the same abbreviation is often used for multiple time zones (for example, "CST" could be U.S. "Central Standard Time" and "China Standard Time"), and the Java platform can then only recognize one of them.
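The ambiguity the Java docs describe can be seen directly with the standard-library zoneinfo module (Python 3.9+). The two IANA region IDs below are assumed stand-ins for the two meanings of "CST"; unlike the abbreviation, each resolves unambiguously:

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo

utc = datetime(2018, 8, 3, 23, 27, 30, tzinfo=timezone.utc)

# "CST" is ambiguous; explicit IANA region IDs resolve differently:
us_central = utc.astimezone(ZoneInfo("America/Chicago"))  # CDT in August, UTC-5
china = utc.astimezone(ZoneInfo("Asia/Shanghai"))         # China Standard Time, UTC+8

print(us_central.isoformat())
print(china.isoformat())
```

Storing full region IDs (e.g. "America/Los_Angeles" instead of "PST") in the timezone column avoids this ambiguity when calling from_utc_timestamp.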

Regarding "python - Get local time in pyspark based on a column", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/51975317/
