python - How to perform a switch statement with Apache Spark DataFrames (Python)


I'm trying to perform an operation on my data where certain values get mapped to one of a list of predetermined values if they match one of the criteria, or to a fall-through value otherwise.

This is what the equivalent SQL would look like:

CASE
    WHEN user_agent LIKE '%CanvasAPI%' THEN 'api'
    WHEN user_agent LIKE '%candroid%' THEN 'mobile_app_android'
    WHEN user_agent LIKE '%iCanvas%' THEN 'mobile_app_ios'
    WHEN user_agent LIKE '%CanvasKit%' THEN 'mobile_app_ios'
    WHEN user_agent LIKE '%Windows NT%' THEN 'desktop'
    WHEN user_agent LIKE '%MacBook%' THEN 'desktop'
    WHEN user_agent LIKE '%iPhone%' THEN 'mobile'
    WHEN user_agent LIKE '%iPod Touch%' THEN 'mobile'
    WHEN user_agent LIKE '%iPad%' THEN 'mobile'
    WHEN user_agent LIKE '%iOS%' THEN 'mobile'
    WHEN user_agent LIKE '%CrOS%' THEN 'desktop'
    WHEN user_agent LIKE '%Android%' THEN 'mobile'
    WHEN user_agent LIKE '%Linux%' THEN 'desktop'
    WHEN user_agent LIKE '%Mac OS%' THEN 'desktop'
    WHEN user_agent LIKE '%Macintosh%' THEN 'desktop'
    ELSE 'other_unknown'
END AS user_agent_type

I'm very new to Spark, so my first attempt at this program used a lookup dictionary and adjusted the values line by line in an RDD, like so:

USER_AGENT_VALS = {
    'CanvasAPI': 'api',
    'candroid': 'mobile_app_android',
    'iCanvas': 'mobile_app_ios',
    'CanvasKit': 'mobile_app_ios',
    'Windows NT': 'desktop',
    'MacBook': 'desktop',
    'iPhone': 'mobile',
    'iPod Touch': 'mobile',
    'iPad': 'mobile',
    'iOS': 'mobile',
    'CrOS': 'desktop',
    'Android': 'mobile',
    'Linux': 'desktop',
    'Mac OS': 'desktop',
    'Macintosh': 'desktop'
}

def parse_requests(line: list,
                   id_data: dict,
                   user_vals: dict = USER_AGENT_VALS):
    """
    Expects an input list which maps to the following indexes:
        0: user_id
        1: context(course)_id
        2: request_month
        3: user_agent_type

    :param line: A list of values.
    :return: A list
    """
    found = False
    for key, value in user_vals.items():
        if key in line[3]:
            found = True
            line[3] = value
    if not found:
        line[3] = 'other_unknown'
    # Retrieves the session_id count from the id_data dictionary using
    # the user_id as the key.
    session_count = id_data[str(line[0])]
    line.append(session_count)
    line.extend(config3.ETL_LIST)
    return [str(item) for item in line]

My current code has the data in a DataFrame, and I'm not sure how to perform the above operation on it most efficiently. I know DataFrames are immutable, so the result will need to be returned as a new DataFrame, but my question is how best to do that. Here is my code:

from boto3 import client
import psycopg2 as ppg2
from pyspark import SparkConf, SparkContext
from pyspark.sql import SQLContext
from pyspark.sql.functions import current_date, date_format, lit
from pyspark.sql.types import StringType

EMR_CLIENT = client('emr')
conf = SparkConf().setAppName('Canvas Requests Logs')
sc = SparkContext(conf=conf)
sql_context = SQLContext(sc)
# for dependencies
# sc.addPyFile()

USER_AGENT_VALS = {
    'CanvasAPI': 'api',
    'candroid': 'mobile_app_android',
    'iCanvas': 'mobile_app_ios',
    'CanvasKit': 'mobile_app_ios',
    'Windows NT': 'desktop',
    'MacBook': 'desktop',
    'iPhone': 'mobile',
    'iPod Touch': 'mobile',
    'iPad': 'mobile',
    'iOS': 'mobile',
    'CrOS': 'desktop',
    'Android': 'mobile',
    'Linux': 'desktop',
    'Mac OS': 'desktop',
    'Macintosh': 'desktop'
}

if __name__ == '__main__':
    df = sql_context.read.parquet(
        r'/Users/mharris/PycharmProjects/etl3/pyspark/Datasets/'
        r'usage_data.gz.parquet')

    course_data = df.filter(df['context_type'] == 'Course')
    request_data = df.select(
        df['user_id'],
        df['context_id'].alias('course_id'),
        date_format(df['request_timestamp'], 'MM').alias('request_month'),
        df['user_agent']
    )

    sesh_id_data = df.groupBy('user_id').count()

    joined_data = request_data.join(
        sesh_id_data,
        on=request_data['user_id'] == sesh_id_data['user_id']
    ).drop(sesh_id_data['user_id'])

    all_fields = joined_data.withColumn(
        'etl_requests_usage', lit('DEV')
    ).withColumn(
        'etl_datetime_local', current_date()
    ).withColumn(
        'etl_transformation_name', lit('agg_canvas_logs_user_agent_types')
    ).withColumn(
        'etl_pdi_version', lit(r'Apache Spark')
    ).withColumn(
        'etl_pdi_build_version', lit(r'1.6.1')
    ).withColumn(
        'etl_pdi_hostname', lit(r'N/A')
    ).withColumn(
        'etl_pdi_ipaddress', lit(r'N/A')
    ).withColumn(
        'etl_checksum_md5', lit(r'N/A')
    )

As a PS, is there a better way to add the columns than the way I'm doing it?

Best Answer

If you want, you can even use a SQL expression directly:

expr = """
CASE
WHEN user_agent LIKE \'%Android%\' THEN \'mobile\'
WHEN user_agent LIKE \'%Linux%\' THEN \'desktop\'
ELSE \'other_unknown\'
END AS user_agent_type"""

df = sc.parallelize([
    (1, "Android"), (2, "Linux"), (3, "Foo")
]).toDF(["id", "user_agent"])

df.selectExpr("*", expr).show()
## +---+----------+---------------+
## | id|user_agent|user_agent_type|
## +---+----------+---------------+
## |  1|   Android|         mobile|
## |  2|     Linux|        desktop|
## |  3|       Foo|  other_unknown|
## +---+----------+---------------+
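
If you'd rather keep the withColumn style from the question, the same CASE fragment (without the AS alias) can be wrapped with expr from pyspark.sql.functions. A minimal sketch against the demo df above, not part of the original answer:

from pyspark.sql.functions import expr

# expr() parses a SQL expression into a Column; withColumn supplies the name.
case_expr = """
    CASE
        WHEN user_agent LIKE '%Android%' THEN 'mobile'
        WHEN user_agent LIKE '%Linux%' THEN 'desktop'
        ELSE 'other_unknown'
    END"""

df.withColumn("user_agent_type", expr(case_expr)).show()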

Otherwise, you can replace it with a combination of when, like, and otherwise:

from pyspark.sql.functions import col, when
from functools import reduce

c = col("user_agent")
vs = [("Android", "mobile"), ("Linux", "desktop")]
expr = reduce(
    lambda acc, kv: when(c.like(kv[0]), kv[1]).otherwise(acc),
    vs,
    "other_unknown"
).alias("user_agent_type")

df.select("*", expr).show()

## +---+----------+---------------+
## | id|user_agent|user_agent_type|
## +---+----------+---------------+
## |  1|   Android|         mobile|
## |  2|     Linux|        desktop|
## |  3|       Foo|  other_unknown|
## +---+----------+---------------+
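
To apply the full mapping from the question, the same reduce pattern can be driven directly by the USER_AGENT_VALS dictionary. A sketch, not part of the original answer: it wraps each key in % wildcards so the match is a substring test like the SQL version, and applies the result to request_data from the question's code:

from functools import reduce
from pyspark.sql.functions import col, when

c = col("user_agent")
# The chain is built inside-out, so keys later in the iteration are
# checked first; feed it an explicitly ordered list of pairs instead
# of dict.items() if precedence between overlapping patterns matters.
user_agent_expr = reduce(
    lambda acc, kv: when(c.like("%{}%".format(kv[0])), kv[1]).otherwise(acc),
    USER_AGENT_VALS.items(),
    "other_unknown"
).alias("user_agent_type")

request_data.select("*", user_agent_expr)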

You can also add multiple columns in a single select:

exprs = [c.alias(a) for (a, c) in [
    ('etl_requests_usage', lit('DEV')),
    ('etl_datetime_local', current_date())]]

df.select("*", *exprs)
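
Put together, the whole withColumn chain from the question could collapse into a single select. A sketch reusing user_agent_expr from the earlier snippet; the remaining literal columns from the question follow the same alias pattern:

from pyspark.sql.functions import current_date, lit

# All of the constant ETL columns from the question, as (name, column) pairs.
exprs = [c.alias(a) for (a, c) in [
    ('etl_requests_usage', lit('DEV')),
    ('etl_datetime_local', current_date()),
    ('etl_transformation_name', lit('agg_canvas_logs_user_agent_types')),
    ('etl_pdi_version', lit('Apache Spark')),
    ('etl_pdi_build_version', lit('1.6.1')),
    ('etl_pdi_hostname', lit('N/A')),
    ('etl_pdi_ipaddress', lit('N/A')),
    ('etl_checksum_md5', lit('N/A'))]]

# One pass instead of eight chained withColumn calls.
all_fields = joined_data.select("*", user_agent_expr, *exprs)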

Regarding python - How to perform a switch statement with Apache Spark DataFrames (Python), we found a similar question on Stack Overflow: https://stackoverflow.com/questions/36729616/
