
python - pyspark: arrays_zip equivalent in Spark 2.3


How can I write an equivalent of the arrays_zip function in Spark 2.3?

Source code from Spark 2.4:

def arrays_zip(*cols):
    """
    Collection function: Returns a merged array of structs in which the N-th struct contains all
    N-th values of input arrays.

    :param cols: columns of arrays to be merged.

    >>> from pyspark.sql.functions import arrays_zip
    >>> df = spark.createDataFrame([(([1, 2, 3], [2, 3, 4]))], ['vals1', 'vals2'])
    >>> df.select(arrays_zip(df.vals1, df.vals2).alias('zipped')).collect()
    [Row(zipped=[Row(vals1=1, vals2=2), Row(vals1=2, vals2=3), Row(vals1=3, vals2=4)])]
    """
    sc = SparkContext._active_spark_context
    return Column(sc._jvm.functions.arrays_zip(_to_seq(sc, cols, _to_java_column)))

How can I achieve something similar in PySpark?

Best Answer

You can achieve this by creating a user-defined function (UDF):

import pyspark.sql.functions as f
import pyspark.sql.types as t

arrays_zip_ = f.udf(lambda x, y: list(zip(x, y)),
                    t.ArrayType(t.StructType([
                        # Choose the field data types to match your columns
                        t.StructField("first", t.IntegerType()),
                        t.StructField("second", t.StringType())
                    ])))

df = spark.createDataFrame([([1, 2, 3], ['2', '3', '4'])], ['first', 'second'])

The result in Spark <= 2.3:
df.select(arrays_zip_('first', 'second').alias('zipped')).show(2, False)

+------------------------+
|zipped |
+------------------------+
|[[1, 2], [2, 3], [3, 4]]|
+------------------------+

The result in Spark 2.4:
df.select(f.arrays_zip('first', 'second').alias('zipped')).show(2, False)

+------------------------+
|zipped |
+------------------------+
|[[1, 2], [2, 3], [3, 4]]|
+------------------------+
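One caveat: Python's built-in zip() truncates to the shortest input array, whereas Spark 2.4's arrays_zip pads the shorter arrays with nulls. Below is a minimal sketch that reproduces the padding with itertools.zip_longest and also accepts any number of array columns, like the native *cols signature. The make_arrays_zip helper and the field names are illustrative, not part of the original answer:

import itertools

import pyspark.sql.functions as f
import pyspark.sql.types as t

def make_arrays_zip(fields):
    # Build a UDF that zips any number of array columns into an array of structs,
    # padding shorter arrays with None the way Spark 2.4's arrays_zip does.
    schema = t.ArrayType(t.StructType(
        [t.StructField(name, dtype) for name, dtype in fields]))
    # zip_longest fills missing positions with None (null in the resulting struct)
    return f.udf(lambda *arrays: list(itertools.zip_longest(*arrays)), schema)

arrays_zip_padded = make_arrays_zip([("first", t.IntegerType()),
                                     ("second", t.StringType())])
df.select(arrays_zip_padded('first', 'second').alias('zipped')).show(truncate=False)

Because the struct field names are declared in the StructType, the zipped elements can be addressed afterwards as zipped.first and zipped.second, similar to how the native function names struct fields after its input columns.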

Regarding python - pyspark: arrays_zip equivalent in Spark 2.3, a similar question can be found on Stack Overflow: https://stackoverflow.com/questions/61503929/
