apache-spark - How to merge lists of lists into a single list in pyspark


In a Spark dataframe, I have a column whose rows contain lists of lists of strings. I want to merge each of these into a single list.

INPUT DATAFRAME:

+-------+---------------------+
| name  |friends              |
+-------+---------------------+
| Jim   |[["C","A"],["B","C"]]|
| Bill  |[["E","A"],["F","L"]]|
| Kim   |[["C","K"],["L","G"]]|
+-------+---------------------+

OUTPUT DATAFRAME:

+-------+--------------------+
| name  |friends             |
+-------+--------------------+
| Jim   |["C","A","B"]       |
| Bill  |["E","A","F","L"]   |
| Kim   |["C","K","L","G"]   |
+-------+--------------------+

I want to merge the lists of lists into a single list and remove the duplicates.
Thanks in advance.

Best Answer

I think you can rely on a combination of explode to deconstruct the lists and collect_set to rebuild them:

import pandas as pd
from pyspark import SparkContext
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

# Get (or create) a SparkContext and wrap it in a SparkSession.
sc = SparkContext.getOrCreate()
spark = SparkSession(sc)

# Build a small example DataFrame where `friends` is an array of arrays of strings.
columns = ['name', 'friends']
data = [("Jim", [["C","A"], ["B","C"]]), ("Bill", [["E","A"], ["F","L"]]), ("Kim", [["C","K"], ["L","G"]])]
pd_data = pd.DataFrame.from_records(data=data, columns=columns)
spark_data = spark.createDataFrame(pd_data)
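
As a quick sanity check (a minimal sketch; the printed tree may differ slightly across Spark versions), printSchema confirms that friends comes through as an array of arrays of strings, which is what the two explode passes below rely on:

spark_data.printSchema()

root
 |-- name: string (nullable = true)
 |-- friends: array (nullable = true)
 |    |-- element: array (containsNull = true)
 |    |    |-- element: string (containsNull = true)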

# First explode: one output row per inner list.
first_explode = spark_data.withColumn("first_explode", F.explode(F.col("friends")))
first_explode.show()

+----+----------------+-------------+
|name| friends|first_explode|
+----+----------------+-------------+
| Jim|[[C, A], [B, C]]| [C, A]|
| Jim|[[C, A], [B, C]]| [B, C]|
|Bill|[[E, A], [F, L]]| [E, A]|
|Bill|[[E, A], [F, L]]| [F, L]|
| Kim|[[C, K], [L, G]]| [C, K]|
| Kim|[[C, K], [L, G]]| [L, G]|
+----+----------------+-------------+

That deconstructs the first level. Now the second:
# Second explode: one output row per individual name.
second_explode = first_explode.withColumn("second_explode", F.explode(F.col("first_explode")))
second_explode.show()

+----+----------------+-------------+--------------+
|name| friends|first_explode|second_explode|
+----+----------------+-------------+--------------+
| Jim|[[C, A], [B, C]]| [C, A]| C|
| Jim|[[C, A], [B, C]]| [C, A]| A|
| Jim|[[C, A], [B, C]]| [B, C]| B|
| Jim|[[C, A], [B, C]]| [B, C]| C|
|Bill|[[E, A], [F, L]]| [E, A]| E|
|Bill|[[E, A], [F, L]]| [E, A]| A|
|Bill|[[E, A], [F, L]]| [F, L]| F|
|Bill|[[E, A], [F, L]]| [F, L]| L|
| Kim|[[C, K], [L, G]]| [C, K]| C|
| Kim|[[C, K], [L, G]]| [C, K]| K|
| Kim|[[C, K], [L, G]]| [L, G]| L|
| Kim|[[C, K], [L, G]]| [L, G]| G|
+----+----------------+-------------+--------------+

Rebuild the lists, discarding duplicates:
# collect_set gathers the values back into one array per name, dropping duplicates.
grouped = second_explode.groupBy("name").agg(F.collect_set(F.col("second_explode")).alias("friends"))
grouped.show()

+----+------------+
|name| friends|
+----+------------+
| Jim| [C, B, A]|
|Bill|[F, E, A, L]|
| Kim|[K, C, G, L]|
+----+------------+
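
Note that collect_set makes no ordering guarantee, which is why the rebuilt lists above come back in a different order than the input. On Spark 2.4+ there is an alternative worth sketching (assuming the same spark_data as above): flatten collapses the nested array and array_distinct drops the duplicates in place, with no explode/groupBy round trip, and it preserves first-occurrence order:

# flatten turns array<array<string>> into array<string>;
# array_distinct then removes duplicates, keeping first occurrences.
flattened = spark_data.withColumn("friends", F.array_distinct(F.flatten(F.col("friends"))))
flattened.show()

+----+------------+
|name|     friends|
+----+------------+
| Jim|   [C, A, B]|
|Bill|[E, A, F, L]|
| Kim|[C, K, L, G]|
+----+------------+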

For apache-spark - How to merge lists of lists into a single list in pyspark, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/52104524/
