
python - Convert dataframe: several columns to single by order


I am using Spark 2.1.1 with dataframes. Here is my input dataframe:

+----+---------+---------+-------+
| key|parameter|reference| subkey|
+----+---------+---------+-------+
|key1|       45|       10|subkey1|
|key1|       45|       20|subkey2|
|key2|       70|       40|subkey2|
|key2|       70|       30|subkey1|
+----+---------+---------+-------+

I need to convert the dataframe to the following:

result data (by pandas):
+-----+-----------+
|label|   features|
+-----+-----------+
|   45|[10.0,20.0]|
|   70|[30.0,40.0]|
+-----+-----------+
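
For completeness, here is a minimal sketch that recreates the sample input; the local SparkSession setup is an assumption of mine so the snippets below can be run standalone:

from pyspark.sql import SparkSession

# Assumed setup: a local session for experimenting with the snippets below.
spark = SparkSession.builder.master("local[*]").appName("df_flat").getOrCreate()

# Recreate the sample input dataframe shown above.
df = spark.createDataFrame(
    [("key1", 45, 10, "subkey1"),
     ("key1", 45, 20, "subkey2"),
     ("key2", 70, 40, "subkey2"),
     ("key2", 70, 30, "subkey1")],
    ["key", "parameter", "reference", "subkey"])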

I can do the conversion with the help of pandas:

from pyspark.ml.linalg import Vectors

def convert_to_flat_by_pandas(df):
    pandas_data_frame = df.toPandas()
    all_keys = pandas_data_frame['key'].unique()

    flat_values = []
    for key in all_keys:
        key_rows = pandas_data_frame.loc[pandas_data_frame['key'] == key]
        key_rows = key_rows.sort_values(by=['subkey'])

        # All rows of one key share the same parameter; take the first.
        parameter_value = key_rows['parameter'].iloc[0]

        # The references, now in subkey order, become the feature values.
        key_reference_values = list(key_rows['reference'])

        flat_values.append((parameter_value, key_reference_values))

    loaded_data = [(label, Vectors.dense(features)) for (label, features) in flat_values]
    spark_df = spark.createDataFrame(loaded_data, ["label", "features"])

    return spark_df
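
Applied to the sample df, this yields the label/features frame shown above:

convert_to_flat_by_pandas(df).show()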

It seems I need to use groupBy, but I don't understand how to sort the rows within a group and collapse them into a single row.

Source of the working sample (the pandas version): https://github.com/constructor-igor/TechSugar/blob/master/pythonSamples/pysparkSamples/df_flat.py

With the help of the two answers, I arrived at two possible solutions:

UPD1: Solution #1

from pyspark.sql.functions import first, col
from pyspark.ml.feature import VectorAssembler

def convert_to_flat_by_sparkpy(df):
    # The distinct subkeys become the pivoted column names.
    subkeys = df.select("subkey").dropDuplicates().collect()
    subkeys = [s[0] for s in subkeys]
    print('subkeys: ', subkeys)
    assembler = VectorAssembler().setInputCols(subkeys).setOutputCol("features")
    spark_df = assembler.transform(df.groupBy("key", "parameter").pivot("subkey").agg(first(col("reference"))))
    spark_df = spark_df.withColumnRenamed("parameter", "label")
    spark_df = spark_df.select("label", "features")
    return spark_df

UPD1: Solution #2

from pyspark.sql.functions import first, col, collect_list

def convert_to_flat_by_sparkpy_v2(df):
    # Sort first so that collect_list sees the references in subkey order.
    spark_df = df.orderBy("subkey")
    spark_df = spark_df.groupBy("key").agg(first(col("parameter")).alias("label"),
                                           collect_list("reference").alias("features"))
    spark_df = spark_df.select("label", "features")
    return spark_df
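
A caveat about solution #2: Spark does not guarantee that collect_list preserves the row order of a preceding orderBy once groupBy shuffles the data. A more defensive variant, my own sketch rather than part of the original answers, collects (subkey, reference) pairs and sorts them inside each group; it assumes a Spark version whose sort_array can order arrays of structs:

from pyspark.sql.functions import first, col, collect_list, sort_array, struct

def convert_to_flat_by_sparkpy_v3(df):
    # Collect (subkey, reference) pairs per key, sort them inside each group
    # by subkey (the first struct field), then keep only the references.
    grouped = df.groupBy("key").agg(
        first(col("parameter")).alias("label"),
        sort_array(collect_list(struct("subkey", "reference"))).alias("pairs"))
    return grouped.select("label", col("pairs.reference").alias("features"))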

Best Answer

For the limited sample data you provided, you can pivot the dataframe to wide format with the subkeys as headers, and then use VectorAssembler to collect them into a features column:

from pyspark.sql.functions import first, col
from pyspark.ml.feature import VectorAssembler

assembler = VectorAssembler().setInputCols(["subkey1", "subkey2"]).setOutputCol("features")

assembler.transform(
    df.groupBy("key", "parameter").pivot("subkey").agg(first(col("reference")))
).show()
+----+---------+-------+-------+-----------+
| key|parameter|subkey1|subkey2|   features|
+----+---------+-------+-------+-----------+
|key1|       45|     10|     20|[10.0,20.0]|
|key2|       70|     30|     40|[30.0,40.0]|
+----+---------+-------+-------+-----------+

Update for dynamic subkeys:

Suppose you have a dataframe like this:

df.show()
+----+---------+---------+-------+
| key|parameter|reference| subkey|
+----+---------+---------+-------+
|key1|       45|       10|subkey1|
|key1|       45|       20|subkey2|
|key2|       70|       40|subkey2|
|key2|       70|       30|subkey1|
|key2|       70|       70|subkey3|
+----+---------+---------+-------+

First collect all the distinct subkeys, then create the assembler from them. The na.fill(0) after the pivot replaces the nulls produced for missing subkeys, since VectorAssembler cannot handle null values:

subkeys = df.select("subkey").dropDuplicates().rdd.map(lambda r: r[0]).collect()
assembler = VectorAssembler().setInputCols(subkeys).setOutputCol("features")

assembler.transform(
    df.groupBy("key", "parameter").pivot("subkey").agg(first(col("reference"))).na.fill(0)
).show()
+----+---------+-------+-------+-------+----------------+
| key|parameter|subkey1|subkey2|subkey3|        features|
+----+---------+-------+-------+-------+----------------+
|key1|       45|     10|     20|      0| [20.0,10.0,0.0]|
|key2|       70|     30|     40|     70|[40.0,30.0,70.0]|
+----+---------+-------+-------+-------+----------------+
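
Note that the layout of features follows the order of the collected subkeys list, and dropDuplicates does not guarantee any particular order (in the output above, subkey2 happens to come first, hence [20.0,10.0,0.0] for key1). Sorting the collected list, a small adjustment of my own, pins the vector layout:

# Sort the distinct subkeys so the vector layout is deterministic.
subkeys = sorted(r[0] for r in df.select("subkey").dropDuplicates().collect())
assembler = VectorAssembler().setInputCols(subkeys).setOutputCol("features")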

Regarding python - Convert dataframe: several columns to single by order, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/45489237/
