
python - Convert a Pandas DataFrame to a Spark DataFrame


I previously asked a question about how to Convert scipy sparse matrix to pyspark.sql.dataframe.DataFrame and made some progress after reading the answer provided there, as well as this article. I eventually arrived at the following code for converting a scipy.sparse.csc_matrix to a pandas DataFrame:

import pandas as pd

df = pd.DataFrame(csc_mat.todense()).to_sparse(fill_value=0)
df.columns = header
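As an aside for readers on current pandas (an update, not part of the original question): to_sparse was removed in pandas 1.0, and the closest modern equivalent builds a sparse-dtype DataFrame directly from the scipy matrix:

import pandas as pd

# pandas >= 0.25: create a DataFrame with sparse columns straight from csc_mat
df = pd.DataFrame.sparse.from_spmatrix(csc_mat, columns=header)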

I then tried converting the pandas DataFrame to a Spark DataFrame using the suggested syntax:

spark_df = sqlContext.createDataFrame(df)

However, I got back the following error:

ValueError: cannot create an RDD from type: <type 'list'>

I don't think it has anything to do with sqlContext, as I was able to convert another pandas DataFrame of roughly the same size to a Spark DataFrame without issue. Any thoughts?
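One plausible culprit, though not confirmed in the original post, is the sparse representation produced by to_sparse. A minimal way to test that hypothesis, assuming the pre-1.0 pandas SparseDataFrame API used above:

# Hypothesis check, not from the original post: densify the frame first,
# then let Spark infer the schema from a plain pandas DataFrame.
dense_df = df.to_dense()  # SparseDataFrame.to_dense(), pandas < 1.0
spark_df = sqlContext.createDataFrame(dense_df)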

Best Answer

I am not sure whether this question is still relevant for current versions of pySpark, but here is the solution I worked out a few weeks after posting it. The code is rather ugly and probably inefficient, but I am posting it here due to the continued interest in this question:

from functools import reduce

from pyspark import SparkConf, SparkContext
from pyspark.sql import HiveContext
from py4j.protocol import Py4JJavaError

myConf = SparkConf(loadDefaults=True)
sc = SparkContext(conf=myConf)
hc = HiveContext(sc)


def chunks(lst, k):
    """Yield k chunks of close to equal size."""
    n = max(1, len(lst) // k)  # floor division keeps the chunk size an int
    for i in range(0, len(lst), n):
        yield lst[i:i + n]


def reconstruct_rdd(lst, num_parts):
    """Parallelize lst one chunk at a time, unioning the pieces into a single RDD."""
    prime_rdd = None
    for part, partition in enumerate(chunks(lst, num_parts)):  # partition is a list of lists
        print("Partition %d started..." % part)
        if prime_rdd is None:
            prime_rdd = sc.parallelize(partition)
        else:
            prime_rdd = prime_rdd.union(sc.parallelize(partition))
        print("Partition %d complete!" % part)
    return prime_rdd


def build_col_name_list(len_cols):
    """Build the default names (_1, _2, ...) Spark assigns to columns created from an RDD."""
    return ["_" + str(i) for i in range(1, len_cols + 1)]


def set_spark_df_header(header, sdf):
    """Rename the auto-generated columns to the names given in header."""
    oldColumns = build_col_name_list(len(sdf.columns))
    newColumns = header
    sdf = reduce(lambda sdf, idx: sdf.withColumnRenamed(oldColumns[idx], newColumns[idx]),
                 range(len(oldColumns)), sdf)
    return sdf


def convert_pdf_matrix_to_sdf(pdf, sdf_header, num_of_parts):
    try:
        sdf = hc.createDataFrame(pdf)
    except ValueError:
        lst = pdf.values.tolist()  # need a list of lists to parallelize
        try:
            rdd = sc.parallelize(lst)
        except Py4JJavaError:
            rdd = reconstruct_rdd(lst, num_of_parts)
        sdf = hc.createDataFrame(rdd)
        sdf = set_spark_df_header(sdf_header, sdf)
    return sdf
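For reference, a hypothetical call, assuming the df and header built in the question and an arbitrary split into 10 chunks:

# Hypothetical usage: df and header come from the question above; 10 is an arbitrary chunk count.
spark_df = convert_pdf_matrix_to_sdf(df, header, 10)
spark_df.show(5)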

Regarding python - Convert a Pandas DataFrame to a Spark DataFrame, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/40411871/
