
python - How to efficiently join/merge/concatenate large data frames in Pandas?


The goal is to build one large data frame on which I can run operations such as averaging each row across all the columns.

The problem is that as the data frame grows, the time for each iteration grows as well, so I cannot finish the computation.

Notes: my dfs have only two columns, of which col1 is not needed, hence why I join on it. col1 is a string and col2 is a float. The number of rows is 3k. Here is an example:

folder_paths    float
folder/Path 1.12630137
folder/Path2 1.067517426
folder/Path3 1.06443264
folder/Path4 1.049119625
folder/Path5 1.039635769
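
For illustration, the row-wise operation I want to run on the final wide frame is just something like the line below (the merged frame and its column names are only placeholders):

# Hypothetical sketch: average each row across all float columns of the merged frame
row_means = merged.set_index('folder_paths').mean(axis=1)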

Question: any ideas on how to make this code more efficient, and where the bottleneck is? Also, I am not sure whether merge is the right approach at all.

Current ideas: one solution I am considering is to preallocate the memory and specify the column types: col1 is a string and col2 is a float.
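
A minimal sketch of what that preallocation could look like, assuming the set of folder paths is known up front and identical in every generated frame (the run_0 ... run_999 column names are only placeholders):

import numpy as np
import pandas as pd

# Sketch only: preallocate the final wide frame with an explicit float dtype.
n_runs = 1000
known_paths = ["folder/Path", "folder/Path2", "folder/Path3"]  # placeholder list

wide = pd.DataFrame(
    np.full((len(known_paths), n_runs), np.nan, dtype="float64"),
    index=pd.Index(known_paths, name="folder_paths"),
    columns=[f"run_{i}" for i in range(n_runs)],
)

# Each iteration would then fill one column instead of rebuilding the whole frame:
# wide[f"run_{i}"] = generate_new_df(arg1, arg2).set_index("folder_paths")["float"]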

The code I have at the moment:

import pandas as pd

df = pd.DataFrame()  # create an empty data frame

for i in range(1000):
    if i == 0:  # note: `if i is 0` is an identity check; use == for comparison
        df = generate_new_df(arg1, arg2)
    else:
        # outer-join each new frame onto the accumulated one on col1
        df = pd.merge(df, generate_new_df(arg1, arg2), on='col1', how='outer')

I also tried using pd.concat, but the results were very similar: the time increases after each iteration.

df = pd.concat([df, get_os_is_from_folder(pnlList, sampleSize, randomState)], axis=1)

Results with pd.concat:

run 1   time 0.34s
run 2   time 0.34s
run 3   time 0.32s
run 4   time 0.33s
run 5   time 0.42s
run 6   time 0.41s
run 7   time 0.45s
run 8   time 0.46s
run 9   time 0.54s
run 10  time 0.58s
run 11  time 0.73s
run 12  time 0.72s
run 13  time 0.79s
run 14  time 0.87s
run 15  time 0.95s
run 16  time 1.06s
run 17  time 1.19s
run 18  time 1.24s
run 19  time 1.37s
run 20  time 1.57s
run 21  time 1.68s
run 22  time 1.93s
run 23  time 1.86s
run 24  time 1.96s
run 25  time 2.11s
run 26  time 2.32s
run 27  time 2.42s
run 28  time 2.57s

Using a list (dfList) with pd.concat produced similar results. The code and results are below.

dfList = []
for i in range(1000):
    dfList.append(generate_new_df(arg1, arg2))

# build all the frames first, then concatenate once
df = pd.concat(dfList, axis=1)

Results:

run 1 took 0.35 sec.
run 2 took 0.26 sec.
run 3 took 0.3 sec.
run 4 took 0.33 sec.
run 5 took 0.45 sec.
run 6 took 0.49 sec.
run 7 took 0.54 sec.
run 8 took 0.51 sec.
run 9 took 0.51 sec.
run 10 took 1.06 sec.
run 11 took 1.74 sec.
run 12 took 1.47 sec.
run 13 took 1.25 sec.
run 14 took 1.04 sec.
run 15 took 1.26 sec.
run 16 took 1.35 sec.
run 17 took 1.7 sec.
run 18 took 1.73 sec.
run 19 took 6.03 sec.
run 20 took 1.63 sec.
run 21 took 1.93 sec.
run 22 took 1.84 sec.
run 23 took 2.25 sec.
run 24 took 2.65 sec.
run 25 took 6.84 sec.
run 26 took 2.88 sec.
run 27 took 2.58 sec.
run 28 took 2.81 sec.
run 29 took 2.84 sec.
run 30 took 2.99 sec.
run 31 took 3.12 sec.
run 32 took 3.48 sec.
run 33 took 3.35 sec.
run 34 took 3.6 sec.
run 35 took 4.0 sec.
run 36 took 4.41 sec.
run 37 took 4.88 sec.
run 38 took 4.92 sec.
run 39 took 4.78 sec.
run 40 took 5.02 sec.
run 41 took 5.32 sec.
run 42 took 5.31 sec.
run 43 took 5.78 sec.
run 44 took 5.77 sec.
run 45 took 6.15 sec.
run 46 took 6.4 sec.
run 47 took 6.84 sec.
run 48 took 7.08 sec.
run 49 took 7.48 sec.
run 50 took 7.91 sec.

Best Answer

It is still a little unclear what exactly your problem is, but I am going to assume that the main bottleneck is that you are trying to load a huge number of data frames into a list all at once and are running into memory/paging issues. With that in mind, here is an approach that might help, but you will have to test it yourself, since I don't have access to your generate_new_df function or your data.

The approach is to use a variation of the merge_with_concat function from this answer, and to first merge smaller numbers of your data frames together, then merge them all together at once.

For example, if you have 1000 data frames, you could merge 100 at a time to give you 10 big data frames, and then merge those last 10 together in a final step. This should ensure that you never have a list of data frames that is too big at any one point.

You can use the two functions below (I'm assuming your generate_new_df function takes a file name as one of its arguments) and do something like this:

import pandas as pd


def chunk_dfs(file_names, chunk_size):
    """Yield lists of data frames, n at a time, where n == chunk_size."""
    dfs = []
    for f in file_names:
        dfs.append(generate_new_df(f))
        if len(dfs) == chunk_size:
            yield dfs
            dfs = []
    if dfs:  # yield whatever is left over in the final partial chunk
        yield dfs


def merge_with_concat(dfs, col):
    # Align every frame on the merge column, then do a single outer concat.
    dfs = (df.set_index(col, drop=True) for df in dfs)
    merged = pd.concat(dfs, axis=1, join='outer', copy=False)
    return merged.reset_index(drop=False)


col_name = "name_of_column_to_merge_on"
file_names = ['list/of', 'file/names', ...]
chunk_size = 100

# Merge each chunk of 100 frames into one frame, then merge those results together.
merged = merge_with_concat(
    (merge_with_concat(dfs, col_name) for dfs in chunk_dfs(file_names, chunk_size)),
    col_name,
)
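
Since the snippet above cannot run without generate_new_df, here is a purely hypothetical stand-in that produces frames shaped like the example in the question (a path-string column plus one float column), just to smoke-test the chunked merge; the names and values are made up:

import numpy as np
import pandas as pd

def generate_new_df(file_name):
    """Hypothetical stand-in: a 3k-row frame with a path string column (col1)
    and one random float column named after the input label."""
    rng = np.random.default_rng()
    paths = [f"folder/Path{i}" for i in range(3000)]
    return pd.DataFrame({"col1": paths, str(file_name): rng.random(len(paths))})

# e.g. smoke-test with 1000 fake "files":
# file_names = [f"run_{i}" for i in range(1000)]
# col_name = "col1"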

Regarding python - How to efficiently join/merge/concatenate large data frames in Pandas?, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/45217120/
