
python - Fast way to fill a very large DataFrame with values

Reprinted · Author: 行者123 · Updated: 2023-11-28 20:36:06

I have a very large DataFrame with 100 years of dates as column headers (~36,500 columns) and 100 years of dates as the index (~36,500 rows). I have a function that computes a value for each element of the DataFrame, and it needs to run 36,500² times.

OK, the problem is not that the function is slow, it's assigning the values to the DataFrame. Even when I assign a constant this way, every 6 assignments take about 1 second. Obviously I'm doing something dumb, as you can tell:

for i, row in df_mBase.iterrows():
    for idx, val in enumerate(row):
        df_mBase.ix[i][idx] = 1
    print(i)

Normally in C/Java I would simply write a 36500×36500 double loop and access preallocated memory directly by index, which runs in constant time with virtually no overhead. But that doesn't seem to be an option in Python?

What is the fastest way to store this data in a DataFrame? Pythonic or not, I'm only after speed — I don't care about elegance.

Best Answer

There are a few reasons why this may be slow:

.ix

.ix is a magic-type indexer that can do both label-based and position-based indexing, but it is deprecated in favor of the stricter .loc (label-based) and .iloc (position-based). I assume .ix does a lot of magic behind the scenes to figure out whether label-based or position-based indexing is needed.
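For illustration (a small made-up frame, not from the question), the two stricter indexers reach the same element by label or by position:

```python
import pandas as pd

# small illustrative frame (hypothetical data)
df = pd.DataFrame({'a': [10, 20], 'b': [30, 40]}, index=['x', 'y'])

label_val = df.loc['y', 'b']   # label-based: row label 'y', column label 'b'
pos_val = df.iloc[1, 1]        # position-based: second row, second column

print(label_val, pos_val)      # both refer to the same element
```

With .ix removed from modern pandas, picking one of these up front also spares the indexer the label-or-position guesswork.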

.iterrows

Returns a (new?) Series for each row. Column-based iteration may be faster, since .iteritems iterates over columns.
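A quick sketch of the cost difference (example data is made up): each .iterrows() step materializes a fresh row Series, upcasting mixed column dtypes, while column iteration hands back the column Series that already exist:

```python
import pandas as pd

df = pd.DataFrame({'i': [1, 2], 'f': [0.5, 1.5]})

# .iterrows() builds a new Series per row; the int column is upcast
# to float so the row fits in one homogeneous Series
_, first_row = next(df.iterrows())
row_dtype = str(first_row.dtype)
print(row_dtype)  # float64

# column iteration yields the existing column Series unchanged
# (.items() is the current name; .iteritems() was removed in pandas 2.0)
col_dtypes = [str(col.dtype) for _, col in df.items()]
print(col_dtypes)  # ['int64', 'float64']
```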

[][]

df_mBase.ix[i][idx] returns a Series, then takes element idx from it, and assigns 1 to that.

df_mBase.loc[i, idx] = 1

should improve on this.
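A sketch of the difference on a small frame (the frame and values are made up): the chained form goes through an intermediate Series, while a single .loc call, or .at for scalars, writes straight into the frame:

```python
import pandas as pd

df = pd.DataFrame(0.0, index=range(3), columns=['a', 'b'])

# Chained indexing: df.loc[0] first builds an intermediate row Series,
# then the element is looked up on that -- slower, and the write may
# land on a copy (pandas warns with SettingWithCopyWarning).
# df.loc[0]['a'] = 1.0

# Single-call indexing: one lookup, writes directly into the frame
df.loc[0, 'a'] = 1.0

# For many scalar writes, .at is the fastest label-based accessor
df.at[1, 'b'] = 2.0

print(df)
```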

Benchmarks

import pandas as pd

import itertools
import timeit


def generate_dummy_data(years=1):
    period = pd.Timedelta(365 * years, unit='D')

    start = pd.Timestamp('19000101')
    offset = pd.Timedelta(10, unit='h')

    # pd.date_range replaces the old DatetimeIndex(start=..., end=...)
    # constructor, which was removed in pandas 1.0
    dates1 = pd.date_range(start=start, end=start + period, freq='D')
    dates2 = pd.date_range(start=start + offset, end=start + offset + period, freq='D')

    return pd.DataFrame(index=dates1, columns=dates2, dtype=float)


def assign_original(df_orig):
    df_new = df_orig.copy(deep=True)
    for i, row in df_new.iterrows():
        for idx, val in enumerate(row):
            df_new.ix[i][idx] = 1
    return df_new


def assign_other(df_orig):
    df_new = df_orig.copy(deep=True)
    for (i, idx_i), (j, idx_j) in itertools.product(enumerate(df_new.index), enumerate(df_new.columns)):
        df_new[idx_j][idx_i] = 1
    return df_new


def assign_loc(df_orig):
    df_new = df_orig.copy(deep=True)
    for i, row in df_new.iterrows():
        for idx, val in enumerate(row):
            df_new.loc[i][idx] = 1
    return df_new


def assign_loc_product(df_orig):
    df_new = df_orig.copy(deep=True)
    for i, j in itertools.product(df_new.index, df_new.columns):
        df_new.loc[i, j] = 1
    return df_new


def assign_iloc_product(df_orig):
    df_new = df_orig.copy(deep=True)
    for (i, idx_i), (j, idx_j) in itertools.product(enumerate(df_new.index), enumerate(df_new.columns)):
        df_new.iloc[i, j] = 1
    return df_new


def assign_iloc_product_range(df_orig):
    df_new = df_orig.copy(deep=True)
    for i, j in itertools.product(range(len(df_new.index)), range(len(df_new.columns))):
        df_new.iloc[i, j] = 1
    return df_new


def assign_index(df_orig):
    df_new = df_orig.copy(deep=True)
    for (i, idx_i), (j, idx_j) in itertools.product(enumerate(df_new.index), enumerate(df_new.columns)):
        df_new[idx_j][idx_i] = 1
    return df_new


def assign_column(df_orig):
    df_new = df_orig.copy(deep=True)
    for c, column in df_new.iteritems():
        for idx, val in enumerate(column):
            df_new[c][idx] = 1
    return df_new


def assign_column2(df_orig):
    df_new = df_orig.copy(deep=True)
    for c, column in df_new.iteritems():
        for idx, val in enumerate(column):
            column[idx] = 1
    return df_new


def assign_itertuples(df_orig):
    df_new = df_orig.copy(deep=True)
    for i, row in enumerate(df_new.itertuples()):
        for idx, val in enumerate(row[1:]):
            df_new.iloc[i, idx] = 1
    return df_new


def assign_applymap(df_orig):
    df_new = df_orig.copy(deep=True)
    df_new = df_new.applymap(lambda x: 1)
    return df_new


def assign_vectorized(df_orig):
    df_new = df_orig.copy(deep=True)
    for i in df_new:
        df_new[i] = 1
    return df_new


methods = [
    ('assign_original', assign_original),
    ('assign_loc', assign_loc),
    ('assign_loc_product', assign_loc_product),
    ('assign_iloc_product', assign_iloc_product),
    ('assign_iloc_product_range', assign_iloc_product_range),
    ('assign_index', assign_index),
    ('assign_column', assign_column),
    ('assign_column2', assign_column2),
    ('assign_itertuples', assign_itertuples),
    ('assign_vectorized', assign_vectorized),
    ('assign_applymap', assign_applymap),
]


def get_timings(period=1, methods=()):
    print('=' * 10)
    print(f'generating timings for a period of {period} years')
    df_orig = generate_dummy_data(period)
    df_orig.info(verbose=False)
    repeats = 1
    for method_name, method in methods:
        result = pd.DataFrame()

        def my_method():
            """
            This looks a bit icky, but is the best way I found to make sure the values are really changed,
            and not just on a copy of a DataFrame
            """
            nonlocal result
            result = method(df_orig)

        t = timeit.Timer(my_method).timeit(number=repeats)

        assert result.iloc[3, 3] == 1

        print(f'{method_name} took {t / repeats} seconds')
        yield (method_name, {'time': t, 'memory': result.memory_usage(deep=True).sum() / 1024})


periods = [0.03, 0.1, 0.3, 1, 3]


results = {period: dict(get_timings(period, methods)) for period in periods}

print(results)

timings_dict = {period: {k: v['time'] for k, v in result.items()} for period, result in results.items()}

df = pd.DataFrame.from_dict(timings_dict)
df.transpose().plot(logy=True).figure.savefig('test.png')
                               0.03       0.1       0.3        1.0         3.0
assign_applymap            0.001989  0.009862  0.018018   0.105569    0.549511
assign_vectorized          0.002974  0.008428  0.035994   0.162565    3.810138
assign_index               0.013717  0.137134  1.288852  14.190128  111.102662
assign_column2             0.026260  0.186588  1.664345  19.204453  143.103077
assign_column              0.016811  0.212158  1.838733  21.053627  153.827845
assign_itertuples          0.025130  0.249886  2.125968  24.639593  185.975111
assign_iloc_product_range  0.026982  0.247069  2.199019  23.902244  186.548500
assign_iloc_product        0.021225  0.233454  2.437183  25.143673  218.849143
assign_loc_product         0.018743  0.290104  2.515379  32.778794  258.244436
assign_loc                 0.029050  0.349551  2.822797  32.087433  294.052933
assign_original            0.034315  0.337207  2.714154  30.361072  332.327008

Conclusion

[timing plot]

If you can vectorize, do so. Depending on the computation, you may prefer another method. If you only need the values, applymap seems fastest. If you also need the index and/or columns, iterate over columns.

If you cannot vectorize, df[column][index] = x works fastest, with iterating over columns via df.iteritems() a close second.
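To make the vectorized option concrete (a sketch on a small stand-in frame, with a made-up i + j value rule), a constant fill or a position-dependent fill can each be done in one shot instead of a double loop:

```python
import numpy as np
import pandas as pd

# scaled-down stand-in for the 36500 x 36500 frame in the question
df = pd.DataFrame(np.nan, index=range(5), columns=list('abc'))

# constant fill: one vectorized assignment instead of a double loop
df.loc[:, :] = 1.0

# value depending on row/column position: build the full array with
# NumPy broadcasting, then wrap it once (the i + j rule is illustrative)
rows = np.arange(len(df.index))[:, None]
cols = np.arange(len(df.columns))[None, :]
df2 = pd.DataFrame(rows + cols, index=df.index, columns=df.columns)

print(df2)
```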

About "python - Fast way to fill a very large DataFrame with values": we found a similar question on Stack Overflow: https://stackoverflow.com/questions/45795274/
