
python - pandas memory consumption when grouping hdf files


I wrote the script below, but I am running into a memory-consumption problem: pandas allocates more than 30 GB of memory, while the data files together only add up to about 18 GB.

import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import matplotlib
import time


mean_wo = pd.DataFrame()
mean_w = pd.DataFrame()
std_w = pd.DataFrame()
std_wo = pd.DataFrame()

start_time=time.time() #taking current time as starting time

data_files=['2012.h5','2013.h5','2014.h5','2015.h5', '2016.h5', '2008_2011.h5']



for data_file in data_files:
    print data_file
    df = pd.read_hdf(data_file)
    grouped = df.groupby('day')
    mean_wo_tmp = grouped['Significance_without_muons'].agg([np.mean])
    mean_w_tmp = grouped['Significance_with_muons'].agg([np.mean])
    std_wo_tmp = grouped['Significance_without_muons'].agg([np.std])
    std_w_tmp = grouped['Significance_with_muons'].agg([np.std])
    mean_wo = pd.concat([mean_wo, mean_wo_tmp])
    mean_w = pd.concat([mean_w, mean_w_tmp])
    std_w = pd.concat([std_w, std_w_tmp])
    std_wo = pd.concat([std_wo, std_wo_tmp])
    print mean_wo.info()
    print mean_w.info()
    del df, grouped, mean_wo_tmp, mean_w_tmp, std_w_tmp, std_wo_tmp

std_wo=std_wo.reset_index()
std_w=std_w.reset_index()
mean_wo=mean_wo.reset_index()
mean_w=mean_w.reset_index()

#setting the field day as date
std_wo['day']= pd.to_datetime(std_wo['day'], format='%Y-%m-%d')
std_w['day']= pd.to_datetime(std_w['day'], format='%Y-%m-%d')
mean_w['day']= pd.to_datetime(mean_w['day'], format='%Y-%m-%d')
mean_wo['day']= pd.to_datetime(mean_wo['day'], format='%Y-%m-%d')

So, does anyone know how to reduce the memory consumption?

Cheers,

Best Answer

I would do something like this.
Solution

data_files=['2012.h5', '2013.h5', '2014.h5', '2015.h5', '2016.h5', '2008_2011.h5'] 
cols = ['Significance_without_muons', 'Significance_with_muons']

def agg(data_file):
    return pd.read_hdf(data_file).groupby('day')[cols].agg(['mean', 'std'])

big_df = pd.concat([agg(fn) for fn in data_files], axis=1, keys=data_files)
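# Note (added comment): pd.concat with keys=data_files builds a 3-level
# column MultiIndex of (file, original column, statistic). The xs calls
# below fix levels 1 and 2, leaving one column per input file.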

mean_wo_tmp = big_df.xs(('Significance_without_muons', 'mean'), axis=1, level=[1, 2])
mean_w_tmp = big_df.xs(('Significance_with_muons', 'mean'), axis=1, level=[1, 2])
std_wo_tmp = big_df.xs(('Significance_without_muons', 'std'), axis=1, level=[1, 2])
std_w_tmp = big_df.xs(('Significance_with_muons', 'std'), axis=1, level=[1, 2])

del big_df
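
If memory is still tight, one further option (my addition, not part of the original answer) is to avoid loading the unused columns at all. This only helps if the HDF files were written in table format, since fixed-format stores always load every column. A minimal sketch, reusing cols and data_files from above with a hypothetical helper agg_columns_only:

import pandas as pd

def agg_columns_only(data_file):
    # columns= is only honored for table-format HDF stores
    df = pd.read_hdf(data_file, columns=cols + ['day'])
    return df.groupby('day')[cols].agg(['mean', 'std'])

big_df = pd.concat([agg_columns_only(fn) for fn in data_files], axis=1, keys=data_files)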

Setup

data_files=['2012.h5', '2013.h5', '2014.h5', '2015.h5', '2016.h5', '2008_2011.h5'] 
cols = ['Significance_without_muons', 'Significance_with_muons']

np.random.seed([3,1415])
data_df = pd.DataFrame(np.random.rand(1000, 2), columns=cols)
data_df['day'] = np.random.choice(list('ABCDEFG'), 1000)

for fn in data_files:
    data_df.to_hdf(fn, 'day', append=False)
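
As a side note on the setup (an assumption of mine, not something the answer relies on): to_hdf writes a fixed-format store by default, and writing with format='table' instead enables column subsetting and where= queries on read, which the column-only sketch above depends on:

# variant of the setup, writing table-format stores instead of fixed ones
for fn in data_files:
    data_df.to_hdf(fn, 'day', format='table', append=False)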

Run the solution above.
Then

mean_wo_tmp

(The original answer shows the resulting mean_wo_tmp DataFrame as an image here.)
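
To see where the 30 GB actually comes from, one could also measure the in-memory footprint of each file separately; this is my own diagnostic sketch using pandas' DataFrame.memory_usage, not part of the original answer:

# rough per-file footprint in GB; deep=True also counts object (string) columns
mem_gb = {fn: pd.read_hdf(fn).memory_usage(deep=True).sum() / 1e9 for fn in data_files}
print(mem_gb)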

Regarding python - pandas memory consumption when grouping hdf files, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/39611197/
