
python - Efficient way of using pandas group by on a million records


I have a dataframe that can be generated with the code below:

import numpy as np
import pandas as pd

df2 = pd.DataFrame({'subject_ID': [1,1,1,1,1,1,2,2,2,2],
                    'colum': ['L1CreaDate','L1Crea','L2CreaDate','L2Crea','L3CreaDate','L3Crea',
                              'L1CreaDate','L1Crea','L2CreaDate','L2Crea'],
                    'dates': ['2016-10-30 00:00:00',2.3,'2016-10-30 00:00:00',2.5,np.nan,np.nan,
                              '2016-10-30 00:00:00',12.3,'2016-10-30 00:00:00',12.3]})

I am trying to perform the operations below on this dataframe. The code works fine, but the problem is the group by statement: on the sample dataframe it is fast, but on the real data with more than 1 million records it takes a long time to run.

df2['col2'] = df2['colum'].str.split("Date").str[0]   # 'L1CreaDate' -> 'L1Crea', so both rows of a pair share one key
df2['col3'] = df2['col2'].str.extract(r'(\d+)', expand=True).astype(int)  # test number, used for sorting
df2 = df2.sort_values(by=['subject_ID','col3'])
df2['count'] = df2.groupby(['subject_ID','col2'])['dates'].transform(pd.Series.count)  # non-null 'dates' per group

I do the groupby to get the count column shown in the output below, so that I can reject the records where count is 0. There is a logic behind dropping the NAs; it is not simply dropping all of them. If you want to understand it, see this post: retain few NA's and drop rest of the NA's logic.
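For what it is worth, the key property of groupby(...).transform('count') is that it returns one value per row (the count of non-null dates in that row's group, broadcast back), so the result can be used directly as a filter mask. A minimal sketch on a toy frame, with made-up values for illustration:

import numpy as np
import pandas as pd

# One subject with a complete L1Crea pair and an all-NaN L3Crea pair.
toy = pd.DataFrame({'subject_ID': [1, 1, 1, 1],
                    'col2': ['L1Crea', 'L1Crea', 'L3Crea', 'L3Crea'],
                    'dates': ['2016-10-30 00:00:00', 2.3, np.nan, np.nan]})

# transform('count') counts non-null 'dates' per (subject_ID, col2) group
# and broadcasts that count back to every row of the group.
mask = toy.groupby(['subject_ID', 'col2'])['dates'].transform('count')
print(mask.tolist())    # [2, 2, 0, 0]
print(toy[mask.ne(0)])  # the all-NaN L3Crea rows are dropped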

In the real data, a single person can have more than 10000 rows, so a single dataframe has more than 1 million rows.

Is there any other, better and more efficient way to do the groupby or get the count column?

[image: desired output showing the count column]

Best Answer

The idea is to use a list comprehension with split to improve performance, then, instead of assigning the output to a new count column, filter with the transformed counts directly, and do the sorting by the extracted integers last:

df2['col2'] = [x.split("Date")[0] for x in df2['colum']]
df2 = df2[df2.groupby(['subject_ID','col2'])['dates'].transform('count').ne(0)].copy()

df2['col3'] = df2['col2'].str.extract(r'(\d+)', expand=True).astype(int)
df2 = df2.sort_values(by=['subject_ID','col3'])
print (df2)
   subject_ID       colum                dates    col2  col3
0           1  L1CreaDate  2016-10-30 00:00:00  L1Crea     1
1           1      L1Crea                  2.3  L1Crea     1
2           1  L2CreaDate  2016-10-30 00:00:00  L2Crea     2
3           1      L2Crea                  2.5  L2Crea     2
6           2  L1CreaDate  2016-10-30 00:00:00  L1Crea     1
7           2      L1Crea                 12.3  L1Crea     1
8           2  L2CreaDate  2016-10-30 00:00:00  L2Crea     2
9           2      L2Crea                 12.3  L2Crea     2
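
Note that rows 4 and 5 (the all-NaN L3Crea pair for subject 1) are gone: their group count is 0, so the mask removes them before col3 is ever extracted, which also shrinks the data the later extract and sort have to touch. The .copy() after the mask matters as well; without it, assigning col3 to the filtered slice can trigger pandas' SettingWithCopyWarning.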

If you get the error:

AttributeError: 'float' object has no attribute 'split'

it means there are probably missing values, so the list comprehension should be changed:

df2['col2'] = [x.split("Date")[0] if x == x else np.nan for x in df2['colum']]  # x == x is False only for NaN
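
The x == x test works because NaN is the only common value that is not equal to itself. If that reads as too cryptic, here is an equivalent spelling with pd.notna; this is a style alternative with the same behavior, not part of the original answer:

import numpy as np
import pandas as pd

vals = pd.Series(['L1CreaDate', np.nan, 'L2Crea'])
# pd.notna(x) is True for strings and False for NaN, mirroring x == x.
out = [x.split("Date")[0] if pd.notna(x) else np.nan for x in vals]
print(out)  # ['L1Crea', nan, 'L2Crea']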

Checking performance:

def new(df2):
    df2['col2'] = [x.split("Date")[0] for x in df2['colum']]
    df2 = df2[df2.groupby(['subject_ID','col2'])['dates'].transform('count').ne(0)].copy()
    df2['col3'] = df2['col2'].str.extract(r'(\d+)', expand=True).astype(int)
    return df2.sort_values(by=['subject_ID','col3'])


def orig(df2):
    df2['col2'] = df2['colum'].str.split("Date").str[0]
    df2['col3'] = df2['col2'].str.extract(r'(\d+)', expand=True).astype(int)
    df2 = df2.sort_values(by=['subject_ID','col3'])
    df2['count'] = df2.groupby(['subject_ID','col2'])['dates'].transform(pd.Series.count)
    return df2[df2['count'].ne(0)]

In [195]: %timeit (orig(df2))
10.8 ms ± 728 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)

In [196]: %timeit (new(df2))
6.11 ms ± 144 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
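
These timings are on the 10-row sample. To check that the gap holds at the scale in the question, one way is to tile the sample up to about a million rows and rerun both functions; the scale factor and subject_ID re-numbering below are assumptions for illustration, not part of the original benchmark:

# Tile the 10-row sample 100,000x (~1M rows) and give each copy its own
# pair of subject_IDs so group sizes stay realistic.
n = 100_000
big = pd.concat([df2] * n, ignore_index=True)
big['subject_ID'] = big['subject_ID'] + 2 * (big.index // len(df2))

%timeit new(big.copy())
%timeit orig(big.copy())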

Regarding "python - Efficient way of using pandas group by on a million records", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/57305101/
