python - Pandas resampling for a for-loop performance issue


I have the following dataframe:

import pandas as pd
import numpy as np

df = pd.DataFrame({'id': [0]*5 + [1]*5,
                   'time': ['2015-01-01', '2015-01-03', '2015-01-04', '2015-01-08', '2015-01-10',
                            '2015-02-02', '2015-02-04', '2015-02-06', '2015-02-11', '2015-02-13'],
                   'hit': [0, 3, 8, 2, 5, 6, 12, 0, 7, 3]})
df.time = df.time.astype('datetime64[ns]')
df = df[['id', 'time', 'hit']]
df

Output:

   id       time  hit
0   0 2015-01-01    0
1   0 2015-01-03    3
2   0 2015-01-04    8
3   0 2015-01-08    2
4   0 2015-01-10    5
5   1 2015-02-02    6
6   1 2015-02-04   12
7   1 2015-02-06    0
8   1 2015-02-11    7
9   1 2015-02-13    3

And the function that performs the resampling:

def subset(df):
    '''select the first 14 rows of each group'''
    return df.iloc[:14]

def dailyCount(df, member_id, values, time):
    '''Transform a time-series df into daily counts per group'''
    # container for the resulting dataframe
    ts = pd.DataFrame()
    for i in df[member_id].unique():
        # prepare a series and upsample it within the same id
        chunk = pd.Series(df.loc[df[member_id] == i, values])
        #print(chunk)
        chunk = chunk.resample('1D').asfreq()

        # create a dataframe and construct some additional columns
        chunk = pd.DataFrame(chunk, columns=[values]).reset_index().fillna(0)
        chunk[values] = chunk[values].astype(int)
        chunk[member_id] = i
        chunk['daily_count'] = chunk.groupby(member_id).cumcount() + 1

        # accumulate the id-wise dataframes one by one vertically
        ts = pd.concat([ts, chunk], axis=0, ignore_index=True)

    ts = ts.set_index([member_id, time])
    ts = (ts.reset_index(level=0)
            .groupby(member_id)
            .apply(subset)
            .drop(member_id, axis=1)
            .reset_index()
            .drop(time, axis=1)
            .set_index([member_id, 'daily_count'])
            .unstack()
            .fillna(0))
    #ts = ts.reset_index().drop(columns=time).set_index([member_id, 'daily_count']).unstack().fillna(0)
    ts.columns = pd.Index(['dailyCount_' + e[0] + '_' + str(e[1]) for e in ts.columns.tolist()])
    ts = ts.astype(np.int32)  #.reset_index()
    return ts

Input:

df.rename(columns={'id': 'member_id'}, inplace=True)
df = df.set_index('time')
dailyCount(df, 'member_id', 'hit', 'time')

Output:

           dailyCount_hit_1  dailyCount_hit_2  dailyCount_hit_3  dailyCount_hit_4  dailyCount_hit_5  dailyCount_hit_6
member_id
0                         0                 0                 3                 8                 0                 0
1                         6                 0                12                 0                 0                 0

           dailyCount_hit_7  dailyCount_hit_8  dailyCount_hit_9  dailyCount_hit_10  dailyCount_hit_11  dailyCount_hit_12
member_id
0                         0                 2                 0                  5                  0                  0
1                         0                 0                 0                  7                  0                  3

When I run this function on a DataFrame of roughly 180,000 rows, it takes 6 minutes on my 2.3 GHz i5 MacBook Pro. I know my machine is slow, but I need to reuse this function on a variety of datasets. Is there a way to perform the same transformation without the for loop?

Best Answer

Here is another potential solution, using pandas.date_range, Index.reindex, and DataFrame.pivot_table:

df.rename(columns={'id': 'member_id'}, inplace=True)
df = df.set_index('time')
members = []

for _, g in df.groupby('member_id'):
    # build a complete daily index for this member and upsample by reindexing
    dt_idx = pd.date_range(start=g.index.min(), end=g.index.max(), freq='D')
    g = g.reindex(dt_idx).reset_index(drop=True)
    members.append(g)

resampled_df = pd.concat(members)
# fill the gaps created by reindexing: carry the id forward, count missing days as 0 hits
resampled_df['member_id'] = resampled_df['member_id'].ffill()
resampled_df['hit'] = resampled_df['hit'].fillna(0)
# the per-member positional index becomes the 1-based day number
resampled_df.index += 1
resampled_df = (resampled_df.pivot_table(values='hit',
                                         index='member_id',
                                         columns=resampled_df.index,
                                         fill_value=0)
                .add_prefix('dailyCount_hit_'))
resampled_df.index = resampled_df.index.astype(int)
resampled_df.iloc[:, :14]
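
The groupby loop above still iterates in Python. Below is a minimal sketch of a loop-free variant (my addition, not part of the original answer) that uses GroupBy.resample to upsample every member to a daily grid in one pass and cumcount to number the days before pivoting; it assumes df is the sample frame above with a 'member_id' column and a DatetimeIndex named 'time':

# a sketch, assuming df has a DatetimeIndex ('time') and a 'member_id' column
daily = (df.groupby('member_id')['hit']
           .resample('D')        # daily grid per member, no explicit Python loop
           .asfreq()
           .fillna(0)
           .reset_index())
daily['daily_count'] = daily.groupby('member_id').cumcount() + 1
wide = (daily.pivot_table(values='hit', index='member_id',
                          columns='daily_count', fill_value=0)
             .add_prefix('dailyCount_hit_')
             .astype(int))
wide.iloc[:, :14]

Whether this is actually faster than the explicit loop depends on the number of groups, so it is worth timing both on the 180,000-row frame before settling on one.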

Regarding python - Pandas resampling for a for-loop performance issue, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/52474035/
