
python - Constructing a mode and corresponding count function with a custom GroupBy aggregation in Dask


So Dask has now been updated to support custom aggregation functions for groupby. (Thanks to the dev team and @chmp for the work!) I am currently trying to construct a mode function and a corresponding count function. Basically, what I envision is that mode returns, for each grouping, a list of the most common values in a particular column, e.g. [4, 1, 2]. In addition, a corresponding count function returns the number of instances of those values, i.e. 3.

Now I am trying to implement this in code. According to the groupby.py file, the parameters of a custom aggregation are as follows:

Parameters
----------
name : str
the name of the aggregation. It should be unique, since intermediate
result will be identified by this name.
chunk : callable
a function that will be called with the grouped column of each
partition. It can either return a single series or a tuple of series.
The index has to be equal to the groups.
agg : callable
a function that will be called to aggregate the results of each chunk.
Again the argument(s) will be grouped series. If ``chunk`` returned a
tuple, ``agg`` will be called with all of them as individual positional
arguments.
finalize : callable
an optional finalizer that will be called with the results from the
aggregation.

Here is the mean implementation provided as an example:

custom_mean = dd.Aggregation(
    'custom_mean',
    lambda s: (s.count(), s.sum()),
    lambda count, sum: (count.sum(), sum.sum()),
    lambda count, sum: sum / count,
)
df.groupby('g').agg(custom_mean)
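The three callables correspond to a per-partition step, a cross-partition combine step, and a final step. A pandas-only sketch of the same arithmetic, using hypothetical toy data standing in for a single partition, might look like:

```python
import pandas as pd

# hypothetical toy data standing in for one dask partition
df = pd.DataFrame({'g': [0, 0, 1, 1], 'col': [1.0, 3.0, 2.0, 4.0]})

# chunk: per-partition partial results, (count, sum) per group
grouped = df.groupby('g')['col']
count, total = grouped.count(), grouped.sum()

# agg: combine the partials from all partitions by summing per group
count = count.groupby(level=0).sum()
total = total.groupby(level=0).sum()

# finalize: divide to obtain the mean per group
mean = total / count
print(mean.to_dict())  # {0: 2.0, 1: 3.0}
```

With a single partition the combine step is a no-op, but it shows why `chunk` may return a tuple: each element is combined separately before `finalize` sees them.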

I am struggling to work out the best way to do this. Currently I have the following functions:

from collections import Counter

def custom_count(x):
    count = Counter(x)
    # wrap in list(): a dict view has no .count() method
    freq_list = list(count.values())
    max_cnt = max(freq_list)
    total = freq_list.count(max_cnt)
    return count.most_common(total)
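Walking those same steps inline on a hypothetical input shows what the helper is computing: `Counter.most_common` keeps insertion order among ties, so all values sharing the highest frequency are reported together with their counts.

```python
from collections import Counter

values = [4, 1, 1, 2, 2]          # hypothetical grouped-column values
count = Counter(values)
freq_list = list(count.values())   # list() needed: dict views lack .count()
max_cnt = max(freq_list)           # highest frequency observed
ties = freq_list.count(max_cnt)    # how many values share that frequency
print(count.most_common(ties))     # [(1, 2), (2, 2)] — both tied modes
```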

custom_mode = dd.Aggregation(
    'custom_mode',
    lambda s: custom_count(s),
    lambda s1: s1.extend(),
    lambda s2: ......
)

However, I have been unable to understand how exactly the agg part is supposed to work. Any help on this problem would be greatly appreciated.

Thanks!

Best Answer

Admittedly, the docs are currently somewhat light on detail. Thank you for bringing this issue to my attention. Please let me know if this answer helps, and I will contribute an updated version of the docs to dask.

To your question: for a single return value, the different steps of the aggregation are equivalent to:

res = chunk(df.groupby('g')['col'])
res = agg(res.groupby(level=[0]))
res = finalize(res)
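To make those three lines concrete, here is a pandas-only toy run (hypothetical data) where `chunk` and `agg` are both a sum and `finalize` is the identity, so the whole pipeline reduces to an ordinary grouped sum:

```python
import pandas as pd

# toy frame; chunk/agg/finalize chosen so the pipeline is a plain grouped sum
df = pd.DataFrame({'g': [0, 0, 1], 'col': [1, 2, 3]})

chunk = lambda g: g.sum()    # partial sums within each partition
agg = lambda g: g.sum()      # combine the partials across partitions
finalize = lambda s: s       # nothing left to do

res = chunk(df.groupby('g')['col'])
res = agg(res.groupby(level=[0]))
res = finalize(res)
print(res.to_dict())  # {0: 3, 1: 3}
```

The key point is that `agg` receives the `chunk` outputs re-grouped on the original group labels, which is why it must be able to combine partial results.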

In these terms, the mode function can be implemented as follows:

def chunk(s):
    # for the comments, assume only a single grouping column; the
    # implementation can handle multiple group columns.
    #
    # s is a grouped series. value_counts creates a multi-series like
    # (group, value): count
    return s.value_counts()


def agg(s):
    # s is a grouped multi-index series. In .apply the full sub-df will be
    # passed, multi-index and all. Group on the value level and sum the
    # counts. The result of the lambda function is a series. Therefore, the
    # result of the apply is a multi-index series like (group, value): count
    return s.apply(lambda s: s.groupby(level=-1).sum())

    # faster version using pandas internals
    s = s._selected_obj
    return s.groupby(level=list(range(s.index.nlevels))).sum()


def finalize(s):
    # s is a multi-index series of the form (group, value): count. First
    # manually group on the group part of the index. The lambda will receive a
    # sub-series with multi index. Next, drop the group part from the index.
    # Finally, determine the index label with the maximum count, i.e., the
    # mode. (idxmax returns the index label; in current pandas, Series.argmax
    # returns a position instead.)
    level = list(range(s.index.nlevels - 1))
    return (
        s.groupby(level=level)
        .apply(lambda s: s.reset_index(level=level, drop=True).idxmax())
    )

mode = dd.Aggregation('mode', chunk, agg, finalize)

Note that this implementation does not match the dataframe's .mode function: in the case of a tie, this version returns one of the tied values instead of all of them.
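The difference is easy to see in plain pandas with a hypothetical two-way tie: `Series.mode` reports every tied value, whereas taking the `idxmax` of the value counts keeps exactly one of them.

```python
import pandas as pd

s = pd.Series([2, 2, 3, 3])       # hypothetical column with a two-way tie

print(s.mode().tolist())          # pandas .mode reports every tied value
print(s.value_counts().idxmax())  # idxmax on the counts keeps just one
```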

The mode aggregation can now be used as follows:

import pandas as pd
import dask.dataframe as dd

df = pd.DataFrame({
    'col': [0, 1, 1, 2, 3] * 10,
    'g0': [0, 0, 0, 1, 1] * 10,
    'g1': [0, 0, 0, 1, 1] * 10,
})
ddf = dd.from_pandas(df, npartitions=10)

res = ddf.groupby(['g0', 'g1']).agg({'col': mode}).compute()
print(res)

On python - Constructing a mode and corresponding count function with a custom GroupBy aggregation in Dask, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/46080171/
