
python - Multiprocessing.Pool with a function that has multiple args and kwargs


I want to run calculations in parallel using multiprocessing.Pool. The problem is that the function I want to use takes two positional arguments plus optional kwargs: the first argument is a DataFrame, the second is a str, and the kwargs are a dictionary.

The DataFrame and the dictionary are the same for every calculation I want to run; only the second argument keeps changing. I was therefore hoping to bind the df and the dict into the function with partial and then pass the varying strings as a list to the map method.

from utils import *
import multiprocessing
from functools import partial


def sumifs(df, result_col, **kwargs):

    compare_cols = list(kwargs.keys())
    operators = {}
    for col in compare_cols:
        if type(kwargs[col]) == tuple:
            operators[col] = kwargs[col][0]
            kwargs[col] = list(kwargs[col][1])
        else:
            operators[col] = operator.eq
            kwargs[col] = list(kwargs[col])
    result = []
    cache = {}
    # Go through each value
    for i in range(len(kwargs[compare_cols[0]])):
        compare_values = [kwargs[col][i] for col in compare_cols]
        cache_key = ','.join([str(s) for s in compare_values])
        if cache_key in cache:
            entry = cache[cache_key]
        else:
            df_copy = df.copy()
            for compare_col, compare_value in zip(compare_cols, compare_values):
                df_copy = df_copy.loc[operators[compare_col](df_copy[compare_col], compare_value)]
            entry = df_copy[result_col].sum()
            cache[cache_key] = entry
        result.append(entry)
    return pd.Series(result)


if __name__ == '__main__':

    ca = read_in_table('Tab1')
    total_consumer_ids = len(ca)

    base = pd.DataFrame()
    base['ID'] = range(1, total_consumer_ids + 1)

    result_col = ['A', 'B', 'C']
    keywords = {'Z': base['Consumer archetype ID']}

    max_number_processes = multiprocessing.cpu_count()
    with multiprocessing.Pool(processes=max_number_processes) as pool:
        results = pool.map(partial(sumifs, a=ca, kwargs=keywords), result_col)
    print(results)

However, when I run the code above I get the following error: TypeError: sumifs() missing 1 required positional argument: 'result_col'. How can I provide the function with its first arg and the kwargs while supplying the second argument as a list of strings, so that I can compute in parallel? I have read several similar questions on the forum, but none of the solutions seem to work for this case...

Thank you, and I apologize if anything is unclear; I only learned about the multiprocessing package today!

Best Answer

Let's look at two parts of the code.

First, the sumifs function declaration:

def sumifs(df, result_col, **kwargs):

Second, the call to that function with the relevant arguments:

# Those are the params
ca = read_in_table('Tab1')
keywords = {'Z': base['Consumer archetype ID']}

# This is the function call
results = pool.map(partial(sumifs, a=ca, kwargs=keywords), result_col)
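
To see why this raises the TypeError from the question, here is a minimal sketch with a stand-in function and made-up values (not the real ca or keywords): partial binds only the keyword arguments a and kwargs, so the single string that pool.map supplies for each element of result_col lands on df, and result_col is never filled.

from functools import partial

# Stand-in for sumifs with the same signature shape (names here are made up).
def sumifs_like(df, result_col, **kwargs):
    return df, result_col, kwargs

# partial(sumifs, a=ca, kwargs=keywords) binds two *keyword* arguments, 'a' and
# 'kwargs'; neither of them fills the positional parameter result_col.
bound = partial(sumifs_like, a='ca_stand_in', kwargs={'Z': [1, 2, 3]})

# pool.map then calls bound('A'), i.e. sumifs_like('A', a=..., kwargs=...):
# 'A' is bound to df and result_col is never supplied.
try:
    bound('A')
except TypeError as exc:
    print(exc)  # sumifs_like() missing 1 required positional argument: 'result_col'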

Update 1:

After the original code was edited, it looks like the problem is in the positional-argument assignment; try dropping it.

Replace the line:

results = pool.map(partial(sumifs, a=ca, kwargs=keywords), result_col)

with a call that binds ca positionally (so it fills df) and unpacks keywords into **kwargs, leaving the single value supplied by pool.map to fill result_col:

results = pool.map(partial(sumifs, ca, **keywords), result_col)

Example code:

import multiprocessing
from functools import partial

def test_func(arg1, arg2, **kwargs):
    print(arg1)
    print(arg2)
    print(kwargs)
    return arg2

if __name__ == '__main__':
    list_of_args2 = [1, 2, 3]
    just_a_dict = {'key1': 'Some value'}
    with multiprocessing.Pool(processes=3) as pool:
        results = pool.map(partial(test_func, 'This is arg1', **just_a_dict), list_of_args2)
    print(results)

This will output (the print lines may interleave in a different order across the worker processes):

This is arg1
1
{'key1': 'Some value'}
This is arg1
2
{'key1': 'Some value'}
This is arg1
3
{'key1': 'Some value'}
[1, 2, 3]

More examples of multiprocessing.Pool with a function that has multiple args and kwargs can be found in the question linked at the end of this post.


Update 2:

Extended example (prompted by a comment):

I wonder, however, in the same fashion: if my function had three args and kwargs, and I wanted to keep arg1, arg3 and kwargs constant, how could I pass arg2 as a list for multiprocessing? In essence, how do I indicate to multiprocessing that in map(partial(test_func, 'This is arg1', 'This would be arg3', **just_a_dict), arg2) the second value in partial corresponds to arg3 and not arg2?

The code from Update 1 would change as follows:

# The function signature
def test_func(arg1, arg2, arg3, **kwargs):

# The map call
pool.map(partial(test_func, 'This is arg1', arg3='This is arg3', **just_a_dict), list_of_args2)

This works thanks to Python's positional and keyword argument assignment. Note that the kwargs are left aside and are not assigned with a keyword, even though they come after a keyword-assigned value.
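
For completeness, here is a runnable sketch of that variant, reusing the names from the Update 1 example (the printed lines may interleave across worker processes):

import multiprocessing
from functools import partial

def test_func(arg1, arg2, arg3, **kwargs):
    print(arg1, arg2, arg3, kwargs)
    return arg2

if __name__ == '__main__':
    list_of_args2 = [1, 2, 3]
    just_a_dict = {'key1': 'Some value'}
    with multiprocessing.Pool(processes=3) as pool:
        # arg1 is bound positionally, arg3 by keyword, and just_a_dict is
        # unpacked into **kwargs, so each value from list_of_args2 fills arg2.
        results = pool.map(
            partial(test_func, 'This is arg1', arg3='This is arg3', **just_a_dict),
            list_of_args2,
        )
    print(results)  # [1, 2, 3]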

More information about the differences in argument assignment can be found here.

About python - Multiprocessing.Pool with a function that has multiple args and kwargs: we found a similar question on Stack Overflow: https://stackoverflow.com/questions/59756754/
