
python - Parallelizing a function in a for loop

Reposted. Author: 行者123. Updated: 2023-12-01 08:20:09

I have a function that I would like to parallelize.

import re
import multiprocessing as mp
from pathos.multiprocessing import ProcessingPool as Pool

cores = mp.cpu_count()

# create the multiprocessing pool
pool = Pool(cores)

def clean_preprocess(text):
    """
    Given a string of text, the function:
    1. Removes all punctuation and numbers and converts the text to lower case
    2. Handles the negation words defined above
    3. Keeps only tokens longer than one character
    """
    # n_pattern, n_dict and tok are defined elsewhere in the script
    cores = mp.cpu_count()
    pool = Pool(cores)
    lower = re.sub(r"[^a-zA-Z\s\']", "", text).lower()
    lower_neg_handled = n_pattern.sub(lambda x: n_dict[x.group()], lower)
    letters_only = re.sub(r"[^a-zA-Z\s]", "", lower_neg_handled)
    words = [i for i in tok.tokenize(letters_only) if len(i) > 1]  # parallelize this?
    return ' '.join(words)

I have been reading the multiprocessing documentation, but I am still a bit confused about how to parallelize my function properly. I would appreciate it if someone could point me in the right direction for parallelizing a function like this one.

Best Answer

In your function, you could parallelize by splitting the text into sub-parts, applying the tokenization to each sub-part, and then joining the results.

Something along these lines:

text0 = text[:len(text)//2]
text1 = text[len(text)//2:]

Then apply the processing to these two parts with:

# here, I suppose that clean_preprocess is the sequential version,
# and we manage the pool outside of it
with Pool(2) as p:
    words0, words1 = p.map(clean_preprocess, [text0, text1])
words = words0 + words1
# or continue with words0 and words1 separately to save the cost of joining
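Putting the two fragments together, here is a self-contained sketch of the split-and-join approach using the stdlib `multiprocessing.Pool`. Since `n_pattern`, `n_dict` and `tok` are not shown in the question, the negation dictionary below is a hypothetical stand-in and the tokenizer is replaced by a plain whitespace split:

```python
import re
from multiprocessing import Pool

# Hypothetical stand-ins for the asker's undefined n_dict / n_pattern
n_dict = {"can't": "can not", "won't": "will not"}
n_pattern = re.compile("|".join(re.escape(k) for k in n_dict))

def clean_preprocess(text):
    """Sequential version: lowercase, expand negations, keep tokens longer than 1 char."""
    lower = re.sub(r"[^a-zA-Z\s\']", "", text).lower()
    lower_neg_handled = n_pattern.sub(lambda m: n_dict[m.group()], lower)
    letters_only = re.sub(r"[^a-zA-Z\s]", "", lower_neg_handled)
    words = [w for w in letters_only.split() if len(w) > 1]
    return " ".join(words)

if __name__ == "__main__":
    text = "I can't believe it! 123 This won't work..."
    half = len(text) // 2
    # process the two halves in parallel, then join the results
    with Pool(2) as p:
        part0, part1 = p.map(clean_preprocess, [text[:half], text[half:]])
    print(part0 + " " + part1)
```

Note that cutting the string at `len(text)//2` can land in the middle of a word, so the token at the boundary may be split in two; a more careful version would cut at the nearest whitespace.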

However, your function appears to be memory-bound, so it will not see a dramatic speedup (a factor of 2 is typically the best you can hope for on a standard machine these days); see, for example, How much does parallelization help the performance if the program is memory-bound? and What do the terms "CPU bound" and "I/O bound" mean?

You could therefore try splitting the text into more than two parts, but it may not be any faster. You might even see disappointing performance, because splitting the text can cost more than processing it.
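If you do want more than two chunks, it is worth cutting only at whitespace so that no word is broken at a chunk boundary. A minimal sketch (`split_on_spaces` and the trivial `worker` are illustrative helpers, not part of the original answer):

```python
from multiprocessing import Pool, cpu_count

def split_on_spaces(text, n):
    """Split text into roughly n chunks, cutting only at spaces."""
    step = max(1, len(text) // n)
    chunks, start = [], 0
    while start < len(text):
        end = min(start + step, len(text))
        # push the cut forward to the next space so no word is halved
        while end < len(text) and text[end] != " ":
            end += 1
        chunks.append(text[start:end])
        start = end
    return chunks

def worker(chunk):
    # stand-in for the sequential clean_preprocess
    return chunk.strip()

if __name__ == "__main__":
    text = "the quick brown fox jumps over the lazy dog"
    chunks = split_on_spaces(text, cpu_count())
    with Pool() as p:
        results = p.map(worker, chunks)
    print(" ".join(r for r in results if r))
```

Because the chunks concatenate back to the original string, nothing is lost at the boundaries; whether the extra chunks pay off still depends on the splitting overhead discussed above.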

Regarding python - Parallelizing a function in a for loop, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/54699149/
