Python: Very strange behavior with multiprocessing; later code causes a "retroactive" slowdown of earlier code


I'm trying to learn how to implement multiprocessing for computing Monte Carlo simulations. I copied the code from this simple tutorial, where the goal is to compute an integral. I also compare the result against the answer from WolframAlpha and compute the error. The first part of my code causes no problems; it just defines the integrand and declares some constants:

import numpy as np
import multiprocessing as mp
import time

def integrate(iterations):
    np.random.seed()
    mc_sum = 0
    chunks = 10000
    chunk_size = int(iterations/chunks)

    for i in range(chunks):
        u = np.random.uniform(size=chunk_size)
        mc_sum += np.sum(np.exp(-u * u))

    normed = mc_sum / iterations
    return normed

wolfram_answer = 0.746824132812427
mc_iterations = 1000000000
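
As an aside, wolfram_answer is just the closed form sqrt(pi)/2 * erf(1) of the integral of exp(-u*u) on [0, 1]. A quick sanity check, not part of the original code, assuming scipy is available:

from math import sqrt, pi
from scipy.special import erf

# Closed form of the integral of exp(-u*u) over [0, 1]:
closed_form = sqrt(pi) / 2 * erf(1.0)
print(closed_form)  # 0.7468241328124271, matching wolfram_answer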

But something very spooky happens in the next two sections (I've labeled them, because it matters). First (labeled "BLOCK 1"), I run the simulation without any multiprocessing, just to get a benchmark. After that (labeled "BLOCK 2"), I do the same thing, but with a multiprocessing step. If you're reproducing this, you may want to adjust the num_procs variable to the number of cores your machine has:

#### BLOCK 1
single_before = time.time()
single = integrate(mc_iterations)
single_after = time.time()
single_duration = np.round(single_after - single_before, 3)
error_single = (wolfram_answer - single)/wolfram_answer

print(mc_iterations, "iterations on single-thread:",
      single_duration, "seconds.")
print("Estimation error:", error_single)
print("")

#### BLOCK 2
if __name__ == "__main__":
    num_procs = 8
    multi_iterations = int(mc_iterations / num_procs)

    multi_before = time.time()
    pool = mp.Pool(processes = num_procs)

    multi_result = pool.map(integrate, [multi_iterations]*num_procs)
    multi_result = np.array(multi_result).mean()
    multi_after = time.time()

    multi_duration = np.round(multi_after - multi_before, 3)
    error_multi = (wolfram_answer - multi_result)/wolfram_answer

    print(num_procs, "threads with", multi_iterations, "iterations each:",
          multi_duration, "seconds.")
    print("Estimation error:", error_multi)

The output is:

1000000000 iterations on single-thread: 37.448 seconds.
Estimation error: 1.17978774235e-05

8 threads with 125000000 iterations each: 54.697 seconds.
Estimation error: -5.88380936901e-06

So, the multiprocessing is slower. That's hardly unheard of; maybe the overhead of multiprocessing outweighs the gains from parallelization?

But that isn't actually what's going on. Watch what happens when I merely comment out the first block:

#### BLOCK 1
##single_before = time.time()
##single = integrate(mc_iterations)
##single_after = time.time()
##single_duration = np.round(single_after - single_before, 3)
##error_single = (wolfram_answer - single)/wolfram_answer
##
##print(mc_iterations, "iterations on single-thread:",
##      single_duration, "seconds.")
##print("Estimation error:", error_single)
##print("")

#### BLOCK 2
if __name__ == "__main__":
    num_procs = 8
    multi_iterations = int(mc_iterations / num_procs)

    multi_before = time.time()
    pool = mp.Pool(processes = num_procs)

    multi_result = pool.map(integrate, [multi_iterations]*num_procs)
    multi_result = np.array(multi_result).mean()
    multi_after = time.time()

    multi_duration = np.round(multi_after - multi_before, 3)
    error_multi = (wolfram_answer - multi_result)/wolfram_answer

    print(num_procs, "threads with", multi_iterations, "iterations each:",
          multi_duration, "seconds.")
    print("Estimation error:", error_multi)

The output is:

8 threads with 125000000 iterations each: 6.662 seconds.
Estimation error: 3.86063069069e-06

That's right: the time to complete the multiprocessing dropped from 55 seconds to under 7 seconds! And that's not even the weirdest part. Watch what happens when I move Block 1 to after Block 2:

#### BLOCK 2
if __name__ == "__main__":
    num_procs = 8
    multi_iterations = int(mc_iterations / num_procs)

    multi_before = time.time()
    pool = mp.Pool(processes = num_procs)

    multi_result = pool.map(integrate, [multi_iterations]*num_procs)
    multi_result = np.array(multi_result).mean()
    multi_after = time.time()

    multi_duration = np.round(multi_after - multi_before, 3)
    error_multi = (wolfram_answer - multi_result)/wolfram_answer

    print(num_procs, "threads with", multi_iterations, "iterations each:",
          multi_duration, "seconds.")
    print("Estimation error:", error_multi)

#### BLOCK 1
single_before = time.time()
single = integrate(mc_iterations)
single_after = time.time()
single_duration = np.round(single_after - single_before, 3)
error_single = (wolfram_answer - single)/wolfram_answer

print(mc_iterations, "iterations on single-thread:",
      single_duration, "seconds.")
print("Estimation error:", error_single)
print("")

The output is:

8 threads with 125000000 iterations each: 54.938 seconds.
Estimation error: 7.42415402896e-06
1000000000 iterations on single-thread: 37.396 seconds.
Estimation error: 9.79800494235e-06

We're back to the slow output, which is completely crazy! Isn't Python supposed to be interpreted? I know that statement comes with a hundred caveats, but I had taken it for granted that code is executed line by line, so that stuff which comes afterwards (outside of functions, classes, etc.) can't affect stuff which came before, because it hasn't been "looked at" yet.

So, how can stuff that gets executed after the multiprocessing step has finished retroactively slow down the multiprocessing code?

Finally, the fast behavior is restored simply by indenting Block 1 so that it sits inside the if __name__ == "__main__" block, because of course it is:

#### BLOCK 2
if __name__ == "__main__":
    num_procs = 8
    multi_iterations = int(mc_iterations / num_procs)

    multi_before = time.time()
    pool = mp.Pool(processes = num_procs)

    multi_result = pool.map(integrate, [multi_iterations]*num_procs)
    multi_result = np.array(multi_result).mean()
    multi_after = time.time()

    multi_duration = np.round(multi_after - multi_before, 3)
    error_multi = (wolfram_answer - multi_result)/wolfram_answer

    print(num_procs, "threads with", multi_iterations, "iterations each:",
          multi_duration, "seconds.")
    print("Estimation error:", error_multi)

    #### BLOCK 1
    single_before = time.time()
    single = integrate(mc_iterations)
    single_after = time.time()
    single_duration = np.round(single_after - single_before, 3)
    error_single = (wolfram_answer - single)/wolfram_answer

    print(mc_iterations, "iterations on single-thread:",
          single_duration, "seconds.")
    print("Estimation error:", error_single)
    print("")

The output is:

8 threads with 125000000 iterations each: 7.293 seconds.
Estimation error: 1.10350027622e-05
1000000000 iterations on single-thread: 31.035 seconds.
Estimation error: 2.53582945763e-05

The fast behavior is also preserved if Block 1 is kept inside the if block but moved above the definition of num_procs (not shown here, because this question is already getting long).

So, what on Earth is causing this behavior? I'm guessing it's some kind of race condition to do with threading and process forking, but given my level of expertise, it could just as well be that something is wrong with my Python interpreter.

Best Answer

It's because you are on Windows. On Windows, each subprocess is created using the 'spawn' method, which actually starts a new Python interpreter and imports your module, rather than forking the process.
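
You can confirm which start method your platform uses (a quick check, not in the original answer):

import multiprocessing as mp
print(mp.get_start_method())  # 'spawn' on Windows (and macOS since 3.8), 'fork' on most Linux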

This is a problem because all the code outside of if __name__ == '__main__' is executed again in every child. In your case, each of the eight spawned workers re-ran Block 1 (the 37-second single-threaded simulation) at startup before doing any useful work, which is why the pool appeared to take about 55 seconds. It can even lead to a multiprocessing bomb if you put the multiprocessing code itself at the top level: each child starts spawning processes of its own, until you run out of memory.
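
Here is a minimal sketch of the re-import behavior (demo_spawn.py is a hypothetical file name). Run it on Windows and the unguarded top-level print fires once in the parent and once more in every worker, because each child imports the module afresh:

# demo_spawn.py (hypothetical name)
import multiprocessing as mp

print("top level executed, __name__ =", __name__)  # re-runs in every spawned child

def work(x):
    return x * x

if __name__ == "__main__":
    with mp.Pool(processes=2) as pool:
        print(pool.map(work, [1, 2, 3]))  # guarded: runs only in the parent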

This is actually warned about in the docs:

Safe importing of main module

Make sure that the main module can be safely imported by a new Python interpreter without causing unintended side effects (such as starting a new process).

...

Instead one should protect the “entry point” of the program by using if __name__ == '__main__'

...

This allows the newly spawned Python interpreter to safely import the module...

That section used to be called "Windows" in the older, Python 2 versions of the docs.
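
Putting it together, here is a minimal sketch of a safe layout for the script above (not the asker's exact code: the iteration count is reduced and the chunking loop is dropped for brevity). Everything that should run only once lives inside a main() function behind the guard, so spawned workers can import the module without re-running either block:

import time
import numpy as np
import multiprocessing as mp

def integrate(iterations):
    # Same estimator as above, collapsed to a single chunk for brevity.
    np.random.seed()
    u = np.random.uniform(size=iterations)
    return np.sum(np.exp(-u * u)) / iterations

def main():
    mc_iterations = 10_000_000
    num_procs = mp.cpu_count()  # pick the core count automatically

    # BLOCK 1: single-process baseline, now safely behind the guard.
    t0 = time.time()
    single = integrate(mc_iterations)
    print("single:", np.round(time.time() - t0, 3), "s ->", single)

    # BLOCK 2: the multiprocessing version.
    t0 = time.time()
    with mp.Pool(processes=num_procs) as pool:
        parts = pool.map(integrate, [mc_iterations // num_procs] * num_procs)
    print("pool:  ", np.round(time.time() - t0, 3), "s ->", np.mean(parts))

if __name__ == "__main__":
    main()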

The original question and answer can be found on Stack Overflow: https://stackoverflow.com/questions/51697937/
