
python - How to force-shutdown a ProcessPoolExecutor even when a deadlock exists

Reprinted · Author: 行者123 · Updated: 2023-12-05 04:32:55

I am trying to stream data in a separate process via concurrent futures. Sometimes, however, the other side stops the data feed; as soon as I restart this threadable, it works again. So I designed something like the following to keep the data streaming without manual intervention.

import concurrent.futures
import time

executor = concurrent.futures.ProcessPoolExecutor()
job2 = executor.submit(threadable, list_tmp_replace)
time.sleep(3600)
executor_tmp = executor
executor = concurrent.futures.ProcessPoolExecutor(1)
job2 = executor.submit(threadable, list_tmp_replace_2)
time.sleep(20)  # warm up the new process
executor_tmp.shutdown()  # avoid an ever-growing number of pools; threadable also writes to a database, so duplicate tasks are best avoided

However, I get this error:

File "/home/ubuntu/anaconda3/lib/python3.8/asyncio/tasks.py", line 280, in __step
result = coro.send(None)
File "/home/ubuntu/anaconda3/lib/python3.8/site-packages/cryptofeed/backends/postgres.py", line 61, in writer
await self.write_batch(updates)
File "/home/ubuntu/anaconda3/lib/python3.8/site-packages/cryptofeed/backends/postgres.py", line 75, in write_batch
await self.conn.execute(f"INSERT INTO {self.table} VALUES {args_str}")
File "/home/ubuntu/anaconda3/lib/python3.8/site-packages/asyncpg/connection.py", line 315, in execute
return await self._protocol.query(query, timeout)
File "asyncpg/protocol/protocol.pyx", line 338, in query
File "/home/ubuntu/anaconda3/lib/python3.8/asyncio/futures.py", line 260, in __await__
yield self # This tells Task to wait for completion.
File "/home/ubuntu/anaconda3/lib/python3.8/asyncio/tasks.py", line 349, in __wakeup
future.result()
File "/home/ubuntu/anaconda3/lib/python3.8/asyncio/futures.py", line 178, in result
raise self._exception
asyncpg.exceptions.DeadlockDetectedError: deadlock detected
DETAIL: Process 2576028 waits for ShareLock on transaction 159343645; blocked by process 2545736.
Process 2545736 waits for ShareLock on transaction 159343644; blocked by process 2576028.
HINT: See server log for query details.

Previously, I manually shut down the Python program (Ctrl-C) and restarted it from the terminal (using screen). But I would like this process to be automatic, controlled by the code itself, so that it reconnects to the data feed on its own. Is there any way I can force-close the deadlocked work from within the same Python program?

Best Answer

Your code suggests that it is acceptable for two instances of threadable to run concurrently, at least for some overlapping period, and that you unconditionally want to start a new instance of threadable once 3600 seconds have elapsed. That is all I have to go on, and based on it my only suggestion is that you consider switching to the multiprocessing.pool.Pool class for your multiprocessing pool. Its advantages are that (1) it is a different class from the one you have been using, which for no other reason might produce different results, and (2) unlike the ProcessPoolExecutor.shutdown method, the Pool.terminate method actually terminates running jobs immediately (ProcessPoolExecutor.shutdown waits for jobs that have already started, i.e. pending futures, to complete, even if you specify shutdown(wait=False), which you did not).

The equivalent code using multiprocessing.pool.Pool would be:

from multiprocessing import Pool
...

# Only need a pool size of 1:
pool = Pool(1)
job2 = pool.apply_async(threadable, args=(list_tmp_replace,))
time.sleep(3600)
pool_tmp = pool
pool = Pool(1)
job2 = pool.apply_async(threadable, args=(list_tmp_replace_2,))
time.sleep(20) #warm up the new process
pool_tmp.terminate()
pool_tmp.join()

But why use a pool at all to run a single process? Consider using a multiprocessing.Process instance instead:

from multiprocessing import Process
...

job2 = Process(target=threadable, args=(list_tmp_replace,))
job2.start()
time.sleep(3600)
job2_tmp = job2
job2 = Process(target=threadable, args=(list_tmp_replace_2,))
job2.start()
time.sleep(20)  # warm up the new process
job2_tmp.terminate()

Regarding "python - How to force-shutdown a ProcessPoolExecutor even when a deadlock exists", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/71566336/
