
python - How to manage a task queue in Python and run those tasks in parallel on multiple computers?


I am looking for a Python library that lets me: manage a task queue, run tasks in parallel (on one or more computers), allow a task to spawn further tasks into the queue, and work on both UNIX and Windows.

I have read some of the documentation for Celery, RQ, SCoOP and multiprocessing on the task-manager side, and for redis, RabbitMQ and ZMQ on the message-broker side, but I really don't know which is the best option.
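As a point of reference only (this illustration is mine, not part of the question or the answer below): the "a task can spawn other tasks" requirement maps naturally onto Celery, one of the libraries listed above. A minimal sketch, assuming a local Redis broker (the broker URL is an assumption; RabbitMQ or another supported broker works too):

    from celery import Celery

    # Broker URL is an assumption; any Celery-supported broker will do.
    app = Celery('tasks', broker='redis://localhost:6379/0')

    @app.task
    def countdown(n):
        print('working on', n)
        if n > 0:
            # A task may enqueue further tasks from inside itself,
            # which covers the "task spawns other tasks" requirement.
            countdown.delay(n - 1)

Workers started with "celery -A tasks worker" on any number of machines would then consume from the same queue.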

Best Answer

Consider the Python multiprocessing library.

It supports many multiprocessing patterns, such as running several processes as a pool of workers fed from a work queue. It runs on a single server, but you could implement a connector that executes the work on another server (for example over SSH, running a Python executable remotely).
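Such a connector is not part of multiprocessing itself. As a rough illustration only, a minimal sketch using just the standard library could look like the following; the host name, remote path and worker.py script are hypothetical, and key-based SSH authentication is assumed:

    import subprocess

    def run_remotely(host, run_id):
        # Hypothetical "connector": execute a worker script on another
        # machine over SSH and return its output. Assumes passwordless
        # (key-based) SSH and that worker.py exists on the remote host.
        result = subprocess.run(
            ['ssh', host, 'python3', '/opt/myapp/worker.py', str(run_id)],
            capture_output=True, text=True, check=True,
        )
        return result.stdout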

Otherwise I don't know of a Python library that works both across servers and across platforms. For that you might need a containerized application, for example on Kubernetes.

Below is some example code I wrote that adds "run ids" to a queue, each representing a runnable task. These can then be executed in parallel by a pool of workers.

    import time
    import logging
    from multiprocessing import Queue, Pool
    from queue import Empty  # Python 3; on Python 2 this was "from Queue import Empty"

    # For writing to logs when using multiprocessing
    from multiprocessing_logging import install_mp_handler


    class RuntimeHelper:
        """
        Wrapper around your "runtime" which can execute runs and is persistent
        within a worker process.
        """
        def __init__(self):
            # Implement your own code here.
            # Do some initialisation such as creating DB connections etc.
            # Runs once per worker, when the worker starts.
            pass

        def execute_run(self, run_id):
            # Implement your own code here to actually do the Run/Task.
            # In this case we just sleep for 30 secs instead of doing any real work.
            time.sleep(30)


    def worker(run_id_queue):
        """
        Executed once per process in the Pool created by multiprocessing.Pool.
        :param run_id_queue: the process-safe Queue of run_ids to consume
        """
        helper = RuntimeHelper()
        logging.info("Starting")
        # Iterate runs until told to die
        while True:
            try:
                run_id = run_id_queue.get_nowait()
                # run_id=None is a signal to this process to die.
                # An empty queue means: don't die, the queue is just empty
                # for now and more work could be added soon.
                if run_id is not None:
                    logging.info("run_id={0}".format(run_id))
                    helper.execute_run(run_id)
                else:
                    logging.info("Kill signal received")
                    return True
            except Empty:
                # Wait before checking for new work
                time.sleep(15)


    if __name__ == '__main__':
        num_processes = 10
        check_interval_seconds = 15
        max_runtime_seconds = 60 * 15

        # ==========================================
        # INITIALISATION
        # ==========================================
        logging.basicConfig(level=logging.INFO)
        install_mp_handler()  # Must be called before the Pool is created

        queue = Queue()
        pool = Pool(num_processes, worker, (queue,))
        # don't forget the comma here ^

        # ==========================================
        # LOOP
        # ==========================================

        logging.info('Starting to do work')

        # Naive wait-loop implementation
        max_iterations = max_runtime_seconds // check_interval_seconds
        for i in range(max_iterations):
            # Add work
            ready_runs = []  # <Your code to get some runs>
            for ready_run in ready_runs:
                queue.put(ready_run.id)
            # Sleep while some of the runs are busy
            logging.info('Main thread sleeping {0} of {1}'.format(i, max_iterations))
            time.sleep(check_interval_seconds)

        # Empty the queue of remaining work, then send the kill signal (run_id=None)
        logging.info('Finishing up')
        while True:
            try:
                queue.get_nowait()
            except Empty:
                break
        for i in range(num_processes):
            queue.put(None)
        logging.info('Waiting for subprocesses')

        # Wait for the pool to finish what it is busy with
        pool.close()
        pool.join()
        logging.info('Done')
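A note on the design: Pool is used somewhat unconventionally here. worker is passed as the Pool's initializer, so each of the num_processes processes spends its whole lifetime inside the consume loop rather than being handed individual tasks; pool.close() and pool.join() then simply wait for those loops to return once every process has received its None kill signal.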

Regarding "python - How to manage a task queue in Python and run those tasks in parallel on multiple computers?", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/55845182/
