
python - Recovering lost multiprocessing.Queue items when a worker process dies


My scenario is this:

  • I have producers that enqueue tasks onto a multiprocessing.Queue() if said queue is empty. This ensures that task execution follows a certain priority ordering, which multiprocessing.Queue() does not enforce by itself.
  • A number of workers pop from the mp.Queue and do some work. Occasionally (<0.1%) a task fails and the worker dies without being able to re-queue the task.
  • My tasks are locked via a central database and may only run once (a hard requirement). To that end, each task has specific states it can transition from and to.

My current solution: have all workers report which tasks are finished via another queue, and introduce a deadline by which a task must be finished. If the deadline is reached, the task is reset and re-queued. The problem is that this solution is "soft": the deadline is arbitrary.
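The deadline-based workaround described above could be sketched roughly like this. This is a minimal sketch only; the `in_flight` bookkeeping, the `DEADLINE` value, and the helper names are all made up for illustration, not taken from the question:

```python
import time

DEADLINE = 5.0   # seconds; arbitrary, which is exactly the stated weakness
in_flight = {}   # task_id -> monotonic time it was handed to a worker
done = set()     # task_ids that finished successfully

def mark_started(task_id):
    """Record that a worker picked up this task."""
    in_flight[task_id] = time.monotonic()

def mark_finished(task_id):
    """Record a completion reported via the 'finished' queue."""
    in_flight.pop(task_id, None)
    done.add(task_id)

def requeue_expired(requeue):
    """Reset any task whose deadline has passed and hand it to `requeue`."""
    now = time.monotonic()
    for task_id, started in list(in_flight.items()):
        if now - started > DEADLINE:
            del in_flight[task_id]
            requeue(task_id)  # e.g. todo_q.put(task_id)
```

The weakness is visible in the code: a slow-but-healthy worker that exceeds `DEADLINE` gets its task reset even though it is still running, which the once-only requirement cannot tolerate.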

I am looking for the simplest possible solution. Is there a simpler or stricter approach?

Best Answer

This solution uses three queues to track the work (simulated here as WORK_ID values):

  • todo_q: any work to be done (including work to be redone if a process dies mid-task)
  • start_q: any work that has already been started by a process
  • finish_q: any work that has already been finished

With this approach you don't need a timer. As long as you assign a process identifier and keep track of the assignments, you can check Process.is_alive(). If a process has died, add its work back onto the todo queue.
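The liveness check at the heart of this approach can be shown in isolation before the full listing. The `_crash_worker` helper is made up for the demo, and the explicit `fork` context is an assumption (POSIX only) so the snippet behaves the same regardless of the platform's default start method:

```python
import os
import multiprocessing

def _crash_worker():
    # Simulate a worker that dies mid-task without any cleanup.
    os._exit(1)

def spawn_and_detect():
    """Start a worker, wait for it to die, and report its liveness."""
    ctx = multiprocessing.get_context('fork')  # POSIX-only, for the demo
    p = ctx.Process(target=_crash_worker)
    p.start()
    p.join()                    # the real controller polls is_alive() instead
    return p.is_alive(), p.exitcode
```

After the child dies, `is_alive()` returns False and `exitcode` carries the exit status, which is all the controller needs in order to re-queue that worker's assignment.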

In the code below, I simulate a worker process dying part of the time before finishing its task...

from multiprocessing import Process, Queue
from queue import Empty
from random import choice as rndchoice
import time

def worker(id, todo_q, start_q, finish_q):
    """multiprocessing worker"""
    msg = None
    while msg != 'DONE':
        try:
            msg = todo_q.get_nowait()      # Poll non-blocking on todo_q
            if msg != 'DONE':
                start_q.put((id, msg))     # Let the controller know work started
                time.sleep(0.05)
                if rndchoice(range(3)) == 1:
                    # Die a fraction of the time before finishing
                    print("DEATH to worker %s who had task=%s" % (id, msg))
                    break
                finish_q.put((id, msg))    # Acknowledge work finished
        except Empty:
            pass
    return

if __name__ == '__main__':
    NUM_WORKERS = 5
    WORK_ID = set(['A', 'B', 'C', 'D', 'E'])  # Work to be done; you will need to
                                              # name work items so they are unique
    WORK_DONE = set()        # Work that has been done
    ASSIGNMENTS = dict()     # Who was assigned which task
    workers = dict()
    todo_q = Queue()
    start_q = Queue()
    finish_q = Queue()

    print("Starting %s tasks" % len(WORK_ID))
    # Add work
    for work in WORK_ID:
        todo_q.put(work)

    # Spawn workers
    for ii in range(NUM_WORKERS):
        p = Process(target=worker, args=(ii, todo_q, start_q, finish_q))
        workers[ii] = p
        p.start()

    finished = False
    while True:
        try:
            start_ack = start_q.get_nowait()  # Poll for work started
            # Check for race condition between start_ack and finished_ack
            if not ASSIGNMENTS.get(start_ack[0], False):
                ASSIGNMENTS[start_ack[0]] = start_ack  # Track the assignment
                print("ASSIGNED worker=%s task=%s" % (start_ack[0],
                    start_ack[1]))
                WORK_ID.remove(start_ack[1])  # Account for started tasks
            else:
                # Race condition.  Never overwrite existing assignments;
                # wait until the ASSIGNMENT is cleared
                start_q.put(start_ack)
        except Empty:
            pass

        try:
            finished_ack = finish_q.get_nowait()  # Poll for work finished
            # Check for race condition between start_ack and finished_ack;
            # .get() avoids a KeyError if the start_ack has not arrived yet
            if ASSIGNMENTS.get(finished_ack[0], (None, None))[1] == finished_ack[1]:
                # Clean up after the finished task
                print("REMOVED worker=%s task=%s" % (finished_ack[0],
                    finished_ack[1]))
                del ASSIGNMENTS[finished_ack[0]]
                WORK_DONE.add(finished_ack[1])
            else:
                # Race condition.  It was received out of order...
                # wait for the 'start_ack'
                finish_q.put(finished_ack)
        except Empty:
            pass

        # Look for any dead workers, and put their work back on the todo_q
        if not finished:
            for id, p in list(workers.items()):
                if not p.is_alive():
                    if id not in ASSIGNMENTS:
                        # Its start_ack has not been processed yet;
                        # handle this dead worker on a later pass
                        continue
                    print(" WORKER %s FAILED!" % id)
                    # Add the work back to the queue...
                    todo_q.put(ASSIGNMENTS[id][1])
                    WORK_ID.add(ASSIGNMENTS[id][1])
                    del ASSIGNMENTS[id]  # Worker is dead now
                    del workers[id]
                    ii += 1
                    print("Spawning worker number", ii)
                    # Respawn a worker to replace the one that died
                    p = Process(target=worker, args=(ii, todo_q, start_q,
                        finish_q))
                    workers[ii] = p
                    p.start()
        else:
            for id, p in list(workers.items()):
                p.join()
                del workers[id]
            break

        if (WORK_ID == set()) and (not ASSIGNMENTS):
            finished = True
            for x in range(NUM_WORKERS):
                todo_q.put('DONE')
    print("We finished %s tasks" % len(WORK_DONE))

Running this on my laptop...

mpenning@mpenning-T61:~$ python queueack.py
Starting 5 tasks
ASSIGNED worker=2 task=C
ASSIGNED worker=0 task=A
ASSIGNED worker=4 task=B
ASSIGNED worker=3 task=E
ASSIGNED worker=1 task=D
DEATH to worker 4 who had task=B
DEATH to worker 3 who had task=E
WORKER 3 FAILED!
Spawning worker number 5
WORKER 4 FAILED!
Spawning worker number 6
REMOVED worker=2 task=C
REMOVED worker=0 task=A
REMOVED worker=1 task=D
ASSIGNED worker=0 task=B
ASSIGNED worker=2 task=E
REMOVED worker=2 task=E
DEATH to worker 0 who had task=B
WORKER 0 FAILED!
Spawning worker number 7
ASSIGNED worker=5 task=B
REMOVED worker=5 task=B
We finished 5 tasks
mpenning@mpenning-T61:~$

I tested this with a 25% death rate on more than 10,000 work items.

Regarding "python - Recovering lost multiprocessing.Queue items when a worker process dies", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/8532215/
