python - asyncio: prevent task from being cancelled twice


Sometimes the cleanup code of my coroutines contains blocking parts (in the asyncio sense, i.e. they may yield).

I try to design them carefully so that they don't block indefinitely. So "by contract", a coroutine must never be interrupted once it has entered its cleanup fragment.

Unfortunately, I can't find a way to prevent this, and bad things happen when it does occur (whether it's caused by an actual double cancel() call, or by the coroutine being almost finished on its own, doing its cleanup, and happening to get cancelled from elsewhere).

In theory I could delegate the cleanup to some other function, protect it with shield, and surround it with a try/except loop, but that's just ugly.
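For illustration, the delegate-and-shield workaround described above might look roughly like this sketch (the cleanup() coroutine and the shielding loop are illustrative, not code from the original question):

@asyncio.coroutine
def cleanup():
    # hypothetical blocking cleanup step, e.g. uploading results
    yield from asyncio.sleep(1)

@asyncio.coroutine
def foo_delegated_cleanup():
    try:
        yield from asyncio.sleep(1)
    finally:
        # run the cleanup as a separate task and keep re-shielding it:
        # cancellations aimed at this coroutine are swallowed until
        # the cleanup task has actually finished
        t = asyncio.async(cleanup())
        while not t.done():
            try:
                yield from asyncio.shield(t)
            except asyncio.CancelledError:
                pass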

Is there a Pythonic way to do it?

#!/usr/bin/env python3

import asyncio

@asyncio.coroutine
def foo():
    """
    This is the function in question,
    with blocking cleanup fragment.
    """

    try:
        yield from asyncio.sleep(1)
    except asyncio.CancelledError:
        print("Interrupted during work")
        raise
    finally:
        print("I need just a couple more seconds to cleanup!")
        try:
            # upload results to the database, whatever
            yield from asyncio.sleep(1)
        except asyncio.CancelledError:
            print("Interrupted during cleanup :(")
        else:
            print("All cleaned up!")

@asyncio.coroutine
def interrupt_during_work():
    # this is a good example, all cleanup
    # finishes successfully

    t = asyncio.async(foo())

    try:
        yield from asyncio.wait_for(t, 0.5)
    except asyncio.TimeoutError:
        pass
    else:
        assert False, "should've been timed out"

    t.cancel()

    # wait for finish
    try:
        yield from t
    except asyncio.CancelledError:
        pass

@asyncio.coroutine
def interrupt_during_cleanup():
    # here, cleanup is interrupted

    t = asyncio.async(foo())

    try:
        yield from asyncio.wait_for(t, 1.5)
    except asyncio.TimeoutError:
        pass
    else:
        assert False, "should've been timed out"

    t.cancel()

    # wait for finish
    try:
        yield from t
    except asyncio.CancelledError:
        pass

@asyncio.coroutine
def double_cancel():
    # cleanup is interrupted here as well
    t = asyncio.async(foo())

    try:
        yield from asyncio.wait_for(t, 0.5)
    except asyncio.TimeoutError:
        pass
    else:
        assert False, "should've been timed out"

    t.cancel()

    try:
        yield from asyncio.wait_for(t, 0.5)
    except asyncio.TimeoutError:
        pass
    else:
        assert False, "should've been timed out"

    # although double cancel is easy to avoid in
    # this particular example, it might not be so obvious
    # in more complex code
    t.cancel()

    # wait for finish
    try:
        yield from t
    except asyncio.CancelledError:
        pass

@asyncio.coroutine
def comain():
    print("1. Interrupt during work")
    yield from interrupt_during_work()

    print("2. Interrupt during cleanup")
    yield from interrupt_during_cleanup()

    print("3. Double cancel")
    yield from double_cancel()

def main():
    loop = asyncio.get_event_loop()
    task = loop.create_task(comain())
    loop.run_until_complete(task)

if __name__ == "__main__":
    main()
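A note for readers on newer Python versions: the code above uses the legacy pre-3.5 coroutine style. asyncio.async() was renamed to asyncio.ensure_future() and stopped working altogether once async became a reserved keyword (Python 3.7), and the @asyncio.coroutine decorator was removed in Python 3.11. In modern syntax, foo() would look roughly like this (a sketch of the same logic with async def/await):

import asyncio

async def foo():
    """Same coroutine as above, written with async/await."""
    try:
        await asyncio.sleep(1)
    except asyncio.CancelledError:
        print("Interrupted during work")
        raise
    finally:
        print("I need just a couple more seconds to cleanup!")
        try:
            # upload results to the database, whatever
            await asyncio.sleep(1)
        except asyncio.CancelledError:
            print("Interrupted during cleanup :(")
        else:
            print("All cleaned up!")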

Best answer

I ended up writing a simple function that provides, so to speak, a stronger shield.

Unlike asyncio.shield, which protects the callee but still raises CancelledError in the caller, this function suppresses the CancelledError entirely.

The downside is that this function doesn't let you handle the CancelledError afterwards: you won't see whether one ever occurred. Something slightly more sophisticated would be needed for that.

@asyncio.coroutine
def super_shield(arg, *, loop=None):
    arg = asyncio.async(arg)
    while True:
        try:
            return (yield from asyncio.shield(arg, loop=loop))
        except asyncio.CancelledError:
            continue
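With this helper, the blocking cleanup in foo() from the question can be wrapped so that further cancellations no longer reach it. A rough usage sketch (the upload_results() coroutine is a hypothetical stand-in for the real cleanup work):

@asyncio.coroutine
def upload_results():
    # hypothetical stand-in for the blocking cleanup
    yield from asyncio.sleep(1)

@asyncio.coroutine
def foo():
    try:
        yield from asyncio.sleep(1)
    except asyncio.CancelledError:
        print("Interrupted during work")
        raise
    finally:
        print("I need just a couple more seconds to cleanup!")
        # repeated cancel() calls are absorbed by super_shield,
        # so the cleanup runs to completion
        yield from super_shield(upload_results())
        print("All cleaned up!")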

Regarding python - asyncio: prevent task from being cancelled twice, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/39688070/
