asynchronous - How to fix the celery.backends.rpc.BacklogLimitExceeded error

Reposted. Author: 行者123. Updated: 2023-12-01 18:13:05

I have been using Celery with Flask, and after running for a long time my Celery worker started raising the celery.backends.rpc.BacklogLimitExceeded error. My configuration values are as follows:

CELERY_BROKER_URL = 'amqp://'
CELERY_TRACK_STARTED = True
CELERY_RESULT_BACKEND = 'rpc'
CELERY_RESULT_PERSISTENT = False

Can anyone explain why this error occurs and how to fix it? I have checked the documentation here, which does not offer a solution to this problem.

Best answer

Possibly the process consuming results cannot keep up with the processes producing them? That lets unconsumed results pile up; that pile is the "backlog". When the backlog grows past a fixed limit, Celery raises BacklogLimitExceeded.

Could you try adding more consumers to process the results? Or setting a shorter value for the result_expires setting?
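A minimal sketch of the suggestion above, using the same Flask-style uppercase keys as the question's config. The expiry value of 3600 seconds is illustrative (the Celery default is one day), and `CELERY_TASK_RESULT_EXPIRES` is the old-style uppercase name for `result_expires`:

```python
# Same config as in the question, plus a shorter result lifetime so
# unconsumed results are purged before the backlog limit is hit.
CELERY_CONFIG = {
    'CELERY_BROKER_URL': 'amqp://',
    'CELERY_TRACK_STARTED': True,
    'CELERY_RESULT_BACKEND': 'rpc',
    'CELERY_RESULT_PERSISTENT': False,
    # Illustrative value: expire results after 1 hour (default is 1 day).
    'CELERY_TASK_RESULT_EXPIRES': 3600,
}
```

Whether an hour is short enough depends on how quickly your consumers fetch results; the point is that expired results stop accumulating in the backlog.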

The discussion on this closed celery issue may help:

Seems like the database backends would be a much better fit for this purpose. The amqp/RPC result backends need to send one message per state update, while for the database based backends (redis, sqla, django, mongodb, cache, etc) every new state update will overwrite the old one.

The "amqp" result backend is not recommended at all since it creates one queue per task, which is required to mimic the database based backends where multiple processes can retrieve the result.

The RPC result backend is preferred for RPC-style calls where only the process that initiated the task can retrieve the result.

But if you want persistent multi-consumer result you should store them in a database.

Using rabbitmq as a broker and redis for results is a great combination, but using an SQL database for results works well too.
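The combination recommended in the quote above can be expressed in the same config style as the question. This is a hedged sketch: the connection URLs are placeholders for local defaults, not values from the original post:

```python
# RabbitMQ as the message broker, Redis as the result store.
# With a key-value backend like Redis, each state update overwrites
# the previous one instead of queuing a new message per update.
CELERY_CONFIG = {
    'CELERY_BROKER_URL': 'amqp://guest@localhost//',
    'CELERY_RESULT_BACKEND': 'redis://localhost:6379/0',
}
```

Unlike the rpc backend, a Redis result backend also lets multiple processes retrieve the same result, which matches the "persistent multi-consumer" case described above.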

Regarding "asynchronous - How to fix the celery.backends.rpc.BacklogLimitExceeded error", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/57261296/
