
Django RQ rqworker hangs indefinitely

Reposted. Author: 可可西里. Updated: 2023-11-01 11:24:02

This week my integration tests stopped working. I traced it to a django-rq worker that stalls indefinitely. My output:

$: RQ worker 'rq:worker:47e0aaf280be.13' started, version 0.12.0
$: *** Listening on default...
$: Cleaning registries for queue: default
$: default: myapp.engine.rules.process_event(<myapp.engine.event.Event object at 0x7f34f1ce50f0>) (a1e66a46-1a9d-4f52-be6f-6f4529dd2480)

This is the point where it hangs; I have to keyboard-interrupt it.

The code has not changed. To be sure, I went back to the master branch, checked it out, and re-ran the integration tests, but they fail too.

How do I go about debugging redis or rq, starting from a test case in Python, to understand what might be happening? Is there a way to inspect the actual queue records from Python? The Redis queue only exists while the test is running, and since the test is stuck, I can look at the queue with redis-cli from inside the Docker container running the Redis service.
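For reference, from inside that container the queue and job records can be inspected with a few redis-cli commands; the key names below follow RQ's usual naming conventions, and `<job-id>` is a placeholder for an actual job ID:

```shell
# Inspect RQ's state directly in Redis (run inside the Redis container).
redis-cli KEYS 'rq:*'                    # all RQ-related keys (queues, workers, jobs)
redis-cli LRANGE rq:queue:default 0 -1   # job IDs still waiting on the default queue
redis-cli HGETALL rq:job:<job-id>        # the full hash for one job (status, origin, data, ...)
redis-cli MONITOR                        # live stream of every command Redis receives
```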

The debugging approach I'm using at the moment:

import time

from rq import Queue
from redis import Redis
from django_rq import get_worker
...

def test_motion_alarm(self):
    motion_sensor_data = {"motion_detected": 1}
    post_alarm(
        self.live_server_url,
        self.location,
        self.sensor_device_id,
        "ALARM_MOTIONDETECTED",
        motion_sensor_data
    )

    # Inspect the queue before draining it with a burst worker
    redis_conn = Redis('my_queue')
    q = Queue(connection=redis_conn)
    print(len(q))
    queued_job_ids = q.job_ids
    queued_jobs = q.jobs
    logger.debug('RQ info: \njob IDs: {}, \njobs: {}'.format(queued_job_ids, queued_jobs))
    get_worker().work(burst=True)

    time.sleep(1)

    self.assertTrue(db.event_exists_at_location(
        db.get_location_by_motion_detected(self.location_id),
        "ALARM_MOTIONDETECTED"))

which produces this debug output:

$ DEBUG [myapi.tests.integration.test_rules:436] RQ info: 
job IDs: ['bef879c4-832d-431d-97e7-9eec9f4bf5d7']
jobs: [Job('bef879c4-832d-431d-97e7-9eec9f4bf5d7', enqueued_at=datetime.datetime(2018, 12, 6, 0, 10, 14, 829488))]
$ RQ worker 'rq:worker:54f6054e7aa5.7' started, version 0.12.0
$ *** Listening on default...
$ Cleaning registries for queue: default
$ default: myapi.engine.rules.process_event(<myapi.engine.event.Event object at 0x7fbf204e8c50>) (bef879c4-832d-431d-97e7-9eec9f4bf5d7)

In the queue container, with a monitor process running against the queue, every so often I see a new batch of monitor responses:

1544110882.343826 [0 172.19.0.4:38905] "EXPIRE" "rq:worker:ac50518f1c5e.7" "35"
1544110882.344304 [0 172.19.0.4:38905] "HSET" "rq:worker:ac50518f1c5e.7" "last_heartbeat" "2018-12-06T15:41:22.344170Z"
1544110882.968846 [0 172.19.0.4:38910] "EXPIRE" "rq:worker:ac50518f1c5e.12" "35"
1544110882.969651 [0 172.19.0.4:38910] "HSET" "rq:worker:ac50518f1c5e.12" "last_heartbeat" "2018-12-06T15:41:22.969181Z"
1544110884.122917 [0 172.19.0.4:38919] "EXPIRE" "rq:worker:ac50518f1c5e.13" "35"
1544110884.124966 [0 172.19.0.4:38919] "HSET" "rq:worker:ac50518f1c5e.13" "last_heartbeat" "2018-12-06T15:41:24.124809Z"
1544110884.708910 [0 172.19.0.4:38925] "EXPIRE" "rq:worker:ac50518f1c5e.14" "35"
1544110884.710736 [0 172.19.0.4:38925] "HSET" "rq:worker:ac50518f1c5e.14" "last_heartbeat" "2018-12-06T15:41:24.710599Z"
1544110885.415111 [0 172.19.0.4:38930] "EXPIRE" "rq:worker:ac50518f1c5e.15" "35"
1544110885.417279 [0 172.19.0.4:38930] "HSET" "rq:worker:ac50518f1c5e.15" "last_heartbeat" "2018-12-06T15:41:25.417155Z"
1544110886.028965 [0 172.19.0.4:38935] "EXPIRE" "rq:worker:ac50518f1c5e.16" "35"
1544110886.030002 [0 172.19.0.4:38935] "HSET" "rq:worker:ac50518f1c5e.16" "last_heartbeat" "2018-12-06T15:41:26.029817Z"
1544110886.700132 [0 172.19.0.4:38940] "EXPIRE" "rq:worker:ac50518f1c5e.17" "35"
1544110886.701861 [0 172.19.0.4:38940] "HSET" "rq:worker:ac50518f1c5e.17" "last_heartbeat" "2018-12-06T15:41:26.701716Z"
1544110887.359702 [0 172.19.0.4:38945] "EXPIRE" "rq:worker:ac50518f1c5e.18" "35"
1544110887.361642 [0 172.19.0.4:38945] "HSET" "rq:worker:ac50518f1c5e.18" "last_heartbeat" "2018-12-06T15:41:27.361481Z"
1544110887.966641 [0 172.19.0.4:38950] "EXPIRE" "rq:worker:ac50518f1c5e.19" "35"
1544110887.967931 [0 172.19.0.4:38950] "HSET" "rq:worker:ac50518f1c5e.19" "last_heartbeat" "2018-12-06T15:41:27.967760Z"
1544110888.595785 [0 172.19.0.4:38955] "EXPIRE" "rq:worker:ac50518f1c5e.20" "35"
1544110888.596962 [0 172.19.0.4:38955] "HSET" "rq:worker:ac50518f1c5e.20" "last_heartbeat" "2018-12-06T15:41:28.596799Z"
1544110889.199269 [0 172.19.0.4:38960] "EXPIRE" "rq:worker:ac50518f1c5e.21" "35"
1544110889.200416 [0 172.19.0.4:38960] "HSET" "rq:worker:ac50518f1c5e.21" "last_heartbeat" "2018-12-06T15:41:29.200265Z"
1544110889.783128 [0 172.19.0.4:38965] "EXPIRE" "rq:worker:ac50518f1c5e.22" "35"
1544110889.785444 [0 172.19.0.4:38965] "HSET" "rq:worker:ac50518f1c5e.22" "last_heartbeat" "2018-12-06T15:41:29.785158Z"
1544110890.422338 [0 172.19.0.4:38970] "EXPIRE" "rq:worker:ac50518f1c5e.23" "35"
1544110890.423470 [0 172.19.0.4:38970] "HSET" "rq:worker:ac50518f1c5e.23" "last_heartbeat" "2018-12-06T15:41:30.423314Z"

And, oddly (perhaps by design), every time one of these batches goes by, it ends at the :30 or :00 second mark.

So I can confirm that, yes, the item really is in the queue and the job is picked up; why, then, doesn't the job start and run to completion every time?

Best Answer

This appears to be a recently reported defect in the rq_scheduler library, reported here: https://github.com/rq/rq-scheduler/issues/197

There is a PR in the works for it. However, I noticed that we had allowed our redis library to upgrade to 3.0.0 without explicitly pinning a version, and that is ultimately what broke the system.
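One way to catch this class of dependency drift early is to assert the installed library's major version in CI before the test suite runs. A minimal stdlib-only sketch; `check_major` is a hypothetical helper, not part of rq or redis-py:

```python
def check_major(version: str, max_major: int) -> bool:
    """Return True if the version string's major component is <= max_major."""
    major = int(version.split(".")[0])
    return major <= max_major

# redis-py 2.10.6 was the known-good version here; 3.0.0 broke the setup.
assert check_major("2.10.6", 2)
assert not check_major("3.0.0", 2)
```

In practice you would feed it the installed package's `__version__` and fail the build loudly, instead of discovering the drift through a hung worker.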

In the build script, I set the Dockerfile to run RUN pip install redis=="2.10.6", which mitigates the problem for now.

On the topic of Django RQ rqworker hanging indefinitely, there is a similar question on Stack Overflow: https://stackoverflow.com/questions/53640418/
