
python - Celery multi with supervisord

Reposted · Author: 太空宇宙 · Updated: 2023-11-03 12:20:44

I am trying to run celery multi under supervisord (3.2.2).

It seems supervisord cannot handle it. A single celery worker works fine.

Here is the output from celery multi, followed by my supervisord configuration.

celery multi v3.1.20 (Cipater)
> Starting nodes...
> celery1@parzee-dev-app-sfo1: OK
Stale pidfile exists. Removing it.
> celery2@parzee-dev-app-sfo1: OK
Stale pidfile exists. Removing it.

celeryd.conf

; ==================================
; celery worker supervisor example
; ==================================

[program:celery]
; Set full path to celery program if using virtualenv
command=/usr/local/src/imbue/application/imbue/supervisorctl/celeryd/celeryd.sh
process_name = %(program_name)s%(process_num)d@%(host_node_name)s
directory=/usr/local/src/imbue/application/imbue/conf/
numprocs=2
stderr_logfile=/usr/local/src/imbue/application/imbue/log/celeryd.err
logfile=/usr/local/src/imbue/application/imbue/log/celeryd.log
stdout_logfile_backups = 10
stderr_logfile_backups = 10
stdout_logfile_maxbytes = 50MB
stderr_logfile_maxbytes = 50MB
autostart=true
autorestart=false
startsecs=10

I used the following supervisord variables to mimic the way I start celery:

  • %(program_name)s
  • %(process_num)d
  • @
  • %(host_node_name)s
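For reference, supervisord fills the `process_name` template with ordinary Python %-style string formatting, so the resulting names can be checked outside supervisord. A minimal sketch (the host name value is taken from the logs above):

```python
# supervisord expands process_name with Python %-style formatting
template = "%(program_name)s%(process_num)d@%(host_node_name)s"

values = {
    "program_name": "celery",
    "process_num": 1,                        # counts up from numprocs_start (default 0)
    "host_node_name": "parzee-dev-app-sfo1",
}

print(template % values)  # celery1@parzee-dev-app-sfo1
```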

supervisorctl output:

supervisorctl 
celery:celery1@parzee-dev-app-sfo1 FATAL Exited too quickly (process log may have details)
celery:celery2@parzee-dev-app-sfo1 FATAL Exited too quickly (process log may have details)

I tried changing the value in /usr/local/lib/python2.7/dist-packages/supervisor/options.py from 0 to 1:

numprocs_start = integer(get(section, 'numprocs_start', 1))
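`numprocs_start` only shifts the starting index substituted into `%(process_num)d`, which is presumably why it was changed here: celery multi numbers its nodes from 1, while supervisord numbers processes from 0. A sketch of the effect:

```python
# With numprocs=2, numprocs_start=1 yields process numbers 1 and 2
numprocs, numprocs_start = 2, 1
names = [
    "celery%d@parzee-dev-app-sfo1" % (numprocs_start + i)
    for i in range(numprocs)
]
print(names)  # ['celery1@parzee-dev-app-sfo1', 'celery2@parzee-dev-app-sfo1']
```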

I still get:

celery:celery1@parzee-dev-app-sfo1   FATAL     Exited too quickly (process log may have details)
celery:celery2@parzee-dev-app-sfo1 EXITED May 14 12:47 AM

Celery is starting, but supervisord is not tracking it:

root@parzee-dev-app-sfo1:/etc/supervisor#

ps -ef | grep celery
root 2728 1 1 00:46 ? 00:00:02 [celeryd: celery1@parzee-dev-app-sfo1:MainProcess] -active- (worker -c 16 -n celery1@parzee-dev-app-sfo1 --loglevel=DEBUG -P processes --logfile=/usr/local/src/imbue/application/imbue/log/celeryd.log --pidfile=/usr/local/src/imbue/application/imbue/log/1.pid)
root 2973 1 1 00:46 ? 00:00:02 [celeryd: celery2@parzee-dev-app-sfo1:MainProcess] -active- (worker -c 16 -n celery2@parzee-dev-app-sfo1 --loglevel=DEBUG -P processes --logfile=/usr/local/src/imbue/application/imbue/log/celeryd.log --pidfile=/usr/local/src/imbue/application/imbue/log/2.pid)

celeryd.sh

source ~/.profile
CELERY_LOGFILE=/usr/local/src/imbue/application/imbue/log/celeryd.log
CELERYD_OPTS=" --loglevel=DEBUG"
CELERY_WORKERS=2
CELERY_PROCESSES=16
cd /usr/local/src/imbue/application/imbue/conf
exec celery multi start $CELERY_WORKERS -P processes -c $CELERY_PROCESSES -n celeryd@{HOSTNAME} -f $CELERY_LOGFILE $CELERYD_OPTS

Related: Running celeryd_multi with supervisor; How to use Supervisor + Django + Celery with multiple Queues and Workers?

Best Answer

Since supervisord monitors (starts/stops/restarts) its child processes, those processes must run in the foreground (they must not daemonize).

celery multi daemonizes itself, so it cannot be run directly under supervisord.

You can instead create a separate program entry for each worker and combine them into a group:

[program:worker1]
command=celery worker -l info -n worker1

[program:worker2]
command=celery worker -l info -n worker2

[group:workers]
programs=worker1,worker2

You can also write a shell script that keeps a daemonized process in the foreground, like this:

#!/usr/bin/env bash
set -eu

pidfile="/var/run/your-daemon.pid"
command=/usr/sbin/your-daemon

# Proxy signals: forward SIGINT/SIGTERM from supervisord to the daemon
kill_app() {
    kill "$(cat "$pidfile")"
    exit 0  # clean exit
}
trap kill_app SIGINT SIGTERM

# Launch the daemon; it forks into the background and writes the pidfile
$command

sleep 2

# Stay in the foreground while the pidfile and the process both exist
while [ -f "$pidfile" ] && kill -0 "$(cat "$pidfile")"; do
    sleep 0.5
done
exit 1  # unexpected exit
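The wait loop at the end of that wrapper boils down to one check: does the pidfile still name a live process? A minimal Python sketch of the same test (the helper name and the pidfile path are illustrative, not part of the original script; `os.kill(pid, 0)` is the same liveness probe as `kill -0`):

```python
import os


def pid_alive(pidfile):
    """True if pidfile holds the PID of a live process (same test as `kill -0`)."""
    try:
        with open(pidfile) as f:
            pid = int(f.read().strip())
        os.kill(pid, 0)  # signal 0 checks existence without delivering anything
        return True
    except (OSError, ValueError):
        return False


# Demo: our own PID is certainly alive while we run
with open("/tmp/demo.pid", "w") as f:
    f.write(str(os.getpid()))
print(pid_alive("/tmp/demo.pid"))   # True
os.remove("/tmp/demo.pid")
print(pid_alive("/tmp/demo.pid"))   # False: pidfile is gone
```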

For python - Celery multi with supervisord, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/37222857/
