
python - Flask+Celery as a daemon


I'm using Python Flask and want to add Celery. The distributed tasks work fine, but now I want to configure Celery to run as a daemon, following the daemonization instructions in the Celery documentation. However, the worker container exits with celery_worker_1 exited with code 0.

Project structure:

celery
|-- flask-app
| `-- app.py
|-- worker
| |-- celeryd
| |-- celeryd.conf
| |-- Dockerfile
| |-- start.sh
| `-- tasks.py
`-- docker-compose.yml
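The docker-compose.yml in the tree is not shown. A minimal sketch of what it might contain, assuming a redis service plus the two build contexts above; the service names, the redis image tag, and the exposed port are assumptions (the error celery_worker_1 exited with code 0 does suggest a compose service named worker inside a project named celery):

# hypothetical docker-compose.yml, reconstructed from the project
# tree and the error message; everything here is an assumption
version: "3"
services:
  redis:
    image: redis:5
  flask-app:
    build: ./flask-app    # assumes flask-app has its own Dockerfile (not shown)
    ports:
      - "5000:5000"
    depends_on:
      - redis
  worker:
    build: ./worker
    depends_on:
      - redis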

flask-app/app.py:

from flask import Flask
from flask_restful import Api, Resource

from celery import Celery

celery = Celery(
    'tasks',
    broker='redis://redis:6379',
    backend='redis://redis:6379'
)

app = Flask(__name__)
api = Api(app)

class add_zahl(Resource):
    def get(self):
        zahl = 54
        task = celery.send_task('mytasks.add', args=[zahl])

        return {'message': f"Prozess {task.id} gestartet, input {zahl}"}, 200

api.add_resource(add_zahl, "/add")

if __name__ == '__main__':
    app.run(host="0.0.0.0", debug=True)
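celery.send_task only returns an AsyncResult handle, so the computed value has to be fetched from the result backend separately. A minimal sketch of a polling endpoint on the same app (the route and resource name are hypothetical additions, not part of the original code):

class task_status(Resource):
    def get(self, task_id):
        # look the task up by id on the shared Redis result backend
        task = celery.AsyncResult(task_id)
        if task.ready():
            return {'state': task.state, 'result': task.get()}, 200
        # still pending or running
        return {'state': task.state}, 202

api.add_resource(task_status, "/status/<string:task_id>")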

worker/tasks.py:

from celery import Celery
import requests
import time
import os
from dotenv import load_dotenv

basedir = os.path.abspath(os.path.dirname(__file__))
load_dotenv(os.path.join(basedir, '.env'))

celery = Celery(
    'tasks',
    broker='redis://redis:6379',
    backend='redis://redis:6379'
)

@celery.task(name='mytasks.add')
def send_simple_message(zahl):
    time.sleep(5)
    result = zahl * zahl
    return result

if __name__ == '__main__':
    celery.start()

Dockerfile:

FROM python:3.6-slim

RUN mkdir /worker
COPY requirements.txt /worker/
RUN pip install --no-cache-dir -r /worker/requirements.txt

COPY . /worker/

COPY celeryd /etc/init.d/celeryd
RUN chmod +x /etc/init.d/celeryd

COPY celeryd.conf /etc/default/celeryd
RUN chown root:root /etc/default/celeryd

RUN useradd -N -M --system -s /bin/bash celery
RUN addgroup celery
RUN adduser celery celery

RUN mkdir -p /var/run/celery
RUN mkdir -p /var/log/celery
RUN chown -R celery:celery /var/run/celery
RUN chown -R celery:celery /var/log/celery

RUN chmod u+x /worker/start.sh
ENTRYPOINT /worker/start.sh

celeryd.conf:

CELERYD_NODES="worker1"
CELERY_BIN="/worker/tasks"
CELERY_APP="worker.tasks:celery"
CELERYD_LOG_FILE="/var/log/celery/%n%I.log"
CELERYD_PID_FILE="/var/run/celery/%n.pid"
CELERYD_USER="celery"
CELERYD_GROUP="celery"
CELERY_CREATE_DIRS=1

start.sh:

#!/bin/sh
exec celery multi start worker1 -A worker --app=worker.tasks:celery

celeryd is the generic init script from the Celery repo: https://github.com/celery/celery/blob/3.1/extra/generic-init.d/celeryd

docker inspect output:

docker inspect 50fbe00fdc5de56dafaf4268f24baed3b47c8519a689f0733e41ec7fdbc86765

[
    {
        "Id": "50fbe00fdc5de56dafaf4268f24baed3b47c8519a689f0733e41ec7fdbc86765",
        "Created": "2019-02-21T23:20:15.017156266Z",
        "Path": "/bin/sh",
        "Args": [
            "-c",
            "/worker/start.sh"
        ],
        "State": {
            "Status": "exited",
            "Running": false,
            "Paused": false,
            "Restarting": false,
            "OOMKilled": false,
            "Dead": false,
            "Pid": 0,
            "ExitCode": 0,
            "Error": "",
            "StartedAt": "2019-02-21T23:20:40.375566345Z",
            "FinishedAt": "2019-02-21T23:20:41.162618701Z"
        },

Sorry for "spamming" so much code, but I cannot solve this problem.

EDIT:

I added the CMD line mentioned in the answer, and now the worker does not start at all. I'm still trying to find a solution for this. Any hints? Thanks, everyone.

FROM python:3.6-slim

RUN mkdir /worker
COPY requirements.txt /worker/
RUN pip install --no-cache-dir -r /worker/requirements.txt

COPY . /worker/

COPY celeryd /etc/init.d/celeryd
RUN chmod +x /etc/init.d/celeryd

COPY celeryd.conf /etc/default/celeryd
RUN chown -R root:root /etc/default/celeryd

RUN useradd -N -M --system -s /bin/bash celery
RUN addgroup celery
RUN adduser celery celery

RUN mkdir -p /var/run/celery
RUN mkdir -p /var/log/celery
RUN chown -R celery:celery /var/run/celery
RUN chown -R celery:celery /var/log/celery

CMD ["celery", "worker", "--app=worker.tasks:celery"]

Best Answer

Whenever a Docker container's entrypoint exits (or, if you have no entrypoint, its main command), the container exits. The corollary is that the main process in a container cannot be a command like celery multi, which spawns some background work and returns immediately; you need a command that runs in the foreground, such as celery worker.

I would replace the last lines of your Dockerfile with:

CMD ["celery", "worker", "--app=worker.tasks:celery"]

Keeping the entrypoint script and changing it to run the equivalent foreground celery worker command should also do the job.
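For example, start.sh could become something like this (a sketch; the --loglevel flag is an addition):

#!/bin/sh
# run a single worker in the foreground; exec makes it PID 1,
# so the container lives exactly as long as the worker does
exec celery worker --app=worker.tasks:celery --loglevel=info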

Regarding python - Flask+Celery as a daemon, there is a similar question on Stack Overflow: https://stackoverflow.com/questions/54818082/
