
python - Celery doesn't work on Docker

Reposted · Author: 行者123 · Updated: 2023-12-02 18:41:55

I'm running into a problem using Celery in Docker.

I configured two Docker containers, web_server and celery_worker. celery_worker includes rabbitmq-server, and web_server calls tasks on the celery worker.

I set up the same thing in a VM via Vagrant and it works there, but in Docker I get the error message below.

    Traceback (most recent call last):
      File "/web_server/test/test_v1_data_description.py", line 58, in test_create_description
        headers=self.get_basic_header()

      .........
      .........

      File "../task_runner/__init__.py", line 31, in run_describe_task
        kwargs={})
      File "/usr/local/lib/python3.4/dist-packages/celery/app/base.py", line 349, in send_task
        self.backend.on_task_call(P, task_id)
      File "/usr/local/lib/python3.4/dist-packages/celery/backends/rpc.py", line 32, in on_task_call
        maybe_declare(self.binding(producer.channel), retry=True)
      File "/usr/local/lib/python3.4/dist-packages/kombu/messaging.py", line 194, in _get_channel
        channel = self._channel = channel()
      File "/usr/local/lib/python3.4/dist-packages/kombu/utils/__init__.py", line 425, in __call__
        value = self.__value__ = self.__contract__()
      File "/usr/local/lib/python3.4/dist-packages/kombu/messaging.py", line 209, in <lambda>
        channel = ChannelPromise(lambda: connection.default_channel)
      File "/usr/local/lib/python3.4/dist-packages/kombu/connection.py", line 756, in default_channel
        self.connection
      File "/usr/local/lib/python3.4/dist-packages/kombu/connection.py", line 741, in connection
        self._connection = self._establish_connection()
      File "/usr/local/lib/python3.4/dist-packages/kombu/connection.py", line 696, in _establish_connection
        conn = self.transport.establish_connection()
      File "/usr/local/lib/python3.4/dist-packages/kombu/transport/pyamqp.py", line 116, in establish_connection
        conn = self.Connection(**opts)
      File "/usr/local/lib/python3.4/dist-packages/amqp/connection.py", line 165, in __init__
        self.transport = self.Transport(host, connect_timeout, ssl)
      File "/usr/local/lib/python3.4/dist-packages/amqp/connection.py", line 186, in Transport
        return create_transport(host, connect_timeout, ssl)
      File "/usr/local/lib/python3.4/dist-packages/amqp/transport.py", line 299, in create_transport
        return TCPTransport(host, connect_timeout)
      File "/usr/local/lib/python3.4/dist-packages/amqp/transport.py", line 95, in __init__
        raise socket.error(last_err)
    nose.proxy.OSError: [Errno 111] Connection refused

Here are the Dockerfiles for the two containers.

Dockerfile for web_server:
FROM ubuntu:14.04
MAINTAINER Jinho Yoo

# Update packages.
RUN apt-get clean
RUN apt-get update

# Create work folder.
RUN mkdir /web_server
WORKDIR /web_server

# Setup web server and celery.
ADD ./install_web_server_conf.sh ./install_web_server_conf.sh
RUN chmod +x ./install_web_server_conf.sh
RUN ./install_web_server_conf.sh

# Reduce docker size.
RUN rm -rf /var/lib/apt/lists/*

# Run web server.
CMD ["python3","web_server.py"]

# Expose port.
EXPOSE 5000

Dockerfile for celery_worker:
FROM ubuntu:14.04
MAINTAINER Jinho Yoo

# Update packages.
RUN apt-get clean
RUN apt-get update
RUN apt-get install -y wget build-essential ca-certificates-java

# Setup python environment.
ADD ./bootstrap/install_python_env.sh ./install_python_env.sh
RUN chmod +x ./install_python_env.sh
RUN ./install_python_env.sh

# Install Python libraries including celery.
RUN pip3 install -r ./core/requirements.txt

# Add mlcore user for Celery worker
RUN useradd --uid 1234 -M mlcore
RUN usermod -L mlcore

# Celery configuration for supervisor
ADD celeryd.conf /etc/supervisor/conf.d/celeryd.conf
RUN mkdir -p /var/log/celery

# Reduce docker size.
RUN rm -rf /var/lib/apt/lists/*

# Run celery server by supervisor.
CMD ["supervisord", "-c", "/ml_core/supervisord.conf"]

# Expose port.
EXPOSE 8080
EXPOSE 8081
EXPOSE 4040
EXPOSE 7070
EXPOSE 5672
EXPOSE 5671
EXPOSE 15672

Best Answer

Your Docker containers can't talk to each other out of the box. My guess is that your connection string looks something like localhost:<port>?
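The symptom is easy to reproduce with a plain socket. This minimal sketch (my illustration, not from the post) attempts the same TCP connect that amqp's TCPTransport performs; inside the web_server container nothing is listening on localhost:5672, so the connect is refused with the same [Errno 111]:

```python
import errno
import socket

def probe_broker(host="127.0.0.1", port=5672):
    """Try the TCP connect the amqp transport makes; return the errno on failure."""
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.settimeout(1)
    try:
        s.connect((host, port))
        return None  # connected: a broker is listening here
    except OSError as e:
        return e.errno  # e.g. errno.ECONNREFUSED (111 on Linux)
    finally:
        s.close()
```

If rabbitmq-server only runs inside the celery_worker container, `probe_broker()` from web_server returns `errno.ECONNREFUSED`, which is exactly what the traceback shows.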

There are a few ways to let your containers communicate:

1: Linking
http://rominirani.com/2015/07/31/docker-tutorial-series-part-8-linking-containers/

Essentially, at run time, Docker adds an entry to your hosts file that points to the internal IP address of the other Docker container on the same private Docker network stack.
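As an illustrative sketch (container names and the link alias are my assumptions, not from the post): if web_server is started with `docker run --link celery_worker:celeryworker ...`, the alias becomes resolvable via the hosts file, and legacy linking also injects environment variables such as `CELERYWORKER_PORT_5672_TCP_ADDR` for each exposed port. The broker URL can then be built from either, instead of localhost (guest/guest are RabbitMQ's default credentials):

```python
import os

# Assumes `--link celery_worker:celeryworker` (hypothetical names).
# Prefer the env var Docker injects; fall back to the hosts-file alias.
host = os.environ.get("CELERYWORKER_PORT_5672_TCP_ADDR", "celeryworker")
broker_url = "amqp://guest:guest@{}:5672//".format(host)
```

Passing `broker_url` to the Celery app on the web_server side replaces the failing localhost connection string.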

2: docker run --net=host:
This binds your container to your host's network stack, so all containers appear to run from localhost and can be reached that way. Be aware that if you run multiple containers that bind to the same external port, you can run into port conflicts.

3: External HAProxy:
You can bind a DNS entry to an HAProxy and configure the proxy to redirect traffic whose host header matches the DNS entry to the port your container is running on. Any call from another container then "leaves" the private Docker network stack, hits the DNS server, and comes back through the HAProxy, which points at the correct container.

Regarding python - Celery doesn't work on Docker, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/35594810/
