
Docker Compose doesn't persist data in volumes

Reposted · Author: 行者123 · Updated: 2023-12-02 17:59:28

I'm trying to run Kafka with docker-compose. I have this yml file:

version: '3'

services:
  zookeeper:
    image: ${REPOSITORY}/cp-zookeeper:${TAG}
    hostname: zookeeper
    container_name: zookeeper
    ports:
      - "2181:2181"
    environment:
      ZOOKEEPER_CLIENT_PORT: 2181
      ZOOKEEPER_TICK_TIME: 2000
    volumes:
      - ./zoo:/var/lib/zookeeper

  broker:
    image: ${REPOSITORY}/cp-kafka:${TAG}
    hostname: broker
    container_name: broker
    depends_on:
      - zookeeper
    ports:
      - "29092:29092"
      - "9092:9092"
    environment:
      KAFKA_BROKER_ID: 1
      KAFKA_ZOOKEEPER_CONNECT: 'zookeeper:2181'
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://broker:29092,PLAINTEXT_HOST://localhost:9092
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
      KAFKA_GROUP_INITIAL_REBALANCE_DELAY_MS: 0
    volumes:
      - ./broker:/var/lib/kafka

I ran this command in the directory containing the docker-compose.yml file:
docker-compose up -d

Afterwards, the folders ./broker and ./zoo appeared in my directory. Their internal structure resembles the one inside the containers (./zoo/data, ./broker/data), but the directories contain no files.

I tried
docker-compose exec broker ls /var/lib/kafka/data

and there I do see the folders and files for the default topics.

Best Answer

This comes down to the interaction between volumes (as declared in the image's Dockerfile) and the bind mounts you add as part of Docker Compose.

If you check the Dockerfile for each image, you'll see that it declares volumes; you can also see them by inspecting the running container. Here is what it looks like with your configuration:

➜ docker inspect zookeeper|jq '.[].Mounts[] | .Type ,.Destination'
"volume"
"/etc/zookeeper/secrets"
"bind"
"/var/lib/zookeeper"
"volume"
"/var/lib/zookeeper/log"
"volume"
"/var/lib/zookeeper/data"

You'll notice that there are two volumes for ZK-specific data paths (declared in the image itself, i.e. in the Dockerfile):
  • /var/lib/zookeeper/log
  • /var/lib/zookeeper/data

In addition, there is the bind mount from Docker Compose:
  • /var/lib/zookeeper/

These conflict, which explains the problem you're seeing.

A similar pattern exists for the broker.
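To make the conflict concrete, here is a toy pure-shell check (no Docker needed; this snippet is illustrative and not part of the original answer) that tests whether the image-declared volume paths fall inside the Compose bind mount. Any path that does gets a fresh anonymous volume mounted over it, which is why the host directory stays empty:

```shell
#!/bin/sh
# The Compose file bind-mounts this container path...
bind_mount="/var/lib/zookeeper"
# ...but the image's Dockerfile declares volumes at paths *inside* it.
shadowed=""
for vol in /var/lib/zookeeper/data /var/lib/zookeeper/log; do
  case "$vol" in
    "$bind_mount"/*)
      # This volume path sits under the bind mount, so Docker mounts a
      # separate volume over it, shadowing the bind-mounted directory.
      echo "$vol is mounted over the bind mount at $bind_mount"
      shadowed="$shadowed $vol"
      ;;
  esac
done
```

Both image-declared paths are reported as shadowed, matching the inspect output above.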

So in short, you need to mount a local host directory for each specific volume declared in the image:

---
version: '3'

services:
  zookeeper:
    image: confluentinc/cp-zookeeper:5.4.1
    hostname: zookeeper
    container_name: zookeeper
    ports:
      - "2181:2181"
    environment:
      ZOOKEEPER_CLIENT_PORT: 2181
      ZOOKEEPER_TICK_TIME: 2000
    volumes:
      - ./zoo/data:/var/lib/zookeeper/data
      - ./zoo/log:/var/lib/zookeeper/log

  broker:
    image: confluentinc/cp-kafka:5.4.1
    hostname: broker
    container_name: broker
    depends_on:
      - zookeeper
    ports:
      - "9092:9092"
    environment:
      KAFKA_BROKER_ID: 1
      KAFKA_ZOOKEEPER_CONNECT: 'zookeeper:2181'
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://broker:29092,PLAINTEXT_HOST://localhost:9092
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
      KAFKA_GROUP_INITIAL_REBALANCE_DELAY_MS: 0
    volumes:
      - ./broker/data:/var/lib/kafka/data

With that done, we can see that there are no conflicts on the container paths:

➜ docker inspect zookeeper|jq '.[].Mounts '
[
  {
    "Type": "bind",
    "Source": "/private/tmp/zoo/log",
    "Destination": "/var/lib/zookeeper/log",
    "Mode": "rw",
    "RW": true,
    "Propagation": "rprivate"
  },
  {
    "Type": "bind",
    "Source": "/private/tmp/zoo/data",
    "Destination": "/var/lib/zookeeper/data",
    "Mode": "rw",
    "RW": true,
    "Propagation": "rprivate"
  },
  {
    "Type": "volume",
    "Name": "6cbb584e0d9aa2f119869b264544f587909d9f417fc553a7bb2954dd28ecb8ea",
    "Source": "/var/lib/docker/volumes/6cbb584e0d9aa2f119869b264544f587909d9f417fc553a7bb2954dd28ecb8ea/_data",
    "Destination": "/etc/zookeeper/secrets",
    "Driver": "local",
    "Mode": "",
    "RW": true,
    "Propagation": ""
  }
]

And the data as seen from the containers:

➜ docker exec zookeeper ls -l /var/lib/zookeeper/data /var/lib/zookeeper/log
/var/lib/zookeeper/data:
total 0
drwxr-xr-x 3 root root 96 Apr 3 08:59 version-2

/var/lib/zookeeper/log:
total 0
drwxr-xr-x 3 root root 96 Apr 3 08:59 version-2

➜ docker exec broker ls -l /var/lib/kafka/data
total 16
drwxr-xr-x 6 root root 192 Apr 3 08:59 __confluent.support.metrics-0
-rw-r--r-- 1 root root 0 Apr 3 08:59 cleaner-offset-checkpoint
-rw-r--r-- 1 root root 4 Apr 3 09:01 log-start-offset-checkpoint
-rw-r--r-- 1 root root 88 Apr 3 08:59 meta.properties
-rw-r--r-- 1 root root 36 Apr 3 09:01 recovery-point-offset-checkpoint
-rw-r--r-- 1 root root 36 Apr 3 09:02 replication-offset-checkpoint
-rw-r--r-- 1 root root 0 Apr 3 08:30 wibble

And stored on the local host:

➜ ls -l broker/data zoo/data zoo/log
broker/data:
total 32
drwxr-xr-x 6 rmoff wheel 192 3 Apr 09:59 __confluent.support.metrics-0
-rw-r--r-- 1 rmoff wheel 0 3 Apr 09:59 cleaner-offset-checkpoint
-rw-r--r-- 1 rmoff wheel 4 3 Apr 10:00 log-start-offset-checkpoint
-rw-r--r-- 1 rmoff wheel 88 3 Apr 09:59 meta.properties
-rw-r--r-- 1 rmoff wheel 36 3 Apr 10:00 recovery-point-offset-checkpoint
-rw-r--r-- 1 rmoff wheel 36 3 Apr 10:01 replication-offset-checkpoint
-rw-r--r-- 1 rmoff wheel 0 3 Apr 09:30 wibble

zoo/data:
total 0
drwxr-xr-x 3 rmoff wheel 96 3 Apr 09:59 version-2

zoo/log:
total 0
drwxr-xr-x 3 rmoff wheel 96 3 Apr 09:59 version-2
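As a side note (not part of the original answer): if you don't actually need the data on the host filesystem, named volumes managed by Docker avoid the path conflict in the same way, because each one targets exactly the paths the image declares. A minimal sketch of that alternative:

```yaml
# Illustrative fragment: named volumes instead of host bind mounts.
# Docker stores the data under its own volumes directory, and the
# mounts line up with the image-declared paths, so nothing is shadowed.
services:
  zookeeper:
    volumes:
      - zk-data:/var/lib/zookeeper/data
      - zk-log:/var/lib/zookeeper/log

volumes:
  zk-data:
  zk-log:
```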

See also Data Volumes for Kafka and ZooKeeper.

Regarding "Docker Compose doesn't persist data in volumes", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/61002881/
