
docker - Creating a Docker container with a predefined Redis dump

Reposted. Author: 可可西里. Updated: 2023-11-01 11:12:22

I am trying to create a Redis Docker container preloaded with data. My approach is inspired by this question, but for some reason it doesn't work.

Here is my Dockerfile:

FROM redis

EXPOSE 6379

COPY redis-dump.csv /

RUN nohup bash -c "redis-server --appendonly yes" & sleep 5s \
&& cat /redis-dump.csv | redis-cli --pipe \
&& redis-cli shutdown save \
&& ls /data

And my docker-compose.yml:

version: '3.3'

volumes:
  redisdata:

services:
  redis:
    build:
      context: docker/redis
    volumes:
      - redisdata:/data
    ports:
      - "6379:6379"

When I create the container, Redis is empty, and when I look inside the container, /data is empty as well. Yet the build log shows the dump.rdb and appendonly.aof files being written, and the dump file itself is present in the container. If I run cat /redis-dump.csv | redis-cli --pipe inside the container myself, the data does become available in Redis. So why are there no DB files?

Here is the full log from creating the container:

Creating network "restapi_default" with the default driver
Creating volume "restapi_redisdata" with default driver
Building redis
Step 1/4 : FROM redis
---> a55fbf438dfd
Step 2/4 : EXPOSE 6379
---> Using cache
---> 2e6e5609b5b3
Step 3/4 : COPY redis-dump.csv /
---> Using cache
---> 39330e43e72a
Step 4/4 : RUN nohup bash -c "redis-server --appendonly yes" & sleep 5s && cat /redis-dump.csv | redis-cli --pipe && redis-cli shutdown save && ls /data
---> Running in 7e290e6a46ce
7:C 10 May 2019 19:45:32.509 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
7:C 10 May 2019 19:45:32.509 # Redis version=5.0.4, bits=64, commit=00000000, modified=0, pid=7, just started
7:C 10 May 2019 19:45:32.509 # Configuration loaded
7:M 10 May 2019 19:45:32.510 * Running mode=standalone, port=6379.
7:M 10 May 2019 19:45:32.510 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.
7:M 10 May 2019 19:45:32.510 # Server initialized
7:M 10 May 2019 19:45:32.510 # WARNING overcommit_memory is set to 0! Background save may fail under low memory condition. To fix this issue add 'vm.overcommit_memory = 1' to /etc/sysctl.conf and then reboot or run the command 'sysctl vm.overcommit_memory=1' for this to take effect.
7:M 10 May 2019 19:45:32.510 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.
7:M 10 May 2019 19:45:32.511 * Ready to accept connections
All data transferred. Waiting for the last reply...
Last reply received from server.
errors: 0, replies: 67600
7:M 10 May 2019 19:45:37.750 # User requested shutdown...
7:M 10 May 2019 19:45:37.750 * Calling fsync() on the AOF file.
7:M 10 May 2019 19:45:37.920 * Saving the final RDB snapshot before exiting.
7:M 10 May 2019 19:45:37.987 * DB saved on disk
7:M 10 May 2019 19:45:37.987 # Redis is now ready to exit, bye bye...
appendonly.aof
dump.rdb
Removing intermediate container 7e290e6a46ce
---> 1f1cd024e68f

Successfully built 1f1cd024e68f
Successfully tagged restapi_redis:latest
Creating restapi_redis_1 ... done

Here is a sample of the data:

SET user:id:35 85.214.132.117
SET user:id:66 85.214.132.117
SET user:id:28 85.214.132.117
SET user:id:40 85.214.132.117
SET user:id:17 85.214.132.117
SET user:id:63 85.214.132.117
SET user:id:67 85.214.132.117
SET user:id:45 85.214.132.117
SET user:id:23 85.214.132.117
SET user:id:79 85.214.132.117
SET user:id:26 85.214.132.117
SET user:id:94 85.214.132.117
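For reference, a seed file of this shape can be produced with a short shell loop. This is just a sketch: the file name redis-dump.csv and the key/value pattern mirror the sample above, and the count is arbitrary. Note that redis-cli --pipe accepts this plain inline-command form (as the log's "errors: 0, replies: 67600" confirms), although the Redis mass-insertion documentation recommends raw RESP protocol for large imports.

```shell
# Sketch: generate a seed file of inline SET commands, one per line,
# in the same shape as the sample above (values are illustrative).
for i in $(seq 1 5); do
  printf 'SET user:id:%d 85.214.132.117\n' "$i"
done > redis-dump.csv

# Show what was produced.
cat redis-dump.csv
```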

Best Answer

You must remove the volume before starting the container (an existing, non-empty named volume mounted at /data would otherwise shadow whatever data the image contains):

docker volume rm redisdata

Then change your Dockerfile to the following:

FROM redis

EXPOSE 6379

COPY redis-dump.csv /

ENTRYPOINT nohup bash -c "redis-server --appendonly yes" & sleep 5s \
&& cat /redis-dump.csv | redis-cli --pipe \
&& redis-cli save \
&& redis-cli shutdown \
&& ls /data

To see results faster, I suggest mapping the volume to a local folder:

version: '3.3'

services:
  redis:
    build:
      context: .
    volumes:
      - ./redisdata:/data
    ports:
      - "6379:6379"

Once you see it working, you can switch back to a normal Docker volume.

Now run:

docker-compose build
docker-compose up -d

The container will start and then stop cleanly, since no foreground process keeps running. But the data will be present in the data folder.

In general, when working with databases, seeding should be done against a running container rather than baked into the image.
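Seeding a running container can be done with docker exec and the same pipe trick. This is only a sketch and requires a live Docker daemon; the container name restapi_redis_1 is an assumption taken from the build log above.

```shell
# Feed the seed file into redis-cli inside the running container
# (-i keeps stdin open so the pipe works).
docker exec -i restapi_redis_1 redis-cli --pipe < redis-dump.csv

# Check that the keys arrived.
docker exec restapi_redis_1 redis-cli dbsize
```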

After some discussion, we settled on a multi-stage build:

FROM redis as import 

EXPOSE 6379

COPY redis-dump.csv /

RUN mkdir /mydata

RUN nohup bash -c "redis-server --appendonly yes" & sleep 5s \
&& cat /redis-dump.csv | redis-cli --pipe \
&& redis-cli save \
&& redis-cli shutdown \
&& cp /data/* /mydata/

RUN ls /mydata

FROM redis

COPY --from=import /mydata /data
COPY --from=import /mydata /mydata

RUN ls /data

CMD ["redis-server", "--appendonly", "yes"]

The first stage (import) is almost identical to the originally posted Dockerfile. Since we noticed that the files in /data disappeared after the last RUN command (most likely because the redis base image declares /data as a VOLUME, and changes made to a volume directory during build steps are discarded), we make a copy in another folder, /mydata.

The second stage uses the same base image, but copies only what it needs from the previous stage: the data in /mydata. It places this data in the /data folder and then starts the Redis server.
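To try the multi-stage variant on its own, something like the following should work. This is a sketch that requires a running Docker daemon; the image tag redis-seeded and the container name are assumptions, not names from the question.

```shell
# Build the multi-stage image and run it detached.
docker build -t redis-seeded .
docker run -d --name redis-seeded -p 6379:6379 redis-seeded

# The seeded keys should be visible immediately.
docker exec redis-seeded redis-cli dbsize
```

Note that mounting an old, non-empty volume over /data at run time would hide the baked-in data again, which is why the answer starts by removing the redisdata volume.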

Regarding docker - Creating a Docker container with a predefined Redis dump, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/56084064/
