
docker - Mounting a Windows host directory in compose file version 3

Reposted. Author: 行者123. Updated: 2023-12-02 19:37:32

I am trying to upgrade a docker-compose.yml from version 1 to version 3.

The main question concerns this note from the upgrade documentation:

volumes_from: To share a volume between services, 
define it using the top-level volumes option and
reference it from each service that shares it using the
service-level volumes option.

The simplest example:

Version "1":

data:
  image: postgres:latest
  volumes:
    - ./pg_hba.conf/:/var/lib/postgresql/data/pg_hba.conf

postgres:
  restart: always
  image: postgres:latest
  volumes_from:
    - data
  ports:
    - "5432:5432"

If I understand correctly, this should convert to:

version: "3"

services:
  db:
    image: postgres:latest
    restart: always
    volumes:
      - db-data:/var/lib/postgresql/data
    ports:
      - "5432:5432"
    networks:
      - appn

networks:
  appn:

volumes:
  db-data:?

Question: in the top-level volumes option, how do I now set a relative path so that the folder "example_folder" on the host is mounted into "db-data"?

Best Answer

In this case, you might consider not using volumes_from at all.

As Sebastiaan van Stijn (thaJeztah) explains in this docker 1.13 issue:

The volumes_from is basically a "lazy" way to copy volume definitions from one container to another, so;

docker run -d --name one -v myvolume:/foo image-one

docker run -d --volumes-from=one image-two

Is the same as running;

docker run -d --name one -v myvolume:/foo image-one
docker run -d --name two -v myvolume:/foo image-two

If you are deploying to AWS you should not use bind-mounts, but use named volumes instead (as in my example above), for example;

version: "3.0"

services:
db:
image: nginx
volumes:
- uploads-data:/usr/share/nginx/html/uploads/

volumes:
uploads-data:

Which you can run with docker-compose;

docker-compose up -d
Creating network "foo_default" with the default driver
Creating volume "foo_uploads-data" with default driver
Creating foo_db_1


Basically, it is not available in compose file version 3:

There's a couple of reasons volumes_from is not ported to the compose-file "3";

  • In a swarm, there is no guarantee that the "from" container is running on the same node. Using volumes_from would not lead to the expected result.
    This is especially the case with bind-mounts, which, in a swarm, have to exist on the host (are not automatically created)
  • There is still a "race" condition (as described earlier)
  • The "data" container has to use exactly the right paths for volumes as the "app" container that uses the volumes (i.e. if the "app" uses the volume in /some/path/in/container, then the data container also has to have the volume at /some/path/in/container). There are many cases where the volume may be shared by multiple services, and those may be consuming the volume in different paths.


Also, as described in issue 19990:

The "regular" volume you're describing is a bind-mount, not a volume; you specify a path from the host, and it's mounted in the container. No data is copied from the container to that path, because the files from the host are used.

For a volume, you're asking docker to create a volume (persistent storage) to store data, and copy the data from the container to that volume.

Volumes are managed by docker (or through a plugin) and the storage path (or mechanism) is an implementation detail, as all you're asking is a storage, that's managed.
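The distinction the quote draws shows up directly in compose syntax; a minimal sketch (the image and paths are illustrative):

```yaml
services:
  web:
    image: nginx
    volumes:
      # bind-mount: a host path mapped into the container; the host's
      # files are used as-is, nothing is copied out of the image
      - ./site:/usr/share/nginx/html
      # named volume: storage managed by docker; on first use, content
      # already present in the image at this path is copied into it
      - web-data:/var/cache/nginx

volumes:
  web-data:
```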



For your question, you would need to define a Docker volume container and copy the host content into it:

services:
  data:
    image: "nginx:alpine"
    volumes:
      - ./pg_hba.conf/:/var/lib/postgresql/data/pg_hba.conf
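If the goal is specifically to have the host folder "example_folder" back the db-data named volume, one common approach is a local-driver volume with bind options. This is a sketch under assumptions: the host path below is hypothetical, must already exist, and its exact form depends on how Docker is installed on Windows:

```yaml
volumes:
  db-data:
    driver: local
    driver_opts:
      type: none
      o: bind
      # must be an absolute path on the host; on Docker for Windows
      # host paths typically look like /c/Users/...
      device: /c/Users/you/example_folder
```

Alternatively, a plain relative bind-mount in the service definition (e.g. `- ./example_folder:/var/lib/postgresql/data`) skips the named volume entirely, with the copy-semantics caveats quoted above.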

Regarding docker - mounting a Windows host directory in compose file version 3, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/42957815/
