docker - Building Kubernetes on Docker

Reposted · Author: 行者123 · Updated: 2023-12-02 20:47:13

OS: CentOS 7
Docker version: 1.13.1

I am trying to build Kubernetes on CentOS so I can run it internally. I am using the Docker-based build, because the plain Go build does not work for me. The documentation on dependencies and build details is very poor.

I followed the instructions from the Kubernetes site: https://github.com/kubernetes/kubernetes

[kubernetes]$ git clone https://github.com/kubernetes/kubernetes
[kubernetes]$ cd kubernetes
[kubernetes]$ make quick-release
+++ [0521 22:31:10] Verifying Prerequisites....
+++ [0521 22:31:17] Building Docker image kube-build:build-e7afc7a916-5-v1.10.2-1
+++ [0521 22:33:45] Creating data container kube-build-data-e7afc7a916-5-v1.10.2-1
+++ [0521 22:34:57] Syncing sources to container
+++ [0521 22:35:15] Running build command...
+++ [0521 22:36:02] Building go targets for linux/amd64:
./vendor/k8s.io/code-generator/cmd/deepcopy-gen
+++ [0521 22:36:14] Building go targets for linux/amd64:
./vendor/k8s.io/code-generator/cmd/defaulter-gen
+++ [0521 22:36:21] Building go targets for linux/amd64:
./vendor/k8s.io/code-generator/cmd/conversion-gen
+++ [0521 22:36:31] Building go targets for linux/amd64:
./vendor/k8s.io/code-generator/cmd/openapi-gen
+++ [0521 22:36:40] Building go targets for linux/amd64:
./vendor/github.com/jteeuwen/go-bindata/go-bindata
+++ [0521 22:36:42] Building go targets for linux/amd64:
cmd/kube-proxy
cmd/kube-apiserver
cmd/kube-controller-manager
cmd/cloud-controller-manager
cmd/kubelet
cmd/kubeadm
cmd/hyperkube
cmd/kube-scheduler
vendor/k8s.io/kube-aggregator
vendor/k8s.io/apiextensions-apiserver
cluster/gce/gci/mounter
+++ [0521 22:40:24] Building go targets for linux/amd64:
cmd/kube-proxy
cmd/kubeadm
cmd/kubelet
+++ [0521 22:41:08] Building go targets for linux/amd64:
cmd/kubectl
+++ [0521 22:41:31] Building go targets for linux/amd64:
cmd/gendocs
cmd/genkubedocs
cmd/genman
cmd/genyaml
cmd/genswaggertypedocs
cmd/linkcheck
vendor/github.com/onsi/ginkgo/ginkgo
test/e2e/e2e.test
+++ [0521 22:44:24] Building go targets for linux/amd64:
cmd/kubemark
vendor/github.com/onsi/ginkgo/ginkgo
test/e2e_node/e2e_node.test
+++ [0521 22:45:24] Syncing out of container
+++ [0521 22:46:39] Building tarball: src
+++ [0521 22:46:39] Building tarball: manifests
+++ [0521 22:46:39] Starting tarball: client darwin-386
+++ [0521 22:46:39] Starting tarball: client darwin-amd64
+++ [0521 22:46:39] Starting tarball: client linux-386
+++ [0521 22:46:39] Starting tarball: client linux-amd64
+++ [0521 22:46:39] Starting tarball: client linux-arm
+++ [0521 22:46:39] Starting tarball: client linux-arm64
+++ [0521 22:46:39] Starting tarball: client linux-ppc64le
+++ [0521 22:46:39] Starting tarball: client linux-s390x
+++ [0521 22:46:39] Starting tarball: client windows-386
+++ [0521 22:46:39] Starting tarball: client windows-amd64
+++ [0521 22:46:39] Waiting on tarballs
+++ [0521 22:47:19] Building tarball: server linux-amd64
+++ [0521 22:47:19] Building tarball: node linux-amd64
+++ [0521 22:47:47] Starting docker build for image: cloud-controller-manager-amd64
+++ [0521 22:47:47] Starting docker build for image: kube-apiserver-amd64
+++ [0521 22:47:47] Starting docker build for image: kube-controller-manager-amd64
+++ [0521 22:47:47] Starting docker build for image: kube-scheduler-amd64
+++ [0521 22:47:47] Starting docker build for image: kube-aggregator-amd64
+++ [0521 22:47:47] Starting docker build for image: kube-proxy-amd64
+++ [0521 22:47:47] Building hyperkube image for arch: amd64
+++ [0521 22:48:31] Deleting docker image k8s.gcr.io/kube-scheduler:v1.12.0-alpha.0.143_080739a12a25bc
+++ [0521 22:48:31] Deleting docker image k8s.gcr.io/kube-aggregator:v1.12.0-alpha.0.143_080739a12a25bc
+++ [0521 22:48:41] Deleting docker image k8s.gcr.io/kube-controller-manager:v1.12.0-alpha.0.143_080739a12a25bc
+++ [0521 22:48:43] Deleting docker image k8s.gcr.io/cloud-controller-manager:v1.12.0-alpha.0.143_080739a12a25bc
+++ [0521 22:48:46] Deleting docker image k8s.gcr.io/kube-apiserver:v1.12.0-alpha.0.143_080739a12a25bc
+++ [0521 22:48:58] Deleting docker image k8s.gcr.io/kube-proxy:v1.12.0-alpha.0.143_080739a12a25bc
+++ [0521 22:49:36] Deleting hyperkube image k8s.gcr.io/hyperkube-amd64:v1.12.0-alpha.0.143_080739a12a25bc
+++ [0521 22:49:36] Docker builds done
+++ [0521 22:50:54] Building tarball: final
+++ [0521 22:50:54] Building tarball: test
  • My first question: why does Docker delete kube-apiserver, kube-proxy, etc. at the end of the build? Those are the tools I expected to use.
  • Second question: why do I now have only a single "kube-build" image, and how do I interact with it? I expected to see kubeadm and kubectl in addition to kube-build.
    The documentation gives no further instructions on what to do next: how to create Pods, deploy containers, and manage them. I had hoped to `docker attach` to a kubectl/kubeadm image to do this, but there is none.
    $ docker images
    REPOSITORY TAG IMAGE ID CREATED SIZE
    kube-build build-e7afc7a916-5-v1.10.2-1 8d27a8ba87fd About an hour ago 2.58 GB
    docker.io/node latest f697cb5f31f8 12 days ago 675 MB
    docker.io/redis latest bfcb1f6df2db 2 weeks ago 107 MB
    docker.io/mongo latest 14c497d5c758 3 weeks ago 366 MB
    docker.io/nginx latest ae513a47849c 3 weeks ago 109 MB

  • So what is one supposed to do with this "kube-build" image? Any help would be great. Thanks!

    Also, I tried to tag this question "kube-build", since that is the exact image name, but I don't have enough reputation to create a new tag.

    Best answer

    First, the results of the build are located in the _output folder:

    [@_output]# ls
    dockerized images release-images release-stage release-tars

    In the release-images/$your_architecture folder you will find the images as tarballs:
    [@release-images]# cd amd64/
    [@amd64]# ls
    cloud-controller-manager.tar hyperkube-amd64.tar kube-aggregator.tar kube-apiserver.tar kube-controller-manager.tar kube-proxy.tar kube-scheduler.tar

    You can import them into your local Docker repository with:
    cat kube-apiserver.tar | docker import - kube-api:new
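The import step above can be scripted across all the tarballs. A minimal sketch (the helper name and default path are assumptions, not from the answer). Note that for tarballs produced by `docker save`, which is how the Kubernetes build writes its release images, `docker load` preserves the image name, tags, and layer metadata, whereas `docker import` creates a bare filesystem image:

```shell
# Hypothetical helper: load every image tarball in a directory into the
# local Docker daemon. Pass a different loader command as the second
# argument (e.g. `echo`) for a dry run.
load_release_images() {
  dir="$1"
  loader="${2:-docker load -i}"
  for t in "$dir"/*.tar; do
    [ -e "$t" ] || continue   # skip if the glob matched nothing
    $loader "$t"
  done
}

# Example (path assumed from the answer's layout):
#   load_release_images _output/release-images/amd64
```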

    You will then find the result in your local Docker image repository:
    [@amd64]# docker images
    REPOSITORY TAG IMAGE ID CREATED SIZE
    kube-api new 4bd734072676 7 minutes ago 183MB

    You can also find tarballs with the binaries in the release-tars folder.
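To get a usable kubectl out of those tarballs, one can unpack the client bundle. A sketch, assuming the usual quick-release naming (`kubernetes-client-linux-amd64.tar.gz`, with the binary at `kubernetes/client/bin/kubectl`); the helper name is illustrative:

```shell
# Hypothetical helper: extract kubectl from a client release tarball
# into a destination directory and print the path of the binary.
extract_kubectl() {
  tarball="$1"
  dest="$2"
  mkdir -p "$dest"
  tar -xzf "$tarball" -C "$dest" kubernetes/client/bin/kubectl
  echo "$dest/kubernetes/client/bin/kubectl"
}

# Example (paths are assumptions):
#   extract_kubectl _output/release-tars/kubernetes-client-linux-amd64.tar.gz /tmp/k8s
#   sudo cp /tmp/k8s/kubernetes/client/bin/kubectl /usr/local/bin/
```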

    Typically, Kubernetes is built on one server and then used on another; that is why the build results are collected in the _output folder.
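That build-here, run-there workflow can be sketched as below. The host name, paths, and helper are illustrative assumptions; setting `RUN=echo` prints the commands instead of executing them:

```shell
# Hypothetical helper: copy image tarballs to a remote Docker host over
# scp and load them there. Set RUN=echo for a dry run.
ship_images() {
  host="$1"; shift
  for t in "$@"; do
    ${RUN:-} scp "$t" "$host:/tmp/"
    ${RUN:-} ssh "$host" "docker load -i /tmp/$(basename "$t")"
  done
}

# Example (host and path are assumptions):
#   ship_images node1 _output/release-images/amd64/*.tar
```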

    Regarding docker - Building Kubernetes on Docker, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/50458027/
