
kubernetes - kubeadm init: changing imageRepository

Reposted. Author: 行者123. Updated: 2023-12-02 00:58:00

I am trying to bring up a Kubernetes cluster, but with Kubernetes pulling its images from a different URL. AFAIK, that is only possible through a config file.

I am not familiar with config files, so I started with a simple one:

apiVersion: kubeadm.k8s.io/v1alpha2
imageRepository: my.internal.repo:8082
kind: MasterConfiguration
kubernetesVersion: v1.11.3

and ran the command kubeadm init --config file.yaml
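As a side note (this check is not part of the original question; it assumes kubeadm >= v1.11 is installed), `kubeadm config images list --config file.yaml` prints the images init will pull, and `kubeadm config images pull --config file.yaml` caches them in advance. With the repository from the config file above, the expected control-plane image names can be sketched as:

```shell
#!/bin/sh
# Sketch of the image names kubeadm v1.11.3 derives from imageRepository.
# REPO is the internal registry set in the config file above.
REPO="my.internal.repo:8082"
K8S_VERSION="v1.11.3"
ETCD_VERSION="3.2.18"

# The control-plane components named in the kubeadm error output:
for component in kube-apiserver kube-controller-manager kube-scheduler; do
  printf '%s/%s-amd64:%s\n' "$REPO" "$component" "$K8S_VERSION"
done
printf '%s/etcd-amd64:%s\n' "$REPO" "$ETCD_VERSION"
```

Comparing this against the output of `kubeadm config images list` on the node shows quickly whether the imageRepository override was actually picked up.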
After a while, it failed with the following error:
[init] using Kubernetes version: v1.11.3
[preflight] running pre-flight checks
I1015 12:05:54.066140 27275 kernel_validator.go:81] Validating kernel version
I1015 12:05:54.066324 27275 kernel_validator.go:96] Validating kernel config
[WARNING Hostname]: hostname "kube-master-0" could not be reached
[WARNING Hostname]: hostname "kube-master-0" lookup kube-master-0 on 10.11.12.246:53: no such host
[preflight/images] Pulling images required for setting up a Kubernetes cluster
[preflight/images] This might take a minute or two, depending on the speed of your internet connection
[preflight/images] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[preflight] Activating the kubelet service
[certificates] Generated ca certificate and key.
[certificates] Generated apiserver certificate and key.
[certificates] apiserver serving cert is signed for DNS names [kube-master-0 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 10.10.5.189]
[certificates] Generated apiserver-kubelet-client certificate and key.
[certificates] Generated sa key and public key.
[certificates] Generated front-proxy-ca certificate and key.
[certificates] Generated front-proxy-client certificate and key.
[certificates] Generated etcd/ca certificate and key.
[certificates] Generated etcd/server certificate and key.
[certificates] etcd/server serving cert is signed for DNS names [kube-master-0 localhost] and IPs [127.0.0.1 ::1]
[certificates] Generated etcd/peer certificate and key.
[certificates] etcd/peer serving cert is signed for DNS names [kube-master-0 localhost] and IPs [10.10.5.189 127.0.0.1 ::1]
[certificates] Generated etcd/healthcheck-client certificate and key.
[certificates] Generated apiserver-etcd-client certificate and key.
[certificates] valid certificates and keys now exist in "/etc/kubernetes/pki"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/scheduler.conf"
[controlplane] wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[controlplane] wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
[init] waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests"
[init] this might take a minute or longer if the control plane images have to be pulled

Unfortunately, an error has occurred:
timed out waiting for the condition

This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
- No internet connection is available so the kubelet cannot pull or find the following control plane images:
- my.internal.repo:8082/kube-apiserver-amd64:v1.11.3
- my.internal.repo:8082/kube-controller-manager-amd64:v1.11.3
- my.internal.repo:8082/kube-scheduler-amd64:v1.11.3
- my.internal.repo:8082/etcd-amd64:3.2.18
- You can check or miligate this in beforehand with "kubeadm config images pull" to make sure the images
are downloaded locally and cached.

If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'

Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
Here is one example how you may list all Kubernetes containers running in docker:
- 'docker ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'docker logs CONTAINERID'
couldn't initialize a Kubernetes cluster

I checked the kubelet status with systemctl status kubelet, and it is running.

Pulling the image manually succeeded:
docker pull my.internal.repo:8082/kube-apiserver-amd64:v1.11.3

However, docker ps -a returns no containers.

journalctl -xeu kubelet shows a lot of connection refused errors on requests to k8s.io, and I am struggling to find the root error.

Any ideas?

Thanks in advance!

Edit 1:
I tried opening the ports manually, but nothing changed.
[centos@kube-master-0 ~]$ sudo firewall-cmd --zone=public --list-ports
6443/tcp 5000/tcp 2379-2380/tcp 10250-10252/tcp

I also changed the kube version from 1.11.3 to 1.12.1, but nothing changed.

Edit 2:
I realized the kubelet is trying to pull from the k8s.io repository, which means I have only changed kubeadm's internal repository. I need to do the same for the kubelet.
Oct 22 11:10:06 kube-master-1-120 kubelet[24795]: E1022 11:10:06.108764   24795 reflector.go:134] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to...on refused
Oct 22 11:10:06 kube-master-1-120 kubelet[24795]: E1022 11:10:06.110539 24795 reflector.go:134] k8s.io/kubernetes/pkg/kubelet/kubelet.go:442: Failed to list *v...on refused

Any ideas?

Best Answer

You have solved half of the problem. The final step is probably to edit the kubelet's init file (/etc/systemd/system/kubelet.service.d/10-kubeadm.conf). You need to set the --pod-infra-container-image flag so it references the pause container image pulled from your internal repository. The image name will look like: my.internal.repo:8082/pause:[version].

The reason is that, without this flag, the kubelet cannot pick up the new image location to reference it.
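A sketch of that change (the drop-in path is where kubeadm packages normally place kubelet flags, and the :3.1 pause tag is an assumption; use whatever tag your internal registry actually holds):

```ini
# /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (excerpt)
# Point the kubelet's pause ("pod infra") image at the internal registry.
# my.internal.repo:8082 and the :3.1 tag are illustrative assumptions.
Environment="KUBELET_EXTRA_ARGS=--pod-infra-container-image=my.internal.repo:8082/pause:3.1"
```

After editing, run `systemctl daemon-reload && systemctl restart kubelet` so the kubelet picks up the new flag.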

On kubernetes - kubeadm init changing imageRepository, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/52817266/
