
kubernetes - kubeadm join fails with http://localhost:10248/healthz connection refused


I am trying to set up Kubernetes on three virtual machines (following a CentOS 7 tutorial), but unfortunately joining the worker fails. I hope someone has already run into this problem (I found it twice on the web, both times without an answer) or can guess what is going wrong.

This is what I get from kubeadm join:

[preflight] running pre-flight checks
[WARNING RequiredIPVSKernelModulesAvailable]: the IPVS proxier will not be used, because the following required kernel modules are not loaded: [ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh] or no builtin kernel ipvs support: map[ip_vs:{} ip_vs_rr:{} ip_vs_wrr:{} ip_vs_sh:{} nf_conntrack_ipv4:{}]
you can solve this problem with following methods:
1. Run 'modprobe -- ' to load missing kernel modules;
2. Provide the missing builtin kernel ipvs support

I0902 20:31:15.401693 2032 kernel_validator.go:81] Validating kernel version
I0902 20:31:15.401768 2032 kernel_validator.go:96] Validating kernel config
[WARNING SystemVerification]: docker version is greater than the most recently validated version. Docker version: 18.06.1-ce. Max validated version: 17.03
[discovery] Trying to connect to API Server "192.168.1.30:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://192.168.1.30:6443"
[discovery] Requesting info from "https://192.168.1.30:6443" again to validate TLS against the pinned public key
[discovery] Cluster info signature and contents are valid and TLS certificate validates against pinned roots, will use API Server "192.168.1.30:6443"
[discovery] Successfully established connection with API Server "192.168.1.30:6443"
[kubelet] Downloading configuration for the kubelet from the "kubelet-config-1.11" ConfigMap in the kube-system namespace
[kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[preflight] Activating the kubelet service
[tlsbootstrap] Waiting for the kubelet to perform the TLS Bootstrap...
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.

Although the kubelet is running:
[root@k8s-worker1 nodesetup]# systemctl status kubelet -l
● kubelet.service - kubelet: The Kubernetes Node Agent
Loaded: loaded (/etc/systemd/system/kubelet.service; enabled; vendor preset: disabled)
Drop-In: /etc/systemd/system/kubelet.service.d
└─10-kubeadm.conf
Active: active (running) since So 2018-09-02 20:31:15 CEST; 19min ago
Docs: https://kubernetes.io/docs/
Main PID: 2093 (kubelet)
Tasks: 7
Memory: 12.1M
CGroup: /system.slice/kubelet.service
└─2093 /usr/bin/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf --config=/var/lib/kubelet/config.yaml --cgroup-driver=cgroupfs --cni-bin-dir=/opt/cni/bin --cni-conf-dir=/etc/cni/net.d --network-plugin=cni

Sep 02 20:31:15 k8s-worker1 systemd[1]: Started kubelet: The Kubernetes Node Agent.
Sep 02 20:31:15 k8s-worker1 systemd[1]: Starting kubelet: The Kubernetes Node Agent...
Sep 02 20:31:15 k8s-worker1 kubelet[2093]: Flag --cgroup-driver has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 02 20:31:15 k8s-worker1 kubelet[2093]: Flag --cgroup-driver has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 02 20:31:16 k8s-worker1 kubelet[2093]: I0902 20:31:16.440010 2093 server.go:408] Version: v1.11.2
Sep 02 20:31:16 k8s-worker1 kubelet[2093]: I0902 20:31:16.440314 2093 plugins.go:97] No cloud provider specified.
[root@k8s-worker1 nodesetup]#

As far as I can see, the worker can connect to the master, but it then runs a health check against a local kubelet endpoint that has not come up yet. Any ideas?
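For reference, the endpoint that the join pre-flight probes can be checked directly on the worker (a quick sketch; 10248 is the kubelet healthz port named in the error above):

# check whether the kubelet healthz endpoint answers locally
curl -sS http://localhost:10248/healthz
# and inspect the kubelet logs for the actual failure reason
journalctl -xeu kubelet | tail -n 50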

This is what I did to set up my worker node:
exec bash
setenforce 0
sed -i --follow-symlinks 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/sysconfig/selinux


echo "Setting Firewallrules"
firewall-cmd --permanent --add-port=10250/tcp
firewall-cmd --permanent --add-port=10255/tcp
firewall-cmd --permanent --add-port=30000-32767/tcp
firewall-cmd --permanent --add-port=6783/tcp
firewall-cmd --reload


echo "And enable br filtering"
modprobe br_netfilter
echo '1' > /proc/sys/net/bridge/bridge-nf-call-iptables


echo "disable swap"
swapoff -a
echo "### You need to edit /etc/fstab and comment the swapline!! ###"


echo "Adding kubernetes repo for download"
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg
https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF


echo "install the Docker-ce dependencies"
yum install -y yum-utils device-mapper-persistent-data lvm2

echo "add docker-ce repository"
yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo

echo "install docker ce"
yum install -y docker-ce

echo "Install kubeadm kubelet kubectl"
yum install kubelet kubeadm kubectl -y

echo "start and enable kubectl"
systemctl restart docker && systemctl enable docker
systemctl restart kubelet && systemctl enable kubelet

echo "Now we need to ensure that both Docker-ce and Kubernetes belong to the same control group (cgroup)"

echo "We assume that docker is using cgroupfs ... assuming kubelet does so too"
docker info | grep -i cgroup
grep -i cgroup /var/lib/kubelet/kubeadm-flags.env
# old style
# sed -i 's/cgroup-driver=systemd/cgroup-driver=cgroupfs/g' /etc/systemd/system/kubelet.service.d/10-kubeadm.conf

systemctl daemon-reload
systemctl restart kubelet

# There has been an issue reported where traffic is routed incorrectly by iptables.
# The settings below make sure iptables is configured correctly.
#
sudo bash -c 'cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF'

# Make changes effective
sudo sysctl --system
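The script above only prints a reminder to comment out the swap entry by hand; a hedged one-liner that does the same (assuming the default CentOS fstab layout with a plain swap entry) would be:

# comment out any swap entry so swap stays disabled after a reboot
sed -i '/ swap / s/^/#/' /etc/fstab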

Thanks in advance for any help.

Update I

journalctl output on the worker:
[root@k8s-worker1 ~]# journalctl -xeu kubelet
Sep 02 21:19:56 k8s-worker1 systemd[1]: Started kubelet: The Kubernetes Node Agent.
-- Subject: Unit kubelet.service has finished start-up
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit kubelet.service has finished starting up.
--
-- The start-up result is done.
Sep 02 21:19:56 k8s-worker1 systemd[1]: Starting kubelet: The Kubernetes Node Agent...
-- Subject: Unit kubelet.service has begun start-up
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit kubelet.service has begun starting up.
Sep 02 21:19:56 k8s-worker1 kubelet[3082]: Flag --cgroup-driver has been deprecated, This parameter should be set via the config file specified by the Kubelet's --confi
Sep 02 21:19:56 k8s-worker1 kubelet[3082]: Flag --cgroup-driver has been deprecated, This parameter should be set via the config file specified by the Kubelet's --confi
Sep 02 21:19:56 k8s-worker1 kubelet[3082]: I0902 21:19:56.788059 3082 server.go:408] Version: v1.11.2
Sep 02 21:19:56 k8s-worker1 kubelet[3082]: I0902 21:19:56.788214 3082 plugins.go:97] No cloud provider specified.
Sep 02 21:19:56 k8s-worker1 kubelet[3082]: F0902 21:19:56.814469 3082 server.go:262] failed to run Kubelet: cannot create certificate signing request: Unauthorized
Sep 02 21:19:56 k8s-worker1 systemd[1]: kubelet.service: main process exited, code=exited, status=255/n/a
Sep 02 21:19:56 k8s-worker1 systemd[1]: Unit kubelet.service entered failed state.
Sep 02 21:19:56 k8s-worker1 systemd[1]: kubelet.service failed.

kubectl get pods on the master side shows:
[root@k8s-master ~]# kubectl get pods --all-namespaces=true
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system coredns-78fcdf6894-79n2m 0/1 Pending 0 1d
kube-system coredns-78fcdf6894-tlngr 0/1 Pending 0 1d
kube-system etcd-k8s-master 1/1 Running 3 1d
kube-system kube-apiserver-k8s-master 1/1 Running 0 1d
kube-system kube-controller-manager-k8s-master 0/1 Evicted 0 1d
kube-system kube-proxy-2x8cx 1/1 Running 3 1d
kube-system kube-scheduler-k8s-master 1/1 Running 0 1d
[root@k8s-master ~]#

Update II
As a next step I generated a new token on the master and used that token in the join command. Although the token list on the master showed the token as valid, the worker node kept insisting that the master did not know the token or that it had expired... Stop! Time to start over from scratch, beginning with the master setup.
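For reference, the token list and TTL mentioned above can be checked on the master like this (a small sketch):

# bootstrap tokens, their expiry, and their usages
kubeadm token list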

So this is what I did:

1) Rebuilt the master VM, i.e. a fresh CentOS 7 install (CentOS-7-x86_64-Minimal-1804.iso) on VirtualBox. Network configuration in VirtualBox: adapter1 as NAT to the host system (used for installing the packages), adapter2 as an internal network (same name on master and workers, used for the Kubernetes network).

2) After installing the fresh image, the base interface enp0s3 was not configured to come up on boot (so: ifup enp0s3, and reconfigured it in /etc/sysconfig/network-scripts to start on boot).

3) Configured the second interface for the internal Kubernetes network:

/etc/hosts:
#!/bin/sh
echo '192.168.1.30 k8s-master' >> /etc/hosts
echo '192.168.1.40 k8s-worker1' >> /etc/hosts
echo '192.168.1.50 k8s-worker2' >> /etc/hosts

Identified my second interface via "ip -color -human addr", which showed me enp0s8, so:
#!/bin/sh
echo "Setting up internal Interface"
cat <<EOF > /etc/sysconfig/network-scripts/ifcfg-enp0s8
DEVICE=enp0s8
IPADDR=192.168.1.30
NETMASK=255.255.255.0
NETWORK=192.168.1.0
BROADCAST=192.168.1.255
ONBOOT=yes
NAME=enp0s8
EOF

echo "Activate interface"
ifup enp0s8
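A quick way to verify the internal interface and name resolution afterwards (a sketch using the addresses and hostnames defined above):

ip addr show enp0s8
ping -c 3 k8s-worker1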

4) Hostname, swap, disabling SELinux
#!/bin/sh
echo "Setting hostname und deactivate SELinux"
hostnamectl set-hostname 'k8s-master'
exec bash
setenforce 0
sed -i --follow-symlinks 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/sysconfig/selinux

echo "disable swap"
swapoff -a

echo "### You need to edit /etc/fstab and comment the swapline!! ###"

A few remarks here: I rebooted because the later pre-flight checks apparently parse /etc/fstab to verify that swap is gone. Also, CentOS seems to re-enable SELinux (I still need to check this); as a workaround I disable it again after every reboot.
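To see whether the disable actually persisted across the reboot (a sketch; on CentOS 7 /etc/sysconfig/selinux is normally a symlink to /etc/selinux/config):

# current enforcement state and the value used at the next boot
sestatus
grep '^SELINUX=' /etc/selinux/config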

5) Set up the required firewall rules
#!/bin/sh
echo "Setting Firewallrules"
firewall-cmd --permanent --add-port=6443/tcp
firewall-cmd --permanent --add-port=2379-2380/tcp
firewall-cmd --permanent --add-port=10250/tcp
firewall-cmd --permanent --add-port=10251/tcp
firewall-cmd --permanent --add-port=10252/tcp
firewall-cmd --permanent --add-port=10255/tcp
firewall-cmd --reload

echo "And enable br filtering"
modprobe br_netfilter
echo '1' > /proc/sys/net/bridge/bridge-nf-call-iptables
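Note that modprobe and the echo into /proc only last until the next reboot; a hedged sketch for making both persistent on CentOS 7:

# load br_netfilter automatically at boot
echo 'br_netfilter' > /etc/modules-load.d/br_netfilter.conf
# persist the bridge-nf sysctl (also written into /etc/sysctl.d/k8s.conf further below)
echo 'net.bridge.bridge-nf-call-iptables = 1' > /etc/sysctl.d/99-bridge-nf.conf
sysctl --system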

6) Add the Kubernetes repository
#!/bin/sh
echo "Adding kubernetes repo for download"
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg
https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF

7) Install the required packages and configure the services
#!/bin/sh

echo "install the Docker-ce dependencies"
yum install -y yum-utils device-mapper-persistent-data lvm2

echo "add docker-ce repository"
yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo

echo "install docker ce"
yum install -y docker-ce

echo "Install kubeadm kubelet kubectl"
yum install kubelet kubeadm kubectl -y

echo "start and enable kubectl"
systemctl restart docker && systemctl enable docker
systemctl restart kubelet && systemctl enable kubelet

echo "Now we need to ensure that both Docker-ce and Kubernetes belong to the same control group (cgroup)"
echo "We assume that docker is using cgroupfs ... assuming kubelet does so too"
docker info | grep -i cgroup
grep -i cgroup /var/lib/kubelet/kubeadm-flags.env
# old style
# sed -i 's/cgroup-driver=systemd/cgroup-driver=cgroupfs/g' /etc/systemd/system/kubelet.service.d/10-kubeadm.conf

systemctl daemon-reload
systemctl restart kubelet

# There has been an issue reported where traffic is routed incorrectly by iptables.
# The settings below make sure iptables is configured correctly.
#
sudo bash -c 'cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF'

# Make changes effective
sudo sysctl --system

8) Initialize the cluster
#!/bin/sh
echo "Init kubernetes. Check join cmd in initProtocol.txt"
kubeadm init --apiserver-advertise-address=192.168.1.30 --pod-network-cidr=192.168.1.0/16 | tee initProtocol.txt

The output of this command, for verification:
Init kubernetes. Check join cmd in initProtocol.txt
[init] using Kubernetes version: v1.11.2
[preflight] running pre-flight checks
[WARNING Firewalld]: firewalld is active, please ensure ports [6443 10250] are open or your cluster may not function correctly
I0904 21:53:15.271999 1526 kernel_validator.go:81] Validating kernel version
I0904 21:53:15.272165 1526 kernel_validator.go:96] Validating kernel config
[WARNING SystemVerification]: docker version is greater than the most recently validated version. Docker version: 18.06.1-ce. Max validated version: 17.03
[preflight/images] Pulling images required for setting up a Kubernetes cluster
[preflight/images] This might take a minute or two, depending on the speed of your internet connection
[preflight/images] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[preflight] Activating the kubelet service
[certificates] Generated ca certificate and key.
[certificates] Generated apiserver certificate and key.
[certificates] apiserver serving cert is signed for DNS names [k8s-master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.1.30]
[certificates] Generated apiserver-kubelet-client certificate and key.
[certificates] Generated sa key and public key.
[certificates] Generated front-proxy-ca certificate and key.
[certificates] Generated front-proxy-client certificate and key.
[certificates] Generated etcd/ca certificate and key.
[certificates] Generated etcd/server certificate and key.
[certificates] etcd/server serving cert is signed for DNS names [k8s-master localhost] and IPs [127.0.0.1 ::1]
[certificates] Generated etcd/peer certificate and key.
[certificates] etcd/peer serving cert is signed for DNS names [k8s-master localhost] and IPs [192.168.1.30 127.0.0.1 ::1]
[certificates] Generated etcd/healthcheck-client certificate and key.
[certificates] Generated apiserver-etcd-client certificate and key.
[certificates] valid certificates and keys now exist in "/etc/kubernetes/pki"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/scheduler.conf"
[controlplane] wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[controlplane] wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
[init] waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests"
[init] this might take a minute or longer if the control plane images have to be pulled
[apiclient] All control plane components are healthy after 43.504792 seconds
[uploadconfig] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.11" in namespace kube-system with the configuration for the kubelets in the cluster
[markmaster] Marking the node k8s-master as master by adding the label "node-role.kubernetes.io/master=''"
[markmaster] Marking the node k8s-master as master by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "k8s-master" as an annotation
[bootstraptoken] using token: n4yt3r.3c8tuj11nwszts2d
[bootstraptoken] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstraptoken] creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of machines by running the following on each node
as root:

kubeadm join 192.168.1.30:6443 --token n4yt3r.3c8tuj11nwszts2d --discovery-token-ca-cert-hash sha256:466e7972a4b6997651ac1197fdde68d325a7bc41f2fccc2b1efc17515af61172

Note: so far this looks fine to me, although I am a bit worried that the latest docker-ce version might cause trouble here...
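If the Docker version does turn out to be a problem, one option is to pin an older docker-ce release from the same repository; a hedged sketch (the exact version string has to be taken from the listing, the placeholder below is only illustrative):

# show the docker-ce versions available in the repository
yum --showduplicates list docker-ce
# then install a specific version instead of the latest, e.g.:
# yum install -y docker-ce-<VERSION-STRING>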

9) Deploy the pod network
#!/bin/bash

echo "Configure demo cluster usage as root"
mkdir -p $HOME/.kube
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
chown $(id -u):$(id -g) $HOME/.kube/config

# Deploy the pod network using flannel
# Taken from the first two matching tutorials on the web
# kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
# kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/k8s-manifests/kube-flannel-rbac.yml

# taken from https://kubernetes.io/docs/setup/independent/create-cluster-kubeadm/#pod-network
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/v0.10.0/Documentation/kube-flannel.yml
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/c5d10c8/Documentation/kube-flannel.yml

echo "Try to run kubectl get pods --all-namespaces"
echo "After joining nodes: try to run kubectl get nodes to verify the status"

This is the output of this command:
Configure demo cluster usage as root
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.extensions/kube-flannel-ds created
clusterrole.rbac.authorization.k8s.io/flannel configured
clusterrolebinding.rbac.authorization.k8s.io/flannel configured
serviceaccount/flannel unchanged
configmap/kube-flannel-cfg unchanged
daemonset.extensions/kube-flannel-ds-amd64 created
daemonset.extensions/kube-flannel-ds-arm64 created
daemonset.extensions/kube-flannel-ds-arm created
daemonset.extensions/kube-flannel-ds-ppc64le created
daemonset.extensions/kube-flannel-ds-s390x created
Try to run kubectl get pods --all-namespaces
After joining nodes: try to run kubectl get nodes to verify the status

So I tried kubectl get pods --all-namespaces and got:
[root@k8s-master nodesetup]# kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system coredns-78fcdf6894-pflhc 0/1 Pending 0 33m
kube-system coredns-78fcdf6894-w7dxg 0/1 Pending 0 33m
kube-system etcd-k8s-master 1/1 Running 0 27m
kube-system kube-apiserver-k8s-master 1/1 Running 0 27m
kube-system kube-controller-manager-k8s-master 0/1 Evicted 0 27m
kube-system kube-proxy-stfxm 1/1 Running 0 28m
kube-system kube-scheduler-k8s-master 1/1 Running 0 27m


[root@k8s-master nodesetup]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8s-master NotReady master 35m v1.11.2

Hmm... what is going on with my master?
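Two commands that usually reveal why a node reports NotReady (a sketch; the node name is taken from the output above):

# the Conditions and Events sections usually name the missing piece (often the network plugin)
kubectl describe node k8s-master
# check whether the flannel and coredns pods are actually scheduled and running
kubectl get pods --all-namespaces -o wide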

Some observations:

Sometimes I get connection refused when I first run kubectl; I found that it takes a few minutes for the services to come up. Because of that I looked into /var/log/firewalld and found a lot of these:
2018-09-04 21:52:09 WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2 -t nat -D PREROUTING' failed: iptables: Bad rule (does a matching rule exist in that chain?).

2018-09-04 21:52:09 WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2 -t nat -D OUTPUT' failed: iptables: Bad rule (does a matching rule exist in that chain?).

2018-09-04 21:52:09 WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2 -t nat -F DOCKER' failed: iptables: No chain/target/match by that name.

2018-09-04 21:52:09 WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2 -t nat -X DOCKER' failed: iptables: No chain/target/match by that name.

2018-09-04 21:52:09 WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2 -t filter -F DOCKER' failed: iptables: No chain/target/match by that name.

2018-09-04 21:52:09 WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2 -t filter -X DOCKER' failed: iptables: No chain/target/match by that name.

2018-09-04 21:52:09 WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2 -t filter -F DOCKER-ISOLATION-STAGE-1' failed: iptables: No chain/target/match by that name.

2018-09-04 21:52:09 WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2 -t filter -X DOCKER-ISOLATION-STAGE-1' failed: iptables: No chain/target/match by that name.

2018-09-04 21:52:09 WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2 -t filter -F DOCKER-ISOLATION-STAGE-2' failed: iptables: No chain/target/match by that name.

2018-09-04 21:52:09 WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2 -t filter -X DOCKER-ISOLATION-STAGE-2' failed: iptables: No chain/target/match by that name.

2018-09-04 21:52:09 WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2 -t filter -F DOCKER-ISOLATION' failed: iptables: No chain/target/match by that name.

2018-09-04 21:52:09 WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2 -t filter -X DOCKER-ISOLATION' failed: iptables: No chain/target/match by that name.

2018-09-04 21:52:09 WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2 -t nat -n -L DOCKER' failed: iptables: No chain/target/match by that name.

2018-09-04 21:52:09 WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2 -t filter -n -L DOCKER' failed: iptables: No chain/target/match by that name.

2018-09-04 21:52:09 WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2 -t filter -n -L DOCKER-ISOLATION-STAGE-1' failed: iptables: No chain/target/match by that name.

2018-09-04 21:52:09 WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2 -t filter -n -L DOCKER-ISOLATION-STAGE-2' failed: iptables: No chain/target/match by that name.

2018-09-04 21:52:09 WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2 -t filter -C DOCKER-ISOLATION-STAGE-1 -j RETURN' failed: iptables: Bad rule (does a matching rule exist in that chain?).

Wrong Docker version? The Docker installation seems to be broken.
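These firewalld warnings typically show up when firewall-cmd --reload flushes the DOCKER iptables chains; restarting the Docker daemon recreates them (a sketch, not a guaranteed fix):

# recreate Docker's iptables chains after a firewalld reload
systemctl restart docker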

Is there anything else I could check on the master side?
It is getting late - tomorrow I will try to join my worker again (within the 24-hour lifetime of the initial token).

Update III (after fixing the Docker issue)
[root@k8s-master ~]# kubectl get pods --all-namespaces=true
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system coredns-78fcdf6894-pflhc 0/1 Pending 0 10h
kube-system coredns-78fcdf6894-w7dxg 0/1 Pending 0 10h
kube-system etcd-k8s-master 1/1 Running 0 10h
kube-system kube-apiserver-k8s-master 1/1 Running 0 10h
kube-system kube-controller-manager-k8s-master 1/1 Running 0 10h
kube-system kube-flannel-ds-amd64-crljm 0/1 Pending 0 1s
kube-system kube-flannel-ds-v6gcx 0/1 Pending 0 0s
kube-system kube-proxy-l2dck 0/1 Pending 0 0s
kube-system kube-scheduler-k8s-master 1/1 Running 0 10h
[root@k8s-master ~]#

The master now looks happy:
[root@k8s-master ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8s-master Ready master 10h v1.11.2
[root@k8s-master ~]#

Stay tuned... I will also fix Docker and the firewall on the workers later and then try to join the cluster again (now that I know how to issue a new token when needed). So Update IV should follow in about 10 hours.

Best Answer

It seems that your kubeadm token has expired, according to the attached kubelet logs:

Sep 02 21:19:56 k8s-worker1 kubelet[3082]: F0902 21:19:56.814469
3082 server.go:262] failed to run Kubelet: cannot create certificate signing request: Unauthorized



The TTL of this token is 24 hours after the kubeadm init command has been issued; please check this link for more information.
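A fresh token can be issued on the master at any time; a hedged sketch of the usual commands (the hash pipeline follows the kubeadm documentation):

# print a complete, fresh join command including a new token
kubeadm token create --print-join-command
# or compute the discovery-token-ca-cert-hash manually from the cluster CA
openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt \
  | openssl rsa -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 -hex | sed 's/^.* //'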

Also, the system runtime components on your master node do not look healthy, so I am not sure the cluster can work properly. Since the CoreDNS pods are stuck in Pending state, check the kubeadm troubleshooting document to verify whether a pod network provider has been installed on your cluster.
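A quick way to check for an installed network provider (a sketch; the DaemonSet names match the flannel manifest applied earlier):

# a CNI provider such as flannel should appear as a DaemonSet in kube-system
kubectl get daemonsets -n kube-system
# and should have dropped a config into the CNI directory the kubelet reads
ls /etc/cni/net.d/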

I would suggest rebuilding the cluster to refresh the kubeadm token and to bootstrap the cluster system components from scratch.

Regarding "kubernetes - kubeadm join fails with http://localhost:10248/healthz connection refused", a similar question can be found on Stack Overflow: https://stackoverflow.com/questions/52140852/
