I am trying to set up Kubernetes on three virtual machines (following a CentOS 7 tutorial), but unfortunately the worker fails to join. I hope someone has already run into this (I found the problem described twice on the web, both times without an answer) or can guess what is going wrong.
This is what I get from kubeadm join:
[preflight] running pre-flight checks
[WARNING RequiredIPVSKernelModulesAvailable]: the IPVS proxier will not be used, because the following required kernel modules are not loaded: [ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh] or no builtin kernel ipvs support: map[ip_vs:{} ip_vs_rr:{} ip_vs_wrr:{} ip_vs_sh:{} nf_conntrack_ipv4:{}]
you can solve this problem with following methods:
1. Run 'modprobe -- ' to load missing kernel modules;
2. Provide the missing builtin kernel ipvs support
I0902 20:31:15.401693 2032 kernel_validator.go:81] Validating kernel version
I0902 20:31:15.401768 2032 kernel_validator.go:96] Validating kernel config
[WARNING SystemVerification]: docker version is greater than the most recently validated version. Docker version: 18.06.1-ce. Max validated version: 17.03
[discovery] Trying to connect to API Server "192.168.1.30:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://192.168.1.30:6443"
[discovery] Requesting info from "https://192.168.1.30:6443" again to validate TLS against the pinned public key
[discovery] Cluster info signature and contents are valid and TLS certificate validates against pinned roots, will use API Server "192.168.1.30:6443"
[discovery] Successfully established connection with API Server "192.168.1.30:6443"
[kubelet] Downloading configuration for the kubelet from the "kubelet-config-1.11" ConfigMap in the kube-system namespace
[kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[preflight] Activating the kubelet service
[tlsbootstrap] Waiting for the kubelet to perform the TLS Bootstrap...
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
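The IPVS warning near the top of this output is not the blocker here (kube-proxy simply falls back to iptables mode), but for completeness, a minimal sketch of loading the listed modules on CentOS 7 could look like this (a hypothetical addition, not part of the original setup):
# load the IPVS-related modules the preflight check complains about
for mod in ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack_ipv4; do
    modprobe "$mod"
done
# confirm they are present
lsmod | grep -E 'ip_vs|nf_conntrack_ipv4'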
[root@k8s-worker1 nodesetup]# systemctl status kubelet -l
● kubelet.service - kubelet: The Kubernetes Node Agent
Loaded: loaded (/etc/systemd/system/kubelet.service; enabled; vendor preset: disabled)
Drop-In: /etc/systemd/system/kubelet.service.d
└─10-kubeadm.conf
Active: active (running) since So 2018-09-02 20:31:15 CEST; 19min ago
Docs: https://kubernetes.io/docs/
Main PID: 2093 (kubelet)
Tasks: 7
Memory: 12.1M
CGroup: /system.slice/kubelet.service
└─2093 /usr/bin/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf --config=/var/lib/kubelet/config.yaml --cgroup-driver=cgroupfs --cni-bin-dir=/opt/cni/bin --cni-conf-dir=/etc/cni/net.d --network-plugin=cni
Sep 02 20:31:15 k8s-worker1 systemd[1]: Started kubelet: The Kubernetes Node Agent.
Sep 02 20:31:15 k8s-worker1 systemd[1]: Starting kubelet: The Kubernetes Node Agent...
Sep 02 20:31:15 k8s-worker1 kubelet[2093]: Flag --cgroup-driver has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 02 20:31:15 k8s-worker1 kubelet[2093]: Flag --cgroup-driver has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 02 20:31:16 k8s-worker1 kubelet[2093]: I0902 20:31:16.440010 2093 server.go:408] Version: v1.11.2
Sep 02 20:31:16 k8s-worker1 kubelet[2093]: I0902 20:31:16.440314 2093 plugins.go:97] No cloud provider specified.
[root@k8s-worker1 nodesetup]#
These are the setup scripts I used on the worker node:
exec bash
setenforce 0
sed -i --follow-symlinks 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/sysconfig/selinux
echo "Setting Firewallrules"
firewall-cmd --permanent --add-port=10250/tcp
firewall-cmd --permanent --add-port=10255/tcp
firewall-cmd --permanent --add-port=30000-32767/tcp
firewall-cmd --permanent --add-port=6783/tcp
firewall-cmd --reload
echo "And enable br filtering"
modprobe br_netfilter
echo '1' > /proc/sys/net/bridge/bridge-nf-call-iptables
echo "disable swap"
swapoff -a
echo "### You need to edit /etc/fstab and comment the swapline!! ###"
echo "Adding kubernetes repo for download"
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg
https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF
echo "install the Docker-ce dependencies"
yum install -y yum-utils device-mapper-persistent-data lvm2
echo "add docker-ce repository"
yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
echo "install docker ce"
yum install -y docker-ce
echo "Install kubeadm kubelet kubectl"
yum install kubelet kubeadm kubectl -y
echo "start and enable kubectl"
systemctl restart docker && systemctl enable docker
systemctl restart kubelet && systemctl enable kubelet
echo "Now we need to ensure that both Docker-ce and Kubernetes belong to the same control group (cgroup)"
echo "We assume that docker is using cgroupfs ... assuming kubelet does so too"
docker info | grep -i cgroup
grep -i cgroup /var/lib/kubelet/kubeadm-flags.env
# old style
# sed -i 's/cgroup-driver=systemd/cgroup-driver=cgroupfs/g' /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
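# Hedged addition (not in the original script): since --cgroup-driver is deprecated in 1.11,
# the driver can also be aligned via the cgroupDriver field of the kubelet config file that
# kubeadm writes, once /var/lib/kubelet/config.yaml exists:
grep -q '^cgroupDriver:' /var/lib/kubelet/config.yaml && \
  sed -i 's/^cgroupDriver:.*/cgroupDriver: cgroupfs/' /var/lib/kubelet/config.yaml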
systemctl daemon-reload
systemctl restart kubelet
# There have been reports of iptables routing bridged traffic incorrectly.
# The settings below make sure iptables sees bridged traffic.
#
sudo bash -c 'cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF'
# Make changes effective
sudo sysctl --system
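A quick check that the bridge-netfilter settings actually took effect (assuming the br_netfilter module is loaded) would be:
sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables
# both should report "= 1"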
[root@k8s-worker1 ~]# journalctl -xeu kubelet
Sep 02 21:19:56 k8s-worker1 systemd[1]: Started kubelet: The Kubernetes Node Agent.
-- Subject: Unit kubelet.service has finished start-up
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit kubelet.service has finished starting up.
--
-- The start-up result is done.
Sep 02 21:19:56 k8s-worker1 systemd[1]: Starting kubelet: The Kubernetes Node Agent...
-- Subject: Unit kubelet.service has begun start-up
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit kubelet.service has begun starting up.
Sep 02 21:19:56 k8s-worker1 kubelet[3082]: Flag --cgroup-driver has been deprecated, This parameter should be set via the config file specified by the Kubelet's --confi
Sep 02 21:19:56 k8s-worker1 kubelet[3082]: Flag --cgroup-driver has been deprecated, This parameter should be set via the config file specified by the Kubelet's --confi
Sep 02 21:19:56 k8s-worker1 kubelet[3082]: I0902 21:19:56.788059 3082 server.go:408] Version: v1.11.2
Sep 02 21:19:56 k8s-worker1 kubelet[3082]: I0902 21:19:56.788214 3082 plugins.go:97] No cloud provider specified.
Sep 02 21:19:56 k8s-worker1 kubelet[3082]: F0902 21:19:56.814469 3082 server.go:262] failed to run Kubelet: cannot create certificate signing request: Unauthorized
Sep 02 21:19:56 k8s-worker1 systemd[1]: kubelet.service: main process exited, code=exited, status=255/n/a
Sep 02 21:19:56 k8s-worker1 systemd[1]: Unit kubelet.service entered failed state.
Sep 02 21:19:56 k8s-worker1 systemd[1]: kubelet.service failed.
[root@k8s-master ~]# kubectl get pods --all-namespaces=true
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system coredns-78fcdf6894-79n2m 0/1 Pending 0 1d
kube-system coredns-78fcdf6894-tlngr 0/1 Pending 0 1d
kube-system etcd-k8s-master 1/1 Running 3 1d
kube-system kube-apiserver-k8s-master 1/1 Running 0 1d
kube-system kube-controller-manager-k8s-master 0/1 Evicted 0 1d
kube-system kube-proxy-2x8cx 1/1 Running 3 1d
kube-system kube-scheduler-k8s-master 1/1 Running 0 1d
[root@k8s-master ~]#
And these are the setup scripts I used on the master node:
#!/bin/sh
echo '192.168.1.30 k8s-master' >> /etc/hosts
echo '192.168.1.40 k8s-worker1' >> /etc/hosts
echo '192.168.1.50 k8s-worker2' >> /etc/hosts
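A minimal sanity check that the static host entries resolve as intended (a suggested verification step, not in the original script):
getent hosts k8s-master k8s-worker1 k8s-worker2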
#!/bin/sh
echo "Setting up internal Interface"
cat <<EOF > /etc/sysconfig/network-scripts/ifcfg-enp0s8
DEVICE=enp0s8
IPADDR=192.168.1.30
NETMASK=255.255.255.0
NETWORK=192.168.1.0
BROADCAST=192.168.1.255
ONBOOT=yes
NAME=enp0s8
EOF
echo "Activate interface"
ifup enp0s8
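To verify that the interface came up with the expected address (again only a suggested check):
ip addr show enp0s8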
#!/bin/sh
echo "Setting hostname und deactivate SELinux"
hostnamectl set-hostname 'k8s-master'
exec bash
setenforce 0
sed -i --follow-symlinks 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/sysconfig/selinux
echo "disable swap"
swapoff -a
echo "### You need to edit /etc/fstab and comment the swapline!! ###"
#!/bin/sh
echo "Setting Firewallrules"
firewall-cmd --permanent --add-port=6443/tcp
firewall-cmd --permanent --add-port=2379-2380/tcp
firewall-cmd --permanent --add-port=10250/tcp
firewall-cmd --permanent --add-port=10251/tcp
firewall-cmd --permanent --add-port=10252/tcp
firewall-cmd --permanent --add-port=10255/tcp
firewall-cmd --reload
echo "And enable br filtering"
modprobe br_netfilter
echo '1' > /proc/sys/net/bridge/bridge-nf-call-iptables
#!/bin/sh
echo "Adding kubernetes repo for download"
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg
https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF
#!/bin/sh
echo "install the Docker-ce dependencies"
yum install -y yum-utils device-mapper-persistent-data lvm2
echo "add docker-ce repository"
yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
echo "install docker ce"
yum install -y docker-ce
echo "Install kubeadm kubelet kubectl"
yum install kubelet kubeadm kubectl -y
echo "start and enable kubectl"
systemctl restart docker && systemctl enable docker
systemctl restart kubelet && systemctl enable kubelet
echo "Now we need to ensure that both Docker-ce and Kubernetes belong to the same control group (cgroup)"
echo "We assume that docker is using cgroupfs ... assuming kubelet does so too"
docker info | grep -i cgroup
grep -i cgroup /var/lib/kubelet/kubeadm-flags.env
# old style
# sed -i 's/cgroup-driver=systemd/cgroup-driver=cgroupfs/g' /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
systemctl daemon-reload
systemctl restart kubelet
# There have been reports of iptables routing bridged traffic incorrectly.
# The settings below make sure iptables sees bridged traffic.
#
sudo bash -c 'cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF'
# Make changes effective
sudo sysctl --system
#!/bin/sh
echo "Init kubernetes. Check join cmd in initProtocol.txt"
kubeadm init --apiserver-advertise-address=192.168.1.30 --pod-network-cidr=192.168.1.0/16 | tee initProtocol.txt
Init kubernetes. Check join cmd in initProtocol.txt
[init] using Kubernetes version: v1.11.2
[preflight] running pre-flight checks
[WARNING Firewalld]: firewalld is active, please ensure ports [6443 10250] are open or your cluster may not function correctly
I0904 21:53:15.271999 1526 kernel_validator.go:81] Validating kernel version
I0904 21:53:15.272165 1526 kernel_validator.go:96] Validating kernel config
[WARNING SystemVerification]: docker version is greater than the most recently validated version. Docker version: 18.06.1-ce. Max validated version: 17.03
[preflight/images] Pulling images required for setting up a Kubernetes cluster
[preflight/images] This might take a minute or two, depending on the speed of your internet connection
[preflight/images] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[preflight] Activating the kubelet service
[certificates] Generated ca certificate and key.
[certificates] Generated apiserver certificate and key.
[certificates] apiserver serving cert is signed for DNS names [k8s-master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.1.30]
[certificates] Generated apiserver-kubelet-client certificate and key.
[certificates] Generated sa key and public key.
[certificates] Generated front-proxy-ca certificate and key.
[certificates] Generated front-proxy-client certificate and key.
[certificates] Generated etcd/ca certificate and key.
[certificates] Generated etcd/server certificate and key.
[certificates] etcd/server serving cert is signed for DNS names [k8s-master localhost] and IPs [127.0.0.1 ::1]
[certificates] Generated etcd/peer certificate and key.
[certificates] etcd/peer serving cert is signed for DNS names [k8s-master localhost] and IPs [192.168.1.30 127.0.0.1 ::1]
[certificates] Generated etcd/healthcheck-client certificate and key.
[certificates] Generated apiserver-etcd-client certificate and key.
[certificates] valid certificates and keys now exist in "/etc/kubernetes/pki"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/scheduler.conf"
[controlplane] wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[controlplane] wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
[init] waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests"
[init] this might take a minute or longer if the control plane images have to be pulled
[apiclient] All control plane components are healthy after 43.504792 seconds
[uploadconfig] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.11" in namespace kube-system with the configuration for the kubelets in the cluster
[markmaster] Marking the node k8s-master as master by adding the label "node-role.kubernetes.io/master=''"
[markmaster] Marking the node k8s-master as master by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "k8s-master" as an annotation
[bootstraptoken] using token: n4yt3r.3c8tuj11nwszts2d
[bootstraptoken] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstraptoken] creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy
Your Kubernetes master has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
You can now join any number of machines by running the following on each node
as root:
kubeadm join 192.168.1.30:6443 --token n4yt3r.3c8tuj11nwszts2d --discovery-token-ca-cert-hash sha256:466e7972a4b6997651ac1197fdde68d325a7bc41f2fccc2b1efc17515af61172
#!/bin/bash
echo "Configure demo cluster usage as root"
mkdir -p $HOME/.kube
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
chown $(id -u):$(id -g) $HOME/.kube/config
# Deploy the pod network using flannel
# Taken from the first two matching tutorials found on the web
# kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
# kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/k8s-manifests/kube-flannel-rbac.yml
# taken from https://kubernetes.io/docs/setup/independent/create-cluster-kubeadm/#pod-network
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/v0.10.0/Documentation/kube-flannel.yml
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/c5d10c8/Documentation/kube-flannel.yml
echo "Try to run kubectl get pods --all-namespaces"
echo "After joining nodes: try to run kubectl get nodes to verify the status"
Configure demo cluster usage as root
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.extensions/kube-flannel-ds created
clusterrole.rbac.authorization.k8s.io/flannel configured
clusterrolebinding.rbac.authorization.k8s.io/flannel configured
serviceaccount/flannel unchanged
configmap/kube-flannel-cfg unchanged
daemonset.extensions/kube-flannel-ds-amd64 created
daemonset.extensions/kube-flannel-ds-arm64 created
daemonset.extensions/kube-flannel-ds-arm created
daemonset.extensions/kube-flannel-ds-ppc64le created
daemonset.extensions/kube-flannel-ds-s390x created
Try to run kubectl get pods --all-namespaces
After joining nodes: try to run kubectl get nodes to verify the status
[root@k8s-master nodesetup]# kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system coredns-78fcdf6894-pflhc 0/1 Pending 0 33m
kube-system coredns-78fcdf6894-w7dxg 0/1 Pending 0 33m
kube-system etcd-k8s-master 1/1 Running 0 27m
kube-system kube-apiserver-k8s-master 1/1 Running 0 27m
kube-system kube-controller-manager-k8s-master 0/1 Evicted 0 27m
kube-system kube-proxy-stfxm 1/1 Running 0 28m
kube-system kube-scheduler-k8s-master 1/1 Running 0 27m
[root@k8s-master nodesetup]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8s-master NotReady master 35m v1.11.2
2018-09-04 21:52:09 WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2 -t nat -D PREROUTING' failed: iptables: Bad rule (does a matching rule exist in that chain?).
2018-09-04 21:52:09 WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2 -t nat -D OUTPUT' failed: iptables: Bad rule (does a matching rule exist in that chain?).
2018-09-04 21:52:09 WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2 -t nat -F DOCKER' failed: iptables: No chain/target/match by that name.
2018-09-04 21:52:09 WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2 -t nat -X DOCKER' failed: iptables: No chain/target/match by that name.
2018-09-04 21:52:09 WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2 -t filter -F DOCKER' failed: iptables: No chain/target/match by that name.
2018-09-04 21:52:09 WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2 -t filter -X DOCKER' failed: iptables: No chain/target/match by that name.
2018-09-04 21:52:09 WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2 -t filter -F DOCKER-ISOLATION-STAGE-1' failed: iptables: No chain/target/match by that name.
2018-09-04 21:52:09 WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2 -t filter -X DOCKER-ISOLATION-STAGE-1' failed: iptables: No chain/target/match by that name.
2018-09-04 21:52:09 WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2 -t filter -F DOCKER-ISOLATION-STAGE-2' failed: iptables: No chain/target/match by that name.
2018-09-04 21:52:09 WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2 -t filter -X DOCKER-ISOLATION-STAGE-2' failed: iptables: No chain/target/match by that name.
2018-09-04 21:52:09 WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2 -t filter -F DOCKER-ISOLATION' failed: iptables: No chain/target/match by that name.
2018-09-04 21:52:09 WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2 -t filter -X DOCKER-ISOLATION' failed: iptables: No chain/target/match by that name.
2018-09-04 21:52:09 WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2 -t nat -n -L DOCKER' failed: iptables: No chain/target/match by that name.
2018-09-04 21:52:09 WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2 -t filter -n -L DOCKER' failed: iptables: No chain/target/match by that name.
2018-09-04 21:52:09 WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2 -t filter -n -L DOCKER-ISOLATION-STAGE-1' failed: iptables: No chain/target/match by that name.
2018-09-04 21:52:09 WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2 -t filter -n -L DOCKER-ISOLATION-STAGE-2' failed: iptables: No chain/target/match by that name.
2018-09-04 21:52:09 WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2 -t filter -C DOCKER-ISOLATION-STAGE-1 -j RETURN' failed: iptables: Bad rule (does a matching rule exist in that chain?).
[root@k8s-master ~]# kubectl get pods --all-namespaces=true
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system coredns-78fcdf6894-pflhc 0/1 Pending 0 10h
kube-system coredns-78fcdf6894-w7dxg 0/1 Pending 0 10h
kube-system etcd-k8s-master 1/1 Running 0 10h
kube-system kube-apiserver-k8s-master 1/1 Running 0 10h
kube-system kube-controller-manager-k8s-master 1/1 Running 0 10h
kube-system kube-flannel-ds-amd64-crljm 0/1 Pending 0 1s
kube-system kube-flannel-ds-v6gcx 0/1 Pending 0 0s
kube-system kube-proxy-l2dck 0/1 Pending 0 0s
kube-system kube-scheduler-k8s-master 1/1 Running 0 10h
[root@k8s-master ~]#
[root@k8s-master ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8s-master Ready master 10h v1.11.2
[root@k8s-master ~]#
Best answer
It looks like your kubeadm token has expired, judging by the kubelet log you attached:
Sep 02 21:19:56 k8s-worker1 kubelet[3082]: F0902 21:19:56.814469 3082 server.go:262] failed to run Kubelet: cannot create certificate signing request: Unauthorized
A bootstrap token stays valid for 24 hours after kubeadm init issues it; see the kubeadm token documentation for more information.
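On the master, the token state can be checked and, if it has expired, a fresh join command generated with the standard kubeadm commands (shown here as a sketch):
kubeadm token list
kubeadm token create --print-join-command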
Also, since your CoreDNS pods are stuck in the Pending state, please check the kubeadm troubleshooting document to verify that a Pod network provider has been installed in your cluster.
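One way to check whether the flannel add-on is actually running (assuming the applied manifests label their pods with app=flannel, as the upstream kube-flannel.yml does):
kubectl -n kube-system get pods -l app=flannel -o wide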
I would suggest regenerating the kubeadm token and bootstrapping the cluster nodes again from scratch.
Regarding "kubernetes - kubeadm join fails with http://localhost:10248/healthz connection refused", a similar question can be found on Stack Overflow: https://stackoverflow.com/questions/52140852/