
Kubernetes calico node CrashLoopBackOff

Reposted. Author: 行者123. Updated: 2023-12-02 12:21:34

Although there are a few questions like mine, the proposed fixes did not work for me.
I am using the Kubernetes v1.9.3 binaries, with flannel and Calico, to set up a Kubernetes cluster. After applying the Calico YAML file, it gets stuck on creating the second pod.
What am I doing wrong? The logs do not make it clear what the problem is.
kubectl get pods --all-namespaces

root@kube-master01:/home/john/cookem/kubeadm-ha# kubectl logs calico-node-n87l7 --namespace=kube-system
Error from server (BadRequest): a container name must be specified for pod calico-node-n87l7, choose one of: [calico-node install-cni]
root@kube-master01:/home/john/cookem/kubeadm-ha# kubectl logs calico-node-n87l7 --namespace=kube-system install-cni
Installing any TLS assets from /calico-secrets
cp: can't stat '/calico-secrets/*': No such file or directory
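The `cp: can't stat '/calico-secrets/*'` line means the Secret volume mounted at /calico-secrets is empty: the shell glob `/calico-secrets/*` matches nothing, so cp receives the literal pattern and exits non-zero. A minimal, self-contained reproduction of that failure mode (using a temporary directory in place of the real mount):

```shell
# Reproduce the install-cni failure: copying a glob from an empty directory
# (standing in for the empty /calico-secrets Secret mount) fails with
# "can't stat", exactly as in the pod log above.
workdir=$(mktemp -d)
mkdir -p "$workdir/calico-secrets" "$workdir/dest"

# The glob expands to nothing, cp gets the literal '*' path and fails.
if cp "$workdir/calico-secrets/"* "$workdir/dest/" 2>/dev/null; then
  result="copy succeeded"
else
  result="copy failed: secret volume is empty"
fi
echo "$result"
rm -rf "$workdir"
```

In a real cluster, the same symptom would show up if the `calico-etcd-secrets` Secret referenced by the manifest exists but carries no data.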
kubectl describe pod calico-node-n87l7 returns:
Name:         calico-node-n87l7
Namespace: kube-system
Node: kube-master01/10.100.102.62
Start Time: Thu, 22 Feb 2018 15:21:38 +0100
Labels: controller-revision-hash=653023576
k8s-app=calico-node
pod-template-generation=1
Annotations: scheduler.alpha.kubernetes.io/critical-pod=
scheduler.alpha.kubernetes.io/tolerations=[{"key": "dedicated", "value": "master", "effect": "NoSchedule" },
{"key":"CriticalAddonsOnly", "operator":"Exists"}]

Status: Running
IP: 10.100.102.62
Controlled By: DaemonSet/calico-node
Containers:
calico-node:
Container ID: docker://6024188a667d98a209078b6a252505fa4db42124800baaf3a61e082ae2476147
Image: quay.io/calico/node:v3.0.1
Image ID: docker-pullable://quay.io/calico/node@sha256:e32b65742e372e2a4a06df759ee2466f4de1042e01588bea4d4df3f6d26d0581
Port: <none>
State: Running
Started: Thu, 22 Feb 2018 15:21:40 +0100
Ready: True
Restart Count: 0
Requests:
cpu: 250m
Liveness: http-get http://:9099/liveness delay=10s timeout=1s period=10s #success=1 #failure=6
Readiness: http-get http://:9099/readiness delay=0s timeout=1s period=10s #success=1 #failure=3
Environment:
ETCD_ENDPOINTS: <set to the key 'etcd_endpoints' of config map 'calico-config'> Optional: false
CALICO_NETWORKING_BACKEND: <set to the key 'calico_backend' of config map 'calico-config'> Optional: false
CLUSTER_TYPE: k8s,bgp
CALICO_DISABLE_FILE_LOGGING: true
CALICO_K8S_NODE_REF: (v1:spec.nodeName)
FELIX_DEFAULTENDPOINTTOHOSTACTION: ACCEPT
CALICO_IPV4POOL_CIDR: 10.244.0.0/16
CALICO_IPV4POOL_IPIP: Always
FELIX_IPV6SUPPORT: false
FELIX_LOGSEVERITYSCREEN: info
FELIX_IPINIPMTU: 1440
ETCD_CA_CERT_FILE: <set to the key 'etcd_ca' of config map 'calico-config'> Optional: false
ETCD_KEY_FILE: <set to the key 'etcd_key' of config map 'calico-config'> Optional: false
ETCD_CERT_FILE: <set to the key 'etcd_cert' of config map 'calico-config'> Optional: false
IP: autodetect
IP_AUTODETECTION_METHOD: can-reach=10.100.102.0
FELIX_HEALTHENABLED: true
Mounts:
/calico-secrets from etcd-certs (rw)
/lib/modules from lib-modules (ro)
/var/run/calico from var-run-calico (rw)
/var/run/secrets/kubernetes.io/serviceaccount from calico-node-token-p7d9n (ro)
install-cni:
Container ID: docker://d9fd7a0f3fa9364c9a104c8482e3d86fc877e3f06f47570d28cd1b296303a960
Image: quay.io/calico/cni:v2.0.0
Image ID: docker-pullable://quay.io/calico/cni@sha256:ddb91b6fb7d8136d75e828e672123fdcfcf941aad61f94a089d10eff8cd95cd0
Port: <none>
Command:
/install-cni.sh
State: Waiting
Reason: CrashLoopBackOff
Last State: Terminated
Reason: Error
Exit Code: 1
Started: Thu, 22 Feb 2018 15:53:16 +0100
Finished: Thu, 22 Feb 2018 15:53:16 +0100
Ready: False
Restart Count: 11
Environment:
CNI_CONF_NAME: 10-calico.conflist
ETCD_ENDPOINTS: <set to the key 'etcd_endpoints' of config map 'calico-config'> Optional: false
CNI_NETWORK_CONFIG: <set to the key 'cni_network_config' of config map 'calico-config'> Optional: false
Mounts:
/calico-secrets from etcd-certs (rw)
/host/etc/cni/net.d from cni-net-dir (rw)
/host/opt/cni/bin from cni-bin-dir (rw)
/var/run/secrets/kubernetes.io/serviceaccount from calico-node-token-p7d9n (ro)
Conditions:
Type Status
Initialized True
Ready False
PodScheduled True
Volumes:
lib-modules:
Type: HostPath (bare host directory volume)
Path: /lib/modules
HostPathType:
var-run-calico:
Type: HostPath (bare host directory volume)
Path: /var/run/calico
HostPathType:
cni-bin-dir:
Type: HostPath (bare host directory volume)
Path: /opt/cni/bin
HostPathType:
cni-net-dir:
Type: HostPath (bare host directory volume)
Path: /etc/cni/net.d
HostPathType:
etcd-certs:
Type: Secret (a volume populated by a Secret)
SecretName: calico-etcd-secrets
Optional: false
calico-node-token-p7d9n:
Type: Secret (a volume populated by a Secret)
SecretName: calico-node-token-p7d9n
Optional: false
QoS Class: Burstable
Node-Selectors: <none>
Tolerations: node.kubernetes.io/disk-pressure:NoSchedule
node.kubernetes.io/memory-pressure:NoSchedule
node.kubernetes.io/not-ready:NoExecute
node.kubernetes.io/unreachable:NoExecute
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal SuccessfulMountVolume 34m kubelet, kube-master01 MountVolume.SetUp succeeded for volume "cni-net-dir"
Normal SuccessfulMountVolume 34m kubelet, kube-master01 MountVolume.SetUp succeeded for volume "var-run-calico"
Normal SuccessfulMountVolume 34m kubelet, kube-master01 MountVolume.SetUp succeeded for volume "cni-bin-dir"
Normal SuccessfulMountVolume 34m kubelet, kube-master01 MountVolume.SetUp succeeded for volume "lib-modules"
Normal SuccessfulMountVolume 34m kubelet, kube-master01 MountVolume.SetUp succeeded for volume "calico-node-token-p7d9n"
Normal SuccessfulMountVolume 34m kubelet, kube-master01 MountVolume.SetUp succeeded for volume "etcd-certs"
Normal Created 34m kubelet, kube-master01 Created container
Normal Pulled 34m kubelet, kube-master01 Container image "quay.io/calico/node:v3.0.1" already present on machine
Normal Started 34m kubelet, kube-master01 Started container
Normal Started 34m (x3 over 34m) kubelet, kube-master01 Started container
Normal Pulled 33m (x4 over 34m) kubelet, kube-master01 Container image "quay.io/calico/cni:v2.0.0" already present on machine
Normal Created 33m (x4 over 34m) kubelet, kube-master01 Created container
Warning BackOff 4m (x139 over 34m) kubelet, kube-master01 Back-off restarting failed container

Accepted answer

I have solved this issue. In my case, the problem was that the master and worker nodes were using the same IP address.

I created two Ubuntu VMs: one VM for the K8s master and another for the worker node.
Each VM was configured with two NAT and two bridged interfaces.
The NAT interfaces received the same IP address in both VMs:

enp0s3: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
inet 10.0.2.15 netmask 255.255.255.0 broadcast 10.0.2.255
inet6 fe80::a00:27ff:fe15:67e prefixlen 64 scopeid 0x20<link>
ether 08:00:27:15:06:7e txqueuelen 1000 (Ethernet)
RX packets 1506 bytes 495894 (495.8 KB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 1112 bytes 128692 (128.6 KB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

Now, when I created the Calico node with the following commands, both the master and worker nodes used the same interface/IP, namely enp0s3:
sudo kubectl apply -f https://docs.projectcalico.org/v3.3/getting-started/kubernetes/installation/hosted/rbac-kdd.yaml

sudo kubectl apply -f https://docs.projectcalico.org/v3.3/getting-started/kubernetes/installation/hosted/kubernetes-datastore/calico-networking/1.7/calico.yaml
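With these manifests, calico-node autodetects the node IP; when every VM's first interface is an identical NAT address, autodetection can pick the same IP on every node. A hedged workaround, besides fixing the addresses themselves, is to pin Calico's IP autodetection to the unique bridged interface via the `IP_AUTODETECTION_METHOD` environment variable in the calico-node DaemonSet (enp0s8 is an assumption here; substitute your bridged interface's name):

```
# Fragment of the calico-node DaemonSet container env (sketch):
- name: IP_AUTODETECTION_METHOD
  value: "interface=enp0s8"
```

This replaces the `can-reach=...` method visible in the `kubectl describe` output above with an explicit interface match.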

How I found out:

Check the log files under the following directories and try to determine whether the nodes are using the same IP address:
/var/log/container/
/var/log/pod/<failed_pod_id>/
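The check described above boils down to looking for duplicate node IPs. A minimal sketch (the IP list is hard-coded sample data mirroring the ifconfig output above; in a real cluster you would feed it the INTERNAL-IP column of `kubectl get nodes -o wide` or the addresses found in the pod logs):

```shell
# Sketch: detect duplicate node IPs from a newline-separated list.
# Sample data below assumes both VMs got 10.0.2.15 from their NAT interface.
node_ips="10.0.2.15
10.0.2.15"

# uniq -d prints only the lines that occur more than once.
dup=$(printf '%s\n' "$node_ips" | sort | uniq -d)
if [ -n "$dup" ]; then
  echo "duplicate node IP(s): $dup"
else
  echo "all node IPs unique"
fi
```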

How to fix it:

Make sure the master and worker nodes use different IPs. You can either disable NAT in the VMs or assign static, unique IP addresses.
Then restart the systems.
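On an Ubuntu guest, assigning a static, unique address can be done with netplan. This is a hypothetical sketch, not the poster's actual config: the file name, interface name (enp0s8), addresses, and gateway are all assumptions to adapt to your network:

```
# /etc/netplan/01-static.yaml (sketch) -- give the worker a unique static IP
# on the bridged interface, then apply with `sudo netplan apply`.
network:
  version: 2
  ethernets:
    enp0s8:
      dhcp4: false
      addresses: [10.100.102.63/24]
      gateway4: 10.100.102.1
```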

Regarding Kubernetes calico node CrashLoopBackOff, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/48930555/
