
docker - Accidentally drained all nodes in Kubernetes (even the masters). How do I bring Kubernetes back?

Reposted. Author: 行者123. Updated: 2023-12-02 12:00:48

I accidentally drained all the nodes in my Kubernetes cluster (even the masters). How can I bring Kubernetes back? kubectl no longer works:

kubectl get nodes
Result:
The connection to the server 172.16.16.111:6443 was refused - did you specify the right host or port?
Here is the output of systemctl status kubelet on the master node (node1):
● kubelet.service - Kubernetes Kubelet Server
Loaded: loaded (/etc/systemd/system/kubelet.service; enabled; vendor preset: enabled)
Active: active (running) since Tue 2020-06-23 21:42:39 UTC; 25min ago
Docs: https://github.com/GoogleCloudPlatform/kubernetes
Main PID: 15541 (kubelet)
Tasks: 0 (limit: 4915)
CGroup: /system.slice/kubelet.service
└─15541 /usr/local/bin/kubelet --logtostderr=true --v=2 --node-ip=172.16.16.111 --hostname-override=node1 --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/etc/kubernetes/kubelet-config.yaml --kubeconfig=/etc/kubernetes/kubelet.conf --pod-infra-container-image=gcr.io/google_containers/pause-amd64:3.1 --runtime-cgroups=/systemd/system.slice --cpu-manager-policy=static --kube-reserved=cpu=1,memory=2Gi,ephemeral-storage=1Gi --system-reserved=cpu=1,memory=2Gi,ephemeral-storage=1Gi --network-plugin=cni --cni-conf-dir=/etc/cni/net.d --cni-bin-dir=/opt/cni/bin

Jun 23 22:08:34 node1 kubelet[15541]: I0623 22:08:34.330009 15541 kubelet_node_status.go:286] Setting node annotation to enable volume controller attach/detach
Jun 23 22:08:34 node1 kubelet[15541]: I0623 22:08:34.330201 15541 setters.go:73] Using node IP: "172.16.16.111"
Jun 23 22:08:34 node1 kubelet[15541]: I0623 22:08:34.331475 15541 kubelet_node_status.go:472] Recording NodeHasSufficientMemory event message for node node1
Jun 23 22:08:34 node1 kubelet[15541]: I0623 22:08:34.331494 15541 kubelet_node_status.go:472] Recording NodeHasNoDiskPressure event message for node node1
Jun 23 22:08:34 node1 kubelet[15541]: I0623 22:08:34.331500 15541 kubelet_node_status.go:472] Recording NodeHasSufficientPID event message for node node1
Jun 23 22:08:34 node1 kubelet[15541]: I0623 22:08:34.331661 15541 policy_static.go:244] [cpumanager] static policy: RemoveContainer (container id: 6dd59735cabf973b6d8b2a46a14c0711831daca248e918bfcfe2041420931963)
Jun 23 22:08:34 node1 kubelet[15541]: E0623 22:08:34.332058 15541 pod_workers.go:191] Error syncing pod 93ff1a9840f77f8b2b924a85815e17fe ("kube-apiserver-node1_kube-system(93ff1a9840f77f8b2b924a85815e17fe)"), skipping: failed to "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 5m0s restarting failed container=kube-apiserver pod=kube-apiserver-node1_kube-system(93ff1a9840f77f8b2b924a85815e17fe)"
Jun 23 22:08:34 node1 kubelet[15541]: E0623 22:08:34.427587 15541 kubelet.go:2267] node "node1" not found
Jun 23 22:08:34 node1 kubelet[15541]: E0623 22:08:34.506152 15541 reflector.go:123] k8s.io/kubernetes/pkg/kubelet/kubelet.go:450: Failed to list *v1.Service: Get https://172.16.16.111:6443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.16.16.111:6443: connect: connection refused
Jun 23 22:08:34 node1 kubelet[15541]: E0623 22:08:34.527813 15541 kubelet.go:2267] node "node1" not found
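The CrashLoopBackOff on kube-apiserver in the logs above can be inspected directly with Docker on the master, since the control plane runs as static-pod containers. This is a sketch; the `k8s_kube-apiserver` name filter assumes the default naming that kubelet uses with the Docker runtime:

```shell
# On the master (node1): list kube-apiserver containers, including exited ones
docker ps -a --filter "name=k8s_kube-apiserver" \
  --format "table {{.Names}}\t{{.Status}}"

# Show the last lines of the most recent kube-apiserver container's logs
# to see why it actually crashed (e.g. cannot reach etcd)
docker logs --tail 50 "$(docker ps -aq --filter 'name=k8s_kube-apiserver' | head -n 1)"
```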
I am using Ubuntu 18.04 and have 7 compute nodes in the cluster. All of them are drained (accidentally, sort of!). I installed the K8s cluster using Kubespray.
Is there any way to uncordon any of these nodes, so that the necessary Kubernetes pods can be scheduled?
Any help would be appreciated.
Update:
I asked a separate question about how to connect to etcd here: Can't connect to the ETCD of Kubernetes

Best Answer

If you have production or "live" workloads, the safest approach is to provision a new cluster and switch the workloads over gradually.
Kubernetes keeps its state in etcd, so in principle you could connect to etcd and clear the "drained" state, but you would probably have to look at the source code to see where that happens and which specific keys/values are stored in etcd.
The logs you shared basically show that kube-apiserver cannot start, so it's likely trying to connect to etcd on startup, and etcd is telling it: "you cannot start on this node because it has been drained."
The typical startup order on a master is as follows:

  • etcd
  • kube-apiserver
  • kube-controller-manager
  • kube-scheduler
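Once the API server is reachable again, the drained state itself is just the `unschedulable` flag on each Node object, so it can be cleared with `kubectl uncordon`. A minimal sketch, assuming a working kubeconfig:

```shell
# Clear the unschedulable flag on every node so pods can be scheduled again
for n in $(kubectl get nodes -o name); do
  kubectl uncordon "$n"
done

# Verify that no node still shows SchedulingDisabled
kubectl get nodes
```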

  • You can also follow any guide for connecting to etcd and see whether you can troubleshoot further, for example this one. Then you can inspect/delete some of the node keys yourself:
    /registry/minions/node-x1
    /registry/minions/node-x2
    /registry/minions/node-x3
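As a sketch of inspecting those keys with etcdctl (the endpoint and certificate paths below are assumptions; Kubespray typically places etcd TLS material under /etc/ssl/etcd/ssl/, so adjust them to your installation):

```shell
# Use the etcd v3 API; cert/key paths are assumptions for a Kubespray install
export ETCDCTL_API=3

# List the node keys under /registry/minions to see what is stored there
etcdctl --endpoints=https://172.16.16.111:2379 \
  --cacert=/etc/ssl/etcd/ssl/ca.pem \
  --cert=/etc/ssl/etcd/ssl/admin-node1.pem \
  --key=/etc/ssl/etcd/ssl/admin-node1-key.pem \
  get /registry/minions --prefix --keys-only
```

Deleting a key with `etcdctl del` removes the Node object entirely, so treat that as a last resort and back up etcd first.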

Regarding docker - Accidentally drained all nodes in Kubernetes (even the masters). How do I bring Kubernetes back?, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/62544534/
