
kubernetes - Kubeadm init fails while starting the control plane (kubelet not running or healthy)


I am using CentOS 7 and bootstrapping the Kubernetes control plane on the master node with the following command:

kubeadm init --pod-network-cidr=192.168.0.0/16 -v=5

The command fails with the following error:

I1124 11:11:51.842474    5446 waitcontrolplane.go:87] [wait-control-plane] Waiting for the API server to be healthy
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.

Unfortunately, an error has occurred:
timed out waiting for the condition

This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'

Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.

Here is one example of how you may list all Kubernetes containers running in docker:
- 'docker ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'docker logs CONTAINERID'

couldn't initialize a Kubernetes cluster
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/init.runWaitControlPlanePhase
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/init/waitcontrolplane.go:114
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).Run.func1
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow/runner.go:234
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).visitAll
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow/runner.go:422
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).Run
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow/runner.go:207

Output of systemctl status kubelet:

[root@vm1 centos]# systemctl status kubelet
● kubelet.service - kubelet: The Kubernetes Node Agent
Loaded: loaded (/usr/lib/systemd/system/kubelet.service; enabled; vendor preset: disabled)
Drop-In: /usr/lib/systemd/system/kubelet.service.d
└─10-kubeadm.conf
Active: activating (auto-restart) (Result: exit-code) since Wed 2020-11-25 07:28:41 UTC; 9s ago
Docs: https://kubernetes.io/docs/
Process: 4634 ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_KUBEADM_ARGS $KUBELET_EXTRA_ARGS (code=exited, status=255)
Main PID: 4634 (code=exited, status=255)

Nov 25 07:28:42 vm1.novalocal kubelet[4634]: goroutine 509 [select]:
Nov 25 07:28:42 vm1.novalocal kubelet[4634]: k8s.io/kubernetes/vendor/github.com/google/cadvisor/manager.(*manager).globalHousekeeping(0xc00045...0dff140)
Nov 25 07:28:42 vm1.novalocal kubelet[4634]: /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8...5 +0x145
Nov 25 07:28:42 vm1.novalocal kubelet[4634]: created by k8s.io/kubernetes/vendor/github.com/google/cadvisor/manager.(*manager).Start
Nov 25 07:28:42 vm1.novalocal kubelet[4634]: /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8...9 +0x4b9
Nov 25 07:28:42 vm1.novalocal kubelet[4634]: goroutine 510 [select]:
Nov 25 07:28:42 vm1.novalocal kubelet[4634]: k8s.io/kubernetes/vendor/github.com/google/cadvisor/manager.(*manager).updateMachineInfo(0xc000453...0dff1a0)
Nov 25 07:28:42 vm1.novalocal kubelet[4634]: /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8...57 +0xd4
Nov 25 07:28:42 vm1.novalocal kubelet[4634]: created by k8s.io/kubernetes/vendor/github.com/google/cadvisor/manager.(*manager).Start
Nov 25 07:28:42 vm1.novalocal kubelet[4634]: /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8...3 +0x53b
Hint: Some lines were ellipsized, use -l to show in full.

Output of the kubelet log:

Nov 25 07:33:08 vm1.novalocal kubelet[9576]: /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/google/cadvisor/manager/manager.go:319 +0x4b9
Nov 25 07:33:08 vm1.novalocal kubelet[9576]: goroutine 507 [select]:
Nov 25 07:33:08 vm1.novalocal kubelet[9576]: k8s.io/kubernetes/vendor/github.com/google/cadvisor/manager.(*manager).updateMachineInfo(0xc000732c80, 0xc000df1800)
Nov 25 07:33:08 vm1.novalocal kubelet[9576]: /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/google/cadvisor/manager/manager.go:357 +0xd4
Nov 25 07:33:08 vm1.novalocal kubelet[9576]: created by k8s.io/kubernetes/vendor/github.com/google/cadvisor/manager.(*manager).Start
Nov 25 07:33:08 vm1.novalocal kubelet[9576]: /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/google/cadvisor/manager/manager.go:323 +0x53b

Docker and kubectl versions:

[root@vm1 centos]# kubectl version
Client Version: version.Info{Major:"1", Minor:"19", GitVersion:"v1.19.4", GitCommit:"d360454c9bcd1634cf4cc52d1867af5491dc9c5f", GitTreeState:"clean", BuildDate:"2020-11-11T13:17:17Z", GoVersion:"go1.15.2", Compiler:"gc", Platform:"linux/amd64"}
The connection to the server localhost:8080 was refused - did you specify the right host or port?
[root@vm1 centos]# docker version
Client: Docker Engine - Community
Version: 19.03.13
API version: 1.40
Go version: go1.13.15
Git commit: 4484c46d9d
Built: Wed Sep 16 17:03:45 2020
OS/Arch: linux/amd64
Experimental: false

Docker info:

[root@vm1 centos]# docker info
Client:
Debug Mode: false

Server:
Containers: 1
Running: 0
Paused: 0
Stopped: 1
Images: 11
Server Version: 19.03.13
Storage Driver: devicemapper
Pool Name: docker-253:1-41943508-pool
Pool Blocksize: 65.54kB
Base Device Size: 10.74GB
Backing Filesystem: xfs
Udev Sync Supported: true
Data file: /dev/loop0
Metadata file: /dev/loop1
Data loop file: /var/lib/docker/devicemapper/devicemapper/data
Metadata loop file: /var/lib/docker/devicemapper/devicemapper/metadata
Data Space Used: 1.144GB
Data Space Total: 107.4GB
Data Space Available: 39.08GB
Metadata Space Used: 1.54MB
Metadata Space Total: 2.147GB
Metadata Space Available: 2.146GB
Thin Pool Minimum Free Space: 10.74GB
Deferred Removal Enabled: true
Deferred Deletion Enabled: true
Deferred Deleted Device Count: 0
Library Version: 1.02.170-RHEL7 (2020-03-24)
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
Volume: local
Network: bridge host ipvlan macvlan null overlay
Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog
Swarm: inactive
Runtimes: runc
Default Runtime: runc
Init Binary: docker-init
containerd version: 8fba4e9a7d01810a393d5d25a3621dc101981175
runc version: dc9208a3303feef5b3839f4323d9beb36df0a9dd
init version: fec3683
Security Options:
seccomp
Profile: default
Kernel Version: 3.10.0-229.7.2.el7.x86_64
Operating System: CentOS Linux 7 (Core)
OSType: linux
Architecture: x86_64
CPUs: 2
Total Memory: 3.703GiB
Name: vm1.novalocal
ID: OJIR:5IGM:GPJA:D4ZC:7UU6:SQUP:I424:JMAL:LNL5:EQB7:DKFH:XPSB
Docker Root Dir: /var/lib/docker
Debug Mode: false
Registry: https://index.docker.io/v1/
Labels:
Experimental: false
Insecure Registries:
127.0.0.0/8
Live Restore Enabled: false

WARNING: the devicemapper storage-driver is deprecated, and will be removed in a future release.
WARNING: devicemapper: usage of loopback devices is strongly discouraged for production use.
Use `--storage-opt dm.thinpooldev` to specify a custom block storage device.

Can someone help me fix this? The kubelet is clearly not working properly, but I don't know how to investigate or fix it.

Best answer

You have a very old Linux kernel, released in 2015 (the docker info above reports 3.10.0-229.7.2.el7). No containers will work with it. Upgrade the kernel, reinstall the Docker and Kubernetes tools, and try again.
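As a rough sketch of what that advice might look like on CentOS 7 (assumptions: the ELRepo repository and its kernel-lt long-term-support package are used to obtain a newer kernel; adjust the repository and package choice to your environment):

# Check the running kernel (the docker info above reports 3.10.0-229.7.2.el7)
uname -r

# Install a newer kernel from ELRepo (one common option, not the only one)
rpm --import https://www.elrepo.org/RPM-GPG-KEY-elrepo.org
yum install -y https://www.elrepo.org/elrepo-release-7.el7.elrepo.noarch.rpm
yum --enablerepo=elrepo-kernel install -y kernel-lt

# Make the newly installed kernel the default boot entry, then reboot
grub2-set-default 0
reboot

# After reinstalling the Docker and Kubernetes packages, reset the
# partially initialized node and re-run the original init command
kubeadm reset -f
kubeadm init --pod-network-cidr=192.168.0.0/16 -v=5

The kubeadm reset step is needed because the earlier failed kubeadm init run already wrote manifests and certificates under /etc/kubernetes.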

Regarding "kubernetes - Kubeadm init fails while starting the control plane (kubelet not running or healthy)", a similar question can be found on Stack Overflow: https://stackoverflow.com/questions/64985452/
