I am using CentOS 7 and bootstrapping the Kubernetes control plane on the master node with the following command:
kubeadm init --pod-network-cidr=192.168.0.0/16 -v=5
Below is the error:
I1124 11:11:51.842474 5446 waitcontrolplane.go:87] [wait-control-plane] Waiting for the API server to be healthy
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
Unfortunately, an error has occurred:
timed out waiting for the condition
This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example of how you may list all Kubernetes containers running in docker:
- 'docker ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'docker logs CONTAINERID'
couldn't initialize a Kubernetes cluster
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/init.runWaitControlPlanePhase
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/init/waitcontrolplane.go:114
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).Run.func1
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow/runner.go:234
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).visitAll
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow/runner.go:422
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).Run
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow/runner.go:207
systemctl status kubelet output
[root@vm1 centos]# systemctl status kubelet
● kubelet.service - kubelet: The Kubernetes Node Agent
Loaded: loaded (/usr/lib/systemd/system/kubelet.service; enabled; vendor preset: disabled)
Drop-In: /usr/lib/systemd/system/kubelet.service.d
└─10-kubeadm.conf
Active: activating (auto-restart) (Result: exit-code) since Wed 2020-11-25 07:28:41 UTC; 9s ago
Docs: https://kubernetes.io/docs/
Process: 4634 ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_KUBEADM_ARGS $KUBELET_EXTRA_ARGS (code=exited, status=255)
Main PID: 4634 (code=exited, status=255)
Nov 25 07:28:42 vm1.novalocal kubelet[4634]: goroutine 509 [select]:
Nov 25 07:28:42 vm1.novalocal kubelet[4634]: k8s.io/kubernetes/vendor/github.com/google/cadvisor/manager.(*manager).globalHousekeeping(0xc00045...0dff140)
Nov 25 07:28:42 vm1.novalocal kubelet[4634]: /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8...5 +0x145
Nov 25 07:28:42 vm1.novalocal kubelet[4634]: created by k8s.io/kubernetes/vendor/github.com/google/cadvisor/manager.(*manager).Start
Nov 25 07:28:42 vm1.novalocal kubelet[4634]: /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8...9 +0x4b9
Nov 25 07:28:42 vm1.novalocal kubelet[4634]: goroutine 510 [select]:
Nov 25 07:28:42 vm1.novalocal kubelet[4634]: k8s.io/kubernetes/vendor/github.com/google/cadvisor/manager.(*manager).updateMachineInfo(0xc000453...0dff1a0)
Nov 25 07:28:42 vm1.novalocal kubelet[4634]: /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8...57 +0xd4
Nov 25 07:28:42 vm1.novalocal kubelet[4634]: created by k8s.io/kubernetes/vendor/github.com/google/cadvisor/manager.(*manager).Start
Nov 25 07:28:42 vm1.novalocal kubelet[4634]: /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8...3 +0x53b
Hint: Some lines were ellipsized, use -l to show in full.
kubelet log output
Nov 25 07:33:08 vm1.novalocal kubelet[9576]: /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/google/cadvisor/manager/manager.go:319 +0x4b9
Nov 25 07:33:08 vm1.novalocal kubelet[9576]: goroutine 507 [select]:
Nov 25 07:33:08 vm1.novalocal kubelet[9576]: k8s.io/kubernetes/vendor/github.com/google/cadvisor/manager.(*manager).updateMachineInfo(0xc000732c80, 0xc000df1800)
Nov 25 07:33:08 vm1.novalocal kubelet[9576]: /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/google/cadvisor/manager/manager.go:357 +0xd4
Nov 25 07:33:08 vm1.novalocal kubelet[9576]: created by k8s.io/kubernetes/vendor/github.com/google/cadvisor/manager.(*manager).Start
Nov 25 07:33:08 vm1.novalocal kubelet[9576]: /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/google/cadvisor/manager/manager.go:323 +0x53b
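The goroutine dump above is truncated and never shows the kubelet's actual exit reason. A minimal sketch of how the full, non-ellipsized error lines could be pulled from systemd and the journal (standard systemctl/journalctl options; the grep keywords are only an assumed filter):
# Show the full kubelet unit status without ellipsizing long lines
systemctl status kubelet -l --no-pager
# Dump recent kubelet journal entries in full; the keyword filter below is only a guess
journalctl -u kubelet --no-pager -l | grep -iE 'error|failed|fatal' | tail -n 50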
Docker and kubectl versions
[root@vm1 centos]# kubectl version
Client Version: version.Info{Major:"1", Minor:"19", GitVersion:"v1.19.4", GitCommit:"d360454c9bcd1634cf4cc52d1867af5491dc9c5f", GitTreeState:"clean", BuildDate:"2020-11-11T13:17:17Z", GoVersion:"go1.15.2", Compiler:"gc", Platform:"linux/amd64"}
The connection to the server localhost:8080 was refused - did you specify the right host or port?
[root@vm1 centos]# docker version
Client: Docker Engine - Community
Version: 19.03.13
API version: 1.40
Go version: go1.13.15
Git commit: 4484c46d9d
Built: Wed Sep 16 17:03:45 2020
OS/Arch: linux/amd64
Experimental: false
Docker info
[root@vm1 centos]# docker info
Client:
Debug Mode: false
Server:
Containers: 1
Running: 0
Paused: 0
Stopped: 1
Images: 11
Server Version: 19.03.13
Storage Driver: devicemapper
Pool Name: docker-253:1-41943508-pool
Pool Blocksize: 65.54kB
Base Device Size: 10.74GB
Backing Filesystem: xfs
Udev Sync Supported: true
Data file: /dev/loop0
Metadata file: /dev/loop1
Data loop file: /var/lib/docker/devicemapper/devicemapper/data
Metadata loop file: /var/lib/docker/devicemapper/devicemapper/metadata
Data Space Used: 1.144GB
Data Space Total: 107.4GB
Data Space Available: 39.08GB
Metadata Space Used: 1.54MB
Metadata Space Total: 2.147GB
Metadata Space Available: 2.146GB
Thin Pool Minimum Free Space: 10.74GB
Deferred Removal Enabled: true
Deferred Deletion Enabled: true
Deferred Deleted Device Count: 0
Library Version: 1.02.170-RHEL7 (2020-03-24)
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
Volume: local
Network: bridge host ipvlan macvlan null overlay
Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog
Swarm: inactive
Runtimes: runc
Default Runtime: runc
Init Binary: docker-init
containerd version: 8fba4e9a7d01810a393d5d25a3621dc101981175
runc version: dc9208a3303feef5b3839f4323d9beb36df0a9dd
init version: fec3683
Security Options:
seccomp
Profile: default
Kernel Version: 3.10.0-229.7.2.el7.x86_64
Operating System: CentOS Linux 7 (Core)
OSType: linux
Architecture: x86_64
CPUs: 2
Total Memory: 3.703GiB
Name: vm1.novalocal
ID: OJIR:5IGM:GPJA:D4ZC:7UU6:SQUP:I424:JMAL:LNL5:EQB7:DKFH:XPSB
Docker Root Dir: /var/lib/docker
Debug Mode: false
Registry: https://index.docker.io/v1/
Labels:
Experimental: false
Insecure Registries:
127.0.0.0/8
Live Restore Enabled: false
WARNING: the devicemapper storage-driver is deprecated, and will be removed in a future release.
WARNING: devicemapper: usage of loopback devices is strongly discouraged for production use.
Use `--storage-opt dm.thinpooldev` to specify a custom block storage device.
Can someone help me with this? The kubelet is clearly not working properly, but I don't know how to investigate and fix it...
Best answer
You have a very old Linux kernel, released in 2015 (3.10.0-229, as shown in the docker info output above).
No container runtime will work properly on it.
Upgrade the kernel, reinstall the Docker and Kubernetes tooling, and try again.
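A minimal sketch of one way to do that on CentOS 7, assuming the newer long-term kernel comes from the ELRepo repository and that Docker and kubeadm were installed from their usual yum repositories; the repository URL and package names below are assumptions, so verify them against your environment:
# Check the currently running kernel (3.10.0-229.* here, released in 2015)
uname -r
# Install a newer long-term-support kernel from ELRepo (assumed repository URL)
yum install -y https://www.elrepo.org/elrepo-release-7.el7.elrepo.noarch.rpm
yum --enablerepo=elrepo-kernel install -y kernel-lt
# Make the newly installed kernel the default boot entry, then reboot
grub2-set-default 0
grub2-mkconfig -o /boot/grub2/grub.cfg
reboot
# After rebooting into the new kernel: reset the failed kubeadm attempt,
# reinstall the tooling (package names assumed), and retry the original init
kubeadm reset -f
yum reinstall -y docker-ce kubelet kubeadm kubectl
systemctl enable --now docker kubelet
kubeadm init --pod-network-cidr=192.168.0.0/16 -v=5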
Regarding "kubernetes - Kubeadm init fails when bringing up the control plane (kubelet not running or healthy)", a similar question was found on Stack Overflow: https://stackoverflow.com/questions/64985452/