Sincere apologies for the lengthy post.
I have a 4-node Kubernetes cluster with 1 master and 3 worker nodes. I connect to the cluster using a kubeconfig, but since yesterday I have not been able to connect that way. kubectl get pods
gives the error message "The connection to the server api.xxxxx.xxxxxxxx.com was refused - did you specify the right host or port?"
In the kubeconfig the server is specified as https://api.xxxxx.xxxxxxxx.com
Note:
Because the post contains too many https links, I was not able to publish the question as-is. I have therefore renamed https:// to https:-- in the analysis sections below so they are not treated as links.
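Before digging into the master itself, it can help to confirm which server kubectl is actually pointing at and whether that endpoint is reachable at all. These are generic checks and not part of the original post; the hostname is the masked API endpoint from above:

kubectl config view --minify | grep server    # shows the server URL from the active kubeconfig context
curl -kv https://api.xxxxx.xxxxxxxx.com/healthz    # checks whether the API endpoint accepts TCP/TLS connections at all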
I tried running kubectl from the master node and got a similar error:
The connection to the server localhost:8080 was refused - did you specify the right host or port?
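The localhost:8080 variant usually just means that kubectl on the master has no kubeconfig configured for the current user and is falling back to the insecure default address. A quick way to rule that out is to point kubectl at an explicit admin kubeconfig; the path below is a placeholder, not a path from the original post:

kubectl --kubeconfig /path/to/admin-kubeconfig get nodes    # hypothetical path, use wherever your admin kubeconfig actually lives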
I then checked the kube-apiserver docker container: it keeps exiting / going into CrashLoopBackOff. docker logs <container-id of kube-apiserver>
shows the following errors:
W0914 16:29:25.761524 1 clientconn.go:1251] grpc: addrConn.createTransport failed to connect to {127.0.0.1:4001 0}. Err :connection error: desc = "transport: authentication handshake failed: x509: certificate has expired or is not yet valid". Reconnecting...
F0914 16:29:29.319785 1 storage_decorator.go:57] Unable to create storage backend: config (&{etcd3 /registry {[https://127.0.0.1:4001] /etc/kubernetes/pki/kube-apiserver/etcd-client.key /etc/kubernetes/pki/kube-apiserver/etcd-client.crt /etc/kubernetes/pki/kube-apiserver/etcd-ca.crt} false true 0xc000266d80 apiextensions.k8s.io/v1beta1 5m0s 1m0s}), err (context deadline exceeded)
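The log points at an expired x509 certificate on the connection from the apiserver to etcd. A quick way to confirm the expiry of the certificate referenced in the error is a plain openssl check (my addition, not from the original post):

openssl x509 -in /etc/kubernetes/pki/kube-apiserver/etcd-client.crt -noout -subject -startdate -enddate    # prints the validity window of the etcd client cert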
systemctl status kubelet
gave the following error:
Sep 14 16:40:49 ip-xxx-xxx-xx-xx kubelet[2411]: E0914 16:40:49.693576 2411 kubelet_node_status.go:385] Error updating node status, will retry: error getting node "ip-xxx-xxx-xx-xx.xx-xxxxx-1.compute.internal": Get https://127.0.0.1/api/v1/nodes/ip-xxx-xxx-xx-xx.xx-xxxxx-1.compute.internal?timeout=10s: dial tcp 127.0.0.1:443: connect: connection refused
docker logs from the exited kube-scheduler docker container:
I0907 10:35:08.970384 1 scheduler.go:572] pod default/k8version-1599474900-hrjcn is bound successfully on node ip-xx-xx-xx-xx.xx-xxxxxx-x.compute.internal, 4 nodes evaluated, 3 nodes were found feasible
I0907 10:40:09.286831 1 scheduler.go:572] pod default/k8version-1599475200-tshlx is bound successfully on node ip-1x-xx-xx-xx.xx-xxxxxx-x.compute.internal, 4 nodes evaluated, 3 nodes were found feasible
I0907 10:44:01.935373 1 leaderelection.go:263] failed to renew lease kube-system/kube-scheduler: failed to tryAcquireOrRenew context deadline exceeded
E0907 10:44:01.935420 1 server.go:252] lost master lost lease
docker logs from the exited kube-controller-manager docker container:
I0907 10:40:19.703485 1 garbagecollector.go:518] delete object [v1/Pod, namespace: default, name: k8version-1599474300-5r6ph, uid: 67437201-f0f4-11ea-b612-0293e1aee720] with propagation policy Background
I0907 10:44:01.937398 1 leaderelection.go:263] failed to renew lease kube-system/kube-controller-manager: failed to tryAcquireOrRenew context deadline exceeded
E0907 10:44:01.937506 1 leaderelection.go:306] error retrieving resource lock kube-system/kube-controller-manager: Get https:--127.0.0.1/api/v1/namespaces/kube-system/endpoints/kube-controller-manager?timeout=10s: net/http: request canceled (Client.Timeout exceeded while awaiting headers)
I0907 10:44:01.937456 1 event.go:209] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"kube-controller-manager", UID:"ba172d83-a302-11e9-b612-0293e1aee720", APIVersion:"v1", ResourceVersion:"85406287", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' ip-xxx-xx-xx-xxx_1dd3c03b-bd90-11e9-85c6-0293e1aee720 stopped leading
F0907 10:44:01.937545 1 controllermanager.go:260] leaderelection lost
I0907 10:44:01.949274 1 range_allocator.go:169] Shutting down range CIDR allocator
I0907 10:44:01.949285 1 replica_set.go:194] Shutting down replicaset controller
I0907 10:44:01.949291 1 gc_controller.go:86] Shutting down GC controller
I0907 10:44:01.949304 1 pvc_protection_controller.go:111] Shutting down PVC protection controller
I0907 10:44:01.949310 1 route_controller.go:125] Shutting down route controller
I0907 10:44:01.949316 1 service_controller.go:197] Shutting down service controller
I0907 10:44:01.949327 1 deployment_controller.go:164] Shutting down deployment controller
I0907 10:44:01.949435 1 garbagecollector.go:148] Shutting down garbage collector controller
I0907 10:44:01.949443 1 resource_quota_controller.go:295] Shutting down resource quota controller
E0915 21:51:36.028108 1 leaderelection.go:306] error retrieving resource lock kube-system/kube-controller-manager: Get https:--127.0.0.1/api/v1/namespaces/kube-system/endpoints/kube-controller-manager?timeout=10s: dial tcp 127.0.0.1:443: connect: connection refused
E0915 21:51:40.133446 1 leaderelection.go:306] error retrieving resource lock kube-system/kube-controller-manager: Get https:--127.0.0.1/api/v1/namespaces/kube-system/endpoints/kube-controller-manager?timeout=10s: dial tcp 127.0.0.1:443: connect: connection refused
E0915 21:52:44.703587 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Node: Get https://127.0.0.1/api/v1/nodes?limit=500&resourceVersion=0: dial tcp 127.0.0.1:443: connect: connection refused
E0915 21:52:44.704504 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.ReplicationController: Get https:--127.0.0.1/api/v1/replicationcontrollers?limit=500&resourceVersion=0: dial tcp 127.0.0.1:443: connect: connection refused
E0915 21:52:44.705471 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Service: Get https:--127.0.0.1/api/v1/services?limit=500&resourceVersion=0: dial tcp 127.0.0.1:443: connect: connection refused
E0915 21:52:44.706477 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.ReplicaSet: Get https:--127.0.0.1/apis/apps/v1/replicasets?limit=500&resourceVersion=0: dial tcp 127.0.0.1:443: connect: connection refused
E0915 21:52:44.707581 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.StorageClass: Get https:--127.0.0.1/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0: dial tcp 127.0.0.1:443: connect: connection refused
E0915 21:52:44.708599 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.PersistentVolume: Get https:--127.0.0.1/api/v1/persistentvolumes?limit=500&resourceVersion=0: dial tcp 127.0.0.1:443: connect: connection refused
E0915 21:52:44.709687 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.StatefulSet: Get https:--127.0.0.1/apis/apps/v1/statefulsets?limit=500&resourceVersion=0: dial tcp 127.0.0.1:443: connect: connection refused
E0915 21:52:44.710744 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.PersistentVolumeClaim: Get https:--127.0.0.1/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0: dial tcp 127.0.0.1:443: connect: connection refused
E0915 21:52:44.711879 1 reflector.go:126] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:223: Failed to list *v1.Pod: Get https:--127.0.0.1/api/v1/pods?fieldSelector=status.phase%21%3DFailed%2Cstatus.phase%21%3DSucceeded&limit=500&resourceVersion=0: dial tcp 127.0.0.1:443: connect: connection refused
E0915 21:52:44.712903 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1beta1.PodDisruptionBudget: Get https:--127.0.0.1/apis/policy/v1beta1/poddisruptionbudgets?limit=500&resourceVersion=0: dial tcp 127.0.0.1:443: connect: connection refused
The certificate /etc/kubernetes/pki/kube-apiserver/etcd-client.crt had expired in July 2020. A few other expired certificates related to etcd-manager-main and etcd-manager-events exist (identical copies of the certificates in both places), but I did not see those referenced in the manifest files.
I replaced /etc/kubernetes/pki/kube-apiserver/etcd-client.crt with a new certificate.
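For completeness, a generic sweep over every certificate under the pki directory makes it easier to spot any remaining expired ones, including the etcd-manager copies mentioned above. This one-liner is my addition and assumes the certificates live under /etc/kubernetes/pki:

find /etc/kubernetes/pki -name '*.crt' -exec sh -c 'echo "$1: $(openssl x509 -in "$1" -noout -enddate)"' _ {} \;    # prints the notAfter date for each certificate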
docker logs from the kube-apiserver container (still crashing):
F0916 08:09:56.753538 1 storage_decorator.go:57] Unable to create storage backend: config (&{etcd3 /registry {[https:--127.0.0.1:4001] /etc/kubernetes/pki/kube-apiserver/etcd-client.key /etc/kubernetes/pki/kube-apiserver/etcd-client.crt /etc/kubernetes/pki/kube-apiserver/etcd-ca.crt} false true 0xc00095f050 apiextensions.k8s.io/v1beta1 5m0s 1m0s}), err (tls: private key does not match public key)
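The error has now changed from an expired certificate to a certificate that no longer matches its private key, which suggests the replacement certificate was not issued for the existing etcd-client.key. A generic way to verify this (again my addition, not from the original post) is to compare the public-key digests of the certificate and the key referenced in the log:

openssl x509 -in /etc/kubernetes/pki/kube-apiserver/etcd-client.crt -noout -pubkey | openssl sha256    # digest of the public key inside the certificate
openssl pkey -in /etc/kubernetes/pki/kube-apiserver/etcd-client.key -pubout | openssl sha256    # digest of the public key derived from the private key
# the two digests must be identical; if they differ, the new certificate does not belong to this key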
Output of systemctl status kubelet:
Sep 16 08:10:16 ip-xxx-xx-xx-xx kubelet[388]: E0916 08:10:16.095615 388 kubelet.go:2244] node "ip-xxx-xx-xx-xx.xx-xxxxx-x.compute.internal" not found
Sep 16 08:10:16 ip-xxx-xx-xx-xx kubelet[388]: E0916 08:10:16.130377 388 kubelet.go:2170] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: Kubenet does not have netConfig. This is most likely due to lack of PodCIDR
Sep 16 08:10:16 ip-xxx-xx-xx-xx kubelet[388]: E0916 08:10:16.147390 388 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1beta1.CSIDriver: Get https:--127.0.0.1/apis/storage.k8s.io/v1beta1/csidrivers?limit=500&resourceVersion=0: dial tcp 127.0.0.1:443: connect: connection refused
Sep 16 08:10:16 ip-xxx-xx-xx-xx kubelet[388]: E0916 08:10:16.195768 388 kubelet.go:2244] node "ip-xxx-xx-xx-xx.xx-xxxxx-x..compute.internal" not found
Sep 16 08:10:16 ip-xxx-xx-xx-xx kubelet[388]: E0916 08:10:16.295890 388 kubelet.go:2244] node "ip-xxx-xx-xx-xx.xx-xxxxx-x..compute.internal" not found
Sep 16 08:10:16 ip-xxx-xx-xx-xx kubelet[388]: E0916 08:10:16.347431 388 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1beta1.RuntimeClass: Get https://127.0.0.1/apis/node.k8s.io/v1beta1/runtimeclasses?limit=500&resourceVersion=0: dial tcp 127.0.0.1:443: connect: connection refused
root@ip-xxx-xx-xx-xxx:~#
docker logs <etcd-manager-main container> --tail 20
I0916 14:41:40.349570 8221 peers.go:281] connecting to peer "etcd-a" with TLS policy, servername="etcd-manager-server-etcd-a"
W0916 14:41:40.351857 8221 peers.go:325] unable to grpc-ping discovered peer xxx.xx.xx.xxx:3996: rpc error: code = Unavailable desc = all SubConns are in TransientFailure
I0916 14:41:40.351878 8221 peers.go:347] was not able to connect to peer etcd-a: map[xxx.xx.xx.xxx:3996:true]
W0916 14:41:40.351887 8221 peers.go:215] unexpected error from peer intercommunications: unable to connect to peer etcd-a
I0916 14:41:41.205763 8221 controller.go:173] starting controller iteration
W0916 14:41:41.205801 8221 controller.go:149] unexpected error running etcd cluster reconciliation loop: cannot find self "etcd-a" in list of peers []
I0916 14:41:45.352008 8221 peers.go:281] connecting to peer "etcd-a" with TLS policy, servername="etcd-manager-server-etcd-a"
I0916 14:41:46.678314 8221 volumes.go:85] AWS API Request: ec2/DescribeVolumes
I0916 14:41:46.739272 8221 volumes.go:85] AWS API Request: ec2/DescribeInstances
I0916 14:41:46.786653 8221 hosts.go:84] hosts update: primary=map[], fallbacks=map[etcd-a.internal.xxxxx.xxxxxxx.com:[xxx.xx.xx.xxx xxx.xx.xx.xxx]], final=map[xxx.xx.xx.xxx:[etcd-a.internal.xxxxx.xxxxxxx.com etcd-a.internal.xxxxx.xxxxxxx.com]]
I0916 14:41:46.786724 8221 hosts.go:181] skipping update of unchanged /etc/hosts
root@ip-xxx-xx-xx-xxx:~#
docker logs <etcd-manager-events container> --tail 20
W0916 14:42:40.294576 8316 peers.go:215] unexpected error from peer intercommunications: unable to connect to peer etcd-events-a
I0916 14:42:41.106654 8316 controller.go:173] starting controller iteration
W0916 14:42:41.106692 8316 controller.go:149] unexpected error running etcd cluster reconciliation loop: cannot find self "etcd-events-a" in list of peers []
I0916 14:42:45.294682 8316 peers.go:281] connecting to peer "etcd-events-a" with TLS policy, servername="etcd-manager-server-etcd-events-a"
W0916 14:42:45.297094 8316 peers.go:325] unable to grpc-ping discovered peer xxx.xx.xx.xxx:3997: rpc error: code = Unavailable desc = all SubConns are in TransientFailure
I0916 14:42:45.297117 8316 peers.go:347] was not able to connect to peer etcd-events-a: map[xxx.xx.xx.xxx:3997:true]
I0916 14:42:46.791923 8316 volumes.go:85] AWS API Request: ec2/DescribeVolumes
I0916 14:42:46.856548 8316 volumes.go:85] AWS API Request: ec2/DescribeInstances
I0916 14:42:46.945119 8316 hosts.go:84] hosts update: primary=map[], fallbacks=map[etcd-events-a.internal.xxxxx.xxxxxxx.com:[xxx.xx.xx.xxx xxx.xx.xx.xxx]], final=map[xxx.xx.xx.xxx:[etcd-events-a.internal.xxxxx.xxxxxxx.com etcd-events-a.internal.xxxxx.xxxxxxx.com]]
I0916 14:42:50.297264 8316 peers.go:281] connecting to peer "etcd-events-a" with TLS policy, servername="etcd-manager-server-etcd-events-a"
W0916 14:42:50.300328 8316 peers.go:325] unable to grpc-ping discovered peer xxx.xx.xx.xxx:3997: rpc error: code = Unavailable desc = all SubConns are in TransientFailure
I0916 14:42:50.300348 8316 peers.go:347] was not able to connect to peer etcd-events-a: map[xxx.xx.xx.xxx:3997:true]
W0916 14:42:50.300360 8316 peers.go:215] unexpected error from peer intercommunications: unable to connect to peer etcd-events-a
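Both etcd-manager containers fail to reach their own peer over TLS, which would be consistent with expired or mismatched certificates on the etcd-manager side as well. Where etcd-manager keeps its certificates depends on the setup (with kops they usually sit on the mounted etcd volumes rather than under /etc/kubernetes/pki), so the search path below is only an assumption:

find /mnt -path '*pki*' -name '*.crt' -exec sh -c 'echo "$1: $(openssl x509 -in "$1" -noout -enddate)"' _ {} \; 2>/dev/null    # assumed location; adjust the search root to wherever etcd-manager stores its certs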
Best Answer
I think this is related to etcd. You may have renewed the certificates for the Kubernetes components, but did you do the same for etcd?
Your API server is trying to connect to etcd and reports:
tls: private key does not match public key
Since you have only 1 etcd member (judging by the number of master nodes), I would back it up before trying to fix it.
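As a concrete sketch of that advice: the apiserver config above shows an etcd3 backend on https://127.0.0.1:4001, so a snapshot backup with etcdctl might look like the following. The endpoint and certificate paths are taken from that config and are assumptions about your environment; with etcd-manager you can also simply back up the underlying etcd volume instead.

ETCDCTL_API=3 etcdctl \
  --endpoints=https://127.0.0.1:4001 \
  --cacert=/etc/kubernetes/pki/kube-apiserver/etcd-ca.crt \
  --cert=/etc/kubernetes/pki/kube-apiserver/etcd-client.crt \
  --key=/etc/kubernetes/pki/kube-apiserver/etcd-client.key \
  snapshot save /root/etcd-backup-$(date +%F).db    # writes a point-in-time snapshot of the etcd keyspace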
Regarding "docker - kube-apiserver docker container keeps restarting", a similar question was found on Stack Overflow: https://stackoverflow.com/questions/63910627/