I have a k8s cluster that ran fine for 8 days and then started misbehaving.
My specific problem is with kube-proxy: kube-proxy is not updating iptables.
From the kube-proxy logs I can see that it fails to connect to the kubernetes apiserver (in my setup the path is kube-proxy -> HAProxy -> k8s API server), yet the pod is still reported as "Running".
Question: I expect the kube-proxy pod to go down if it is not able to register with the apiserver for events.
How do I achieve this behavior via liveness probes?
Note: after killing the pod, kube-proxy works fine again.
kube-proxy logs
sudo docker logs 1de375c94fd4 -f
W0910 15:18:22.091902 1 server.go:195] WARNING: all flags other than --config, --write-config-to, and --cleanup are deprecated. Please begin using a config file ASAP.
I0910 15:18:22.091962 1 feature_gate.go:226] feature gates: &{{} map[]}
time="2018-09-10T15:18:22Z" level=warning msg="Running modprobe ip_vs failed with message: `modprobe: ERROR: ../libkmod/libkmod.c:586 kmod_search_moddep() could not open moddep file '/lib/modules/4.15.0-33-generic/modules.dep.bin'\nmodprobe: WARNING: Module ip_vs not found in directory /lib/modules/4.15.0-33-generic`, error: exit status 1"
time="2018-09-10T15:18:22Z" level=error msg="Could not get ipvs family information from the kernel. It is possible that ipvs is not enabled in your kernel. Native loadbalancing will not work until this is fixed."
I0910 15:18:22.185086 1 server.go:409] Neither kubeconfig file nor master URL was specified. Falling back to in-cluster config.
I0910 15:18:22.186885 1 server_others.go:140] Using iptables Proxier.
W0910 15:18:22.438408 1 server.go:601] Failed to retrieve node info: nodes "$(node_name)" not found
W0910 15:18:22.438494 1 proxier.go:306] invalid nodeIP, initializing kube-proxy with 127.0.0.1 as nodeIP
I0910 15:18:22.438595 1 server_others.go:174] Tearing down inactive rules.
I0910 15:18:22.861478 1 server.go:444] Version: v1.10.2
I0910 15:18:22.867003 1 conntrack.go:98] Set sysctl 'net/netfilter/nf_conntrack_max' to 2883584
I0910 15:18:22.867046 1 conntrack.go:52] Setting nf_conntrack_max to 2883584
I0910 15:18:22.867267 1 conntrack.go:83] Setting conntrack hashsize to 720896
I0910 15:18:22.893396 1 conntrack.go:98] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400
I0910 15:18:22.893505 1 conntrack.go:98] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600
I0910 15:18:22.893737 1 config.go:102] Starting endpoints config controller
I0910 15:18:22.893749 1 controller_utils.go:1019] Waiting for caches to sync for endpoints config controller
I0910 15:18:22.893742 1 config.go:202] Starting service config controller
I0910 15:18:22.893765 1 controller_utils.go:1019] Waiting for caches to sync for service config controller
I0910 15:18:22.993904 1 controller_utils.go:1026] Caches are synced for endpoints config controller
I0910 15:18:22.993921 1 controller_utils.go:1026] Caches are synced for service config controller
W0910 16:13:28.276082 1 reflector.go:341] k8s.io/kubernetes/pkg/client/informers/informers_generated/internalversion/factory.go:86: watch of *core.Endpoints ended with: very short watch: k8s.io/kubernetes/pkg/client/informers/informers_generated/internalversion/factory.go:86: Unexpected watch close - watch lasted less than a second and no items received
W0910 16:13:28.276083 1 reflector.go:341] k8s.io/kubernetes/pkg/client/informers/informers_generated/internalversion/factory.go:86: watch of *core.Service ended with: very short watch: k8s.io/kubernetes/pkg/client/informers/informers_generated/internalversion/factory.go:86: Unexpected watch close - watch lasted less than a second and no items received
E0910 16:13:29.276678 1 reflector.go:205] k8s.io/kubernetes/pkg/client/informers/informers_generated/internalversion/factory.go:86: Failed to list *core.Endpoints: Get https://127.0.0.1:6553/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 127.0.0.1:6553: getsockopt: connection refused
E0910 16:13:29.276677 1 reflector.go:205] k8s.io/kubernetes/pkg/client/informers/informers_generated/internalversion/factory.go:86: Failed to list *core.Service: Get https://127.0.0.1:6553/api/v1/services?limit=500&resourceVersion=0: dial tcp 127.0.0.1:6553: getsockopt: connection refused
E0910 16:13:30.277201 1 reflector.go:205] k8s.io/kubernetes/pkg/client/informers/informers_generated/internalversion/factory.go:86: Failed to list *core.Endpoints: Get https://127.0.0.1:6553/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 127.0.0.1:6553: getsockopt: connection refused
E0910 16:13:30.278009 1 reflector.go:205] k8s.io/kubernetes/pkg/client/informers/informers_generated/internalversion/factory.go:86: Failed to list *core.Service: Get https://127.0.0.1:6553/api/v1/services?limit=500&resourceVersion=0: dial tcp 127.0.0.1:6553: getsockopt: connection refused
E0910 16:13:31.277723 1 reflector.go:205] k8s.io/kubernetes/pkg/client/informers/informers_generated/internalversion/factory.go:86: Failed to list *core.Endpoints: Get https://127.0.0.1:6553/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 127.0.0.1:6553: getsockopt: connection refused
E0910 16:13:31.278574 1 reflector.go:205] k8s.io/kubernetes/pkg/client/informers/informers_generated/internalversion/factory.go:86: Failed to list *core.Service: Get https://127.0.0.1:6553/api/v1/services?limit=500&resourceVersion=0: dial tcp 127.0.0.1:6553: getsockopt: connection refused
E0910 16:13:32.278197 1 reflector.go:205] k8s.io/kubernetes/pkg/client/informers/informers_generated/internalversion/factory.go:86: Failed to list *core.Endpoints: Get https://127.0.0.1:6553/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 127.0.0.1:6553: getsockopt: connection refused
E0910 16:13:32.279134 1 reflector.go:205] k8s.io/kubernetes/pkg/client/informers/informers_generated/internalversion/factory.go:86: Failed to list *core.Service: Get https://127.0.0.1:6553/api/v1/services?limit=500&resourceVersion=0: dial tcp 127.0.0.1:6553: getsockopt: connection refused
E0910 16:13:33.278684 1 reflector.go:205] k8s.io/kubernetes/pkg/client/informers/informers_generated/internalversion/factory.go:86: Failed to list *core.Endpoints: Get https://127.0.0.1:6553/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 127.0.0.1:6553: getsockopt: connection refused
E0910 16:13:33.279587 1 reflector.go:205] k8s.io/kubernetes/pkg/client/informers/informers_generated/internalversion/factory.go:86: Failed to list *core.Service: Get https://127.0.0.1:6553/api/v1/services?limit=500&resourceVersion=0: dial tcp 127.0.0.1:6553: getsockopt: connection refused
Best answer
Question: I am expecting kube-proxy pod to be down if it is not able to register with apiserver for events.
How do I achieve this behavior via liveness probes?
You can add a liveness probe to the kube-proxy DaemonSet, although this is not recommended:
spec:
  containers:
  - command:
    - /usr/local/bin/kube-proxy
    - --config=/var/lib/kube-proxy/config.conf
    image: k8s.gcr.io/kube-proxy-amd64:v1.11.2
    imagePullPolicy: IfNotPresent
    name: kube-proxy
    resources: {}
    securityContext:
      privileged: true
    livenessProbe:
      exec:
        command:
        - sh
        - -c
        - curl -f <apiserver>:10256/healthz   # exec has no shell, so wrap curl in sh -c; -f fails the probe on an unhealthy response
      initialDelaySeconds: 5
      periodSeconds: 5
The health-check port can be changed with kube-proxy's --healthz-port flag (the default is 10256).
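As an alternative sketch (not from the original answer), an httpGet probe against kube-proxy's own healthz endpoint avoids relying on curl being present in the image. This assumes the default --healthz-port of 10256 and that the DaemonSet runs with host networking, as kube-proxy normally does:

livenessProbe:
  httpGet:
    host: 127.0.0.1      # kube-proxy uses hostNetwork, so localhost reaches its healthz server
    port: 10256          # default --healthz-port
    path: /healthz
  initialDelaySeconds: 5
  periodSeconds: 5
  failureThreshold: 3    # restart only after three consecutive failures

Keep in mind that kube-proxy's /healthz mainly reports whether it is still applying rule updates in time; it will not necessarily fail just because a watch against the apiserver was dropped, so it may not catch exactly the failure mode described in the question.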
A similar question about kubernetes - kube-proxy not updating iptables can be found on Stack Overflow: https://stackoverflow.com/questions/52494732/