
docker - Kubernetes node ulimit settings

Reposted. Author: 行者123. Updated: 2023-12-02 11:54:13

I am running a Kubernetes v1.11.1 cluster, and from time to time my kube-apiserver starts throwing "too many open files" messages. I have noticed many open TCP connections to the node kubelet port 10250.

My server is configured with 65536 file descriptors. Do I need to increase the open-files limit on the container hosts? What is the recommended ulimit setting for a container host?

API server log messages:

I1102 13:57:08.135049       1 logs.go:49] http: Accept error: accept tcp [::]:6443: accept4: too many open files; retrying in 1s
I1102 13:57:09.135191       1 logs.go:49] http: Accept error: accept tcp [::]:6443: accept4: too many open files; retrying in 1s
I1102 13:57:10.135437       1 logs.go:49] http: Accept error: accept tcp [::]:6443: accept4: too many open files; retrying in 1s
I1102 13:57:11.135589       1 logs.go:49] http: Accept error: accept tcp [::]:6443: accept4: too many open files; retrying in 1s
I1102 13:57:12.135755       1 logs.go:49] http: Accept error: accept tcp [::]:6443: accept4: too many open files; retrying in 1s

My host's ulimit values:
# ulimit -a
-f: file size (blocks) unlimited
-t: cpu time (seconds) unlimited
-d: data seg size (kb) unlimited
-s: stack size (kb) 8192
-c: core file size (blocks) unlimited
-m: resident set size (kb) unlimited
-l: locked memory (kb) 64
-p: processes unlimited
-n: file descriptors 65536
-v: address space (kb) unlimited
-w: locks unlimited
-e: scheduling priority 0
-r: real-time priority 0
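One thing worth checking before raising anything is whether that 65536 is the soft or the hard limit. A quick check in a plain shell (not Kubernetes-specific):

```shell
# Soft limit: what a process actually gets by default.
# Hard limit: the ceiling an unprivileged process may raise the soft limit to.
ulimit -Sn
ulimit -Hn
```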

Thanks,
SR

Best Answer

65536 seems a bit low, although many applications do recommend that number. This is what I have on one K8s cluster, for the kube-apiserver:

# kubeapi-server-container
# |
# \|/
# ulimit -a
-f: file size (blocks) unlimited
-t: cpu time (seconds) unlimited
-d: data seg size (kb) unlimited
-s: stack size (kb) 8192
-c: core file size (blocks) unlimited
-m: resident set size (kb) unlimited
-l: locked memory (kb) 16384
-p: processes unlimited
-n: file descriptors 1048576 <====
-v: address space (kb) unlimited
-w: locks unlimited
-e: scheduling priority 0
-r: real-time priority 0
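How that container ended up with 1048576 depends on the runtime; with Docker, for example, the `--ulimit nofile=<soft>:<hard>` flag on `docker run` sets it per container. The soft limit itself is a per-process value: a process may lower it freely, or raise it up to the hard limit. A small illustration in a plain shell (the value 1024 is arbitrary):

```shell
# A process may change its own soft limit up to the hard limit.
# Changing it inside a subshell does not affect the parent shell.
( ulimit -Sn 1024; echo "inside subshell: $(ulimit -Sn)" )
echo "parent shell:    $(ulimit -Sn)"
```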

Unlike the limits a regular bash session on the system gets:
# ulimit -a
core file size (blocks, -c) 0
data seg size (kbytes, -d) unlimited
scheduling priority (-e) 0
file size (blocks, -f) unlimited
pending signals (-i) 15447
max locked memory (kbytes, -l) 16384
max memory size (kbytes, -m) unlimited
open files (-n) 1024 <===
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority (-r) 0
stack size (kbytes, -s) 8192
cpu time (seconds, -t) unlimited
max user processes (-u) 15447
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited

But the overall system-wide maximum is:
$ cat /proc/sys/fs/file-max
394306
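If that system-wide ceiling itself becomes the bottleneck, it can be inspected directly and raised with sysctl (raising it requires root; the value in the comment below is only an example, not a recommendation):

```shell
# System-wide cap on open file handles.
cat /proc/sys/fs/file-max
# To raise it persistently one would add, e.g. (requires root):
#   echo 'fs.file-max = 1000000' >> /etc/sysctl.conf
#   sysctl -p
```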

Seeing this, nothing on the system can exceed /proc/sys/fs/file-max, so I would check that value as well. I would also check the number of file descriptors in use (the first column), which gives you an idea of how many open files you have:
$ cat /proc/sys/fs/file-nr
2176 0 394306
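To find out which processes are actually holding those descriptors, one rough approach is to count the entries under each process's /proc/&lt;pid&gt;/fd directory (reading other users' entries needs sufficient privileges; inaccessible processes simply show a count of 0):

```shell
# Top 5 processes by open file descriptor count.
# /proc entries may disappear mid-scan, so errors are suppressed.
for pid in /proc/[0-9]*; do
  n=$(ls "$pid/fd" 2>/dev/null | wc -l)
  printf '%6d %s\n' "$n" "$pid"
done | sort -rn | head -5
```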

Regarding docker - Kubernetes node ulimit settings, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/53124816/
