
kubernetes - Node pool does not reduce its size to zero despite autoscaling being enabled


I created two node pools. A small one for all the Google system jobs, and a second one for my tasks. Once the work is done, the larger pool should reduce its size to 0.

The problem is: even when there are no cron jobs running, the node pool does not reduce its size to 0.



Creating the cluster:
gcloud beta container --project "projectXY" clusters create "cluster" --zone "europe-west3-a" --username "admin" --cluster-version "1.9.6-gke.0" --machine-type "n1-standard-1" --image-type "COS" --disk-size "100" --scopes "https://www.googleapis.com/auth/cloud-platform" --num-nodes "1" --network "default" --enable-cloud-logging --enable-cloud-monitoring --subnetwork "default" --enable-autoscaling --enable-autoupgrade --min-nodes "1" --max-nodes "1"
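
As an added check (not part of the original question), the cluster version can be confirmed after creation, which matters later because scale-to-zero requires at least Kubernetes 1.7:

gcloud container clusters describe cluster --zone europe-west3-a --format="value(currentMasterVersion,currentNodeVersion)"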

Creating the node pool:

After all tasks are finished, the node pool should reduce its size to 0.
gcloud container node-pools create workerpool --cluster=cluster --machine-type=n1-highmem-8 --zone=europe-west3-a --disk-size=100 --enable-autoupgrade --num-nodes=0 --enable-autoscaling --max-nodes=2 --min-nodes=0
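
Once the pool exists, its autoscaling settings can be verified with a describe call (an added check, reusing the names from the command above):

gcloud container node-pools describe workerpool --cluster=cluster --zone=europe-west3-a

The autoscaling block of the output should report enabled: true, minNodeCount: 0 and maxNodeCount: 2.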

Creating the cron job:
kubectl create -f cronjob.yaml
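
The contents of cronjob.yaml are not shown in the question. As a rough sketch only (name, image, schedule and request values below are assumptions, not taken from the original), a CronJob meant to run on the autoscaled pool would typically carry explicit resource requests and a nodeSelector on the standard GKE node pool label, since those requests are what drive the autoscaler's sizing decisions:

apiVersion: batch/v1beta1          # batch/v1beta1 matches a 1.9 cluster; newer clusters use batch/v1
kind: CronJob
metadata:
  name: example-task               # hypothetical name
spec:
  schedule: "0 * * * *"            # hypothetical schedule: hourly
  jobTemplate:
    spec:
      template:
        spec:
          nodeSelector:
            cloud.google.com/gke-nodepool: workerpool   # pin the Pod to the autoscaled pool
          containers:
          - name: task
            image: gcr.io/projectXY/task:latest         # hypothetical image
            resources:
              requests:
                cpu: "4"                                # requests, not usage, drive the autoscaler
                memory: 16Gi
          restartPolicy: Never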

Best Answer

Quoting from the Google Documentation:

"Note: Beginning with Kubernetes version 1.7, you can specify a minimum size of zero for your node pool. This allows your node pool to scale down completely if the instances within aren't required to run your workloads. However, while a node pool can scale to a zero size, the overall cluster size does not scale down to zero nodes (as at least one node is always required to run system Pods)."


另请注意:

"Cluster autoscaler also measures the usage of each node against the node pool's total demand for capacity. If a node has had no new Pods scheduled on it for a set period of time, and [this option does not work for you since it is the last node] all Pods running on that node can be scheduled onto other nodes in the pool , the autoscaler moves the Pods and deletes the node.

Note that cluster autoscaler works based on Pod resource requests, that is, how many resources your Pods have requested. Cluster autoscaler does not take into account the resources your Pods are actively using. Essentially, cluster autoscaler trusts that the Pod resource requests you've provided are accurate and schedules Pods on nodes based on that assumption."
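
In practice this means scale-down depends on the resource requests still booked on the node, not on actual usage. One way to inspect them (the node name below is a placeholder for whatever kubectl get nodes reports):

kubectl get nodes
kubectl describe node gke-cluster-workerpool-xxxxxxxx   # the "Allocated resources" section lists the summed CPU/memory requests the autoscaler compares against capacity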


Therefore I would check that:
  • Your Kubernetes cluster version is at least 1.7
  • No Pods are running on the last node (check every namespace; the Pods that must run on every node don't count: fluentd, kube-dns, kube-proxy). Having no cron jobs left is not enough; see the example commands after this list.
  • The autoscaler is NOT enabled for the corresponding managed instance groups, since they are different tools
  • No Pods are stuck in a weird state while still assigned to that node
  • No Pods in the cluster are waiting to be scheduled

  • If all of the above checks out, it could be an issue with the cluster autoscaler itself. In that case you can open a private issue with Google, specifying your project ID, since there is little the community can do.
    If you are interested, put a link to the issue tracker in a comment and I will take a look at your project (I work for Google Cloud Platform Support).
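
A minimal sketch of how the checks above could be run from the command line (the node name is a placeholder, and --field-selector needs a reasonably recent kubectl):

# Pods still assigned to the last worker node, across all namespaces
kubectl get pods --all-namespaces -o wide --field-selector spec.nodeName=gke-cluster-workerpool-xxxxxxxx

# Pods waiting to be scheduled anywhere in the cluster
kubectl get pods --all-namespaces --field-selector status.phase=Pending

# Managed instance groups in the project; the AUTOSCALED column shows whether a separate Compute Engine autoscaler is attached to the pool's group
gcloud compute instance-groups managed list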

    A similar question about kubernetes - node pool does not reduce its size to zero despite autoscaling being enabled can be found on Stack Overflow: https://stackoverflow.com/questions/49903951/
