
kubernetes - Which metrics does the Kubernetes scheduler rely on?

Reposted. Author: 行者123. Updated: 2023-12-05 02:39:27

Does the Kubernetes scheduler place pods on nodes based solely on the requested resources and the resources currently available in its snapshot of the cluster, or does it also take a node's historical resource utilization into account?

Best Answer

In the official Kubernetes documentation we can find the process and criteria that kube-scheduler uses to select a node for a pod.

Basically, it is a two-step process:

kube-scheduler selects a node for the pod in a 2-step operation:

  1. Filtering
  2. Scoring

The filtering step is responsible for producing the list of nodes that are actually able to run the pod:

The filtering step finds the set of Nodes where it's feasible to schedule the Pod. For example, the PodFitsResources filter checks whether a candidate Node has enough available resource to meet a Pod's specific resource requests. After this step, the node list contains any suitable Nodes; often, there will be more than one. If the list is empty, that Pod isn't (yet) schedulable.
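The PodFitsResources filter mentioned in the quote can be illustrated with a minimal sketch. The class names and fields below are illustrative, not the real kube-scheduler types; note that the check compares against the sum of *requests* already reserved on the node, not actual usage:

```python
from dataclasses import dataclass

@dataclass
class Pod:
    cpu_request: int   # millicores
    mem_request: int   # MiB

@dataclass
class Node:
    name: str
    cpu_allocatable: int   # millicores
    mem_allocatable: int   # MiB
    cpu_requested: int     # sum of requests of pods already placed here
    mem_requested: int

def pod_fits_resources(pod: Pod, node: Node) -> bool:
    """Filter step: does the node have enough unreserved capacity?"""
    cpu_free = node.cpu_allocatable - node.cpu_requested
    mem_free = node.mem_allocatable - node.mem_requested
    return pod.cpu_request <= cpu_free and pod.mem_request <= mem_free

def filter_nodes(pod: Pod, nodes: list[Node]) -> list[Node]:
    """Keep only feasible nodes; an empty result means the pod
    is not (yet) schedulable."""
    return [n for n in nodes if pod_fits_resources(pod, n)]
```

For example, a pod requesting 500m CPU would be filtered away from a node that has only 200m of unreserved CPU left, regardless of how busy that node actually is.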

The scoring step is responsible for choosing the best node from the list produced by the filtering step:

In the scoring step, the scheduler ranks the remaining nodes to choose the most suitable Pod placement. The scheduler assigns a score to each Node that survived filtering, basing this score on the active scoring rules.

Finally, kube-scheduler assigns the Pod to the Node with the highest ranking. If there is more than one node with equal scores, kube-scheduler selects one of these at random.
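The score-and-pick logic, including the random tie-break among equally ranked nodes described above, can be sketched as follows (the score values here stand in for the scheduler's weighted plugin scores):

```python
import random

def pick_node(scores: dict[str, int]) -> str:
    """Return the name of the highest-scoring node; if several nodes
    share the top score, pick one of them at random, as kube-scheduler does."""
    best = max(scores.values())
    winners = [name for name, score in scores.items() if score == best]
    return random.choice(winners)
```

With scores like `{"a": 50, "b": 90, "c": 90}`, either `b` or `c` may be selected on any given run, which is why two identical pods can land on different nodes.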

Once the highest-scoring node has been chosen, the scheduler notifies the API server:

...picks a Node with the highest score among the feasible ones to run the Pod. The scheduler then notifies the API server about this decision in a process called binding.

Factors that are taken into consideration for scheduling:

  • Individual and collective resource requirements
  • Hardware
  • Policy constraints
  • Affinity and anti-affinity specifications
  • Data locality
  • Inter-workload interference
  • And so on...

More details about these criteria can be found here:

The following predicates implement filtering:

  • PodFitsHostPorts: Checks if a Node has free ports (the network protocol kind) for the Pod ports the Pod is requesting.
  • PodFitsHost: Checks if a Pod specifies a specific Node by its hostname.
  • PodFitsResources: Checks if the Node has free resources (eg, CPU and Memory) to meet the requirement of the Pod.
  • MatchNodeSelector: Checks if a Pod's Node Selector matches the Node's label(s).
  • NoVolumeZoneConflict: Evaluate if the Volumes that a Pod requests are available on the Node, given the failure zone restrictions for that storage.
  • NoDiskConflict: Evaluates if a Pod can fit on a Node due to the volumes it requests, and those that are already mounted.
  • MaxCSIVolumeCount: Decides how many CSI volumes should be attached, and whether that's over a configured limit.
  • PodToleratesNodeTaints: checks if a Pod's tolerations can tolerate the Node's taints.
  • CheckVolumeBinding: Evaluates if a Pod can fit due to the volumes it requests. This applies for both bound and unbound PVCs.
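As one concrete example from the list, the PodToleratesNodeTaints predicate can be approximated like this. It is a deliberately simplified sketch: real taints and tolerations also carry effects such as NoSchedule vs. NoExecute and operators such as Exists, all ignored here:

```python
def tolerates_taints(pod_tolerations: set[tuple[str, str]],
                     node_taints: set[tuple[str, str]]) -> bool:
    """A pod passes the filter only if every (key, value) taint on the
    node is matched by one of the pod's tolerations."""
    return node_taints <= pod_tolerations  # subset check

def filter_by_taints(pod_tolerations, nodes):
    """Keep only nodes whose taints the pod fully tolerates."""
    return [name for name, taints in nodes.items()
            if tolerates_taints(pod_tolerations, taints)]
```

A pod with no tolerations is filtered away from any tainted node, while untainted nodes always pass.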

The following priorities implement scoring:

  • SelectorSpreadPriority: Spreads Pods across hosts, considering Pods that belong to the same Service, StatefulSet or ReplicaSet.
  • InterPodAffinityPriority: Implements preferred inter-pod affinity and anti-affinity.
  • LeastRequestedPriority: Favors nodes with fewer requested resources. In other words, the more Pods that are placed on a Node, and the more resources those Pods use, the lower the ranking this policy will give.
  • MostRequestedPriority: Favors nodes with most requested resources. This policy will fit the scheduled Pods onto the smallest number of Nodes needed to run your overall set of workloads.
  • RequestedToCapacityRatioPriority: Creates a requestedToCapacity based ResourceAllocationPriority using default resource scoring function shape.
  • BalancedResourceAllocation: Favors nodes with balanced resource usage.
  • NodePreferAvoidPodsPriority: Prioritizes nodes according to the node annotation scheduler.alpha.kubernetes.io/preferAvoidPods. You can use this to hint that two different Pods shouldn't run on the same Node.
  • NodeAffinityPriority: Prioritizes nodes according to node affinity scheduling preferences indicated in PreferredDuringSchedulingIgnoredDuringExecution. You can read more about this in Assigning Pods to Nodes.
  • TaintTolerationPriority: Prepares the priority list for all the nodes, based on the number of intolerable taints on the node. This policy adjusts a node's rank taking that list into account.
  • ImageLocalityPriority: Favors nodes that already have the container images for that Pod cached locally.
  • ServiceSpreadingPriority: For a given Service, this policy aims to make sure that the Pods for the Service run on different nodes. It favours scheduling onto nodes that don't have Pods for the service already assigned there. The overall outcome is that the Service becomes more resilient to a single Node failure.
  • EqualPriority: Gives an equal weight of one to all nodes.
  • EvenPodsSpreadPriority: Implements preferred pod topology spread constraints.
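For intuition, LeastRequestedPriority has classically scored each resource as the unused fraction of capacity scaled to 0-100, averaged over CPU and memory. This is the well-known formula from the legacy priority; exact rounding and weighting details vary between scheduler versions, so treat it as a sketch:

```python
def least_requested_score(cpu_requested: int, cpu_capacity: int,
                          mem_requested: int, mem_capacity: int) -> int:
    """Score from 0 to 100: higher means more unreserved capacity,
    so emptier nodes rank higher under LeastRequestedPriority."""
    cpu_score = (cpu_capacity - cpu_requested) * 100 // cpu_capacity
    mem_score = (mem_capacity - mem_requested) * 100 // mem_capacity
    return (cpu_score + mem_score) // 2
```

MostRequestedPriority is essentially the mirror image: it favors the node where these fractions are highest, packing pods onto fewer nodes.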

To answer your question:

Does it take into account the node's historical resource utilization?

As you can see, none of the criteria in the lists above relate to historical resource utilization. I also researched this separately and found no information suggesting that it is used.

Regarding "kubernetes - Which metrics does the Kubernetes scheduler rely on?", a similar question can be found on Stack Overflow: https://stackoverflow.com/questions/69135736/
