
Kubernetes: kube-scheduler is not scoring nodes correctly when assigning pods


I am running Kubernetes with Rancher, and I am seeing strange behavior from kube-scheduler. After adding a third node, I expected pods to start being scheduled and assigned to it. However, kube-scheduler gives this new third node, node3, the lowest score, even though it has almost no pods running on it; I would expect it to get the highest score.

Here are the kube-scheduler logs:

scheduling_queue.go:815] About to try and schedule pod namespace1/pod1
scheduler.go:456] Attempting to schedule pod: namespace1/pod1
predicates.go:824] Schedule Pod namespace1/pod1 on Node node1 is allowed, Node is running only 94 out of 110 Pods.
predicates.go:1370] Schedule Pod namespace1/pod1 on Node node1 is allowed, existing pods anti-affinity terms satisfied.
predicates.go:824] Schedule Pod namespace1/pod1 on Node node3 is allowed, Node is running only 4 out of 110 Pods.
predicates.go:1370] Schedule Pod namespace1/pod1 on Node node3 is allowed, existing pods anti-affinity terms satisfied.
predicates.go:824] Schedule Pod namespace1/pod1 on Node node2 is allowed, Node is running only 95 out of 110 Pods.
predicates.go:1370] Schedule Pod namespace1/pod1 on Node node2 is allowed, existing pods anti-affinity terms satisfied.
resource_allocation.go:78] pod1 -> node1: BalancedResourceAllocation, capacity 56000 millicores 270255251456 memory bytes, total request 40230 millicores 122473676800 memory bytes, score 7
resource_allocation.go:78] pod1 -> node1: LeastResourceAllocation, capacity 56000 millicores 270255251456 memory bytes, total request 40230 millicores 122473676800 memory bytes, score 3
resource_allocation.go:78] pod1 -> node3: BalancedResourceAllocation, capacity 56000 millicores 270255251456 memory bytes, total request 800 millicores 807403520 memory bytes, score 9
resource_allocation.go:78] pod1 -> node3: LeastResourceAllocation, capacity 56000 millicores 270255251456 memory bytes, total request 800 millicores 807403520 memory bytes, score 9
resource_allocation.go:78] pod1 -> node2: BalancedResourceAllocation, capacity 56000 millicores 270255247360 memory bytes, total request 43450 millicores 133693440000 memory bytes, score 7
resource_allocation.go:78] pod1 -> node2: LeastResourceAllocation, capacity 56000 millicores 270255247360 memory bytes, total request 43450 millicores 133693440000 memory bytes, score 3
generic_scheduler.go:748] pod1_namespace1 -> node1: TaintTolerationPriority, Score: (10)
generic_scheduler.go:748] pod1_namespace1 -> node3: TaintTolerationPriority, Score: (10)
generic_scheduler.go:748] pod1_namespace1 -> node2: TaintTolerationPriority, Score: (10)
selector_spreading.go:146] pod1 -> node1: SelectorSpreadPriority, Score: (10)
selector_spreading.go:146] pod1 -> node3: SelectorSpreadPriority, Score: (10)
selector_spreading.go:146] pod1 -> node2: SelectorSpreadPriority, Score: (10)
generic_scheduler.go:748] pod1_namespace1 -> node1: SelectorSpreadPriority, Score: (10)
generic_scheduler.go:748] pod1_namespace1 -> node3: SelectorSpreadPriority, Score: (10)
generic_scheduler.go:748] pod1_namespace1 -> node2: SelectorSpreadPriority, Score: (10)
generic_scheduler.go:748] pod1_namespace1 -> node1: NodeAffinityPriority, Score: (0)
generic_scheduler.go:748] pod1_namespace1 -> node3: NodeAffinityPriority, Score: (0)
generic_scheduler.go:748] pod1_namespace1 -> node2: NodeAffinityPriority, Score: (0)
interpod_affinity.go:232] pod1 -> node1: InterPodAffinityPriority, Score: (0)
interpod_affinity.go:232] pod1 -> node3: InterPodAffinityPriority, Score: (0)
interpod_affinity.go:232] pod1 -> node2: InterPodAffinityPriority, Score: (10)
generic_scheduler.go:803] Host node1 => Score 100040
generic_scheduler.go:803] Host node3 => Score 100038
generic_scheduler.go:803] Host node2 => Score 100050
scheduler_binder.go:256] AssumePodVolumes for pod "namespace1/pod1", node "node2"
scheduler_binder.go:266] AssumePodVolumes for pod "namespace1/pod1", node "node2": all PVCs bound and nothing to do
factory.go:727] Attempting to bind pod1 to node2

Best Answer

I can tell from the logs that your pod will always be scheduled on node2, because it looks like you have some sort of PodAffinity that gives node2 an extra 10 points, bringing its total to 50.

What is a bit odd is that I count 48 for node3, yet a -10 seems to be getting stuck somewhere (the logged total is 38). Maybe it comes from the affinity, from some entries that don't show up in the logs, or simply from a bug in the way the scheduler does the calculation. You may have to dig into the kube-scheduler code if you'd like to know more.
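For illustration, a preferred pod-affinity term in pod1's spec would produce exactly the pattern in the InterPodAffinityPriority lines: nodes whose existing pods match the term get the maximum normalized score of 10, everything else gets 0. A minimal sketch using the client-go API types (the app=backend label is made up; your actual term will differ):

package example

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// hypotheticalAffinity shows the kind of spec that would give node2 its
// extra 10: pod1 "prefers" nodes already running pods labeled app=backend,
// and only node2 has such pods.
var hypotheticalAffinity = &corev1.Affinity{
	PodAffinity: &corev1.PodAffinity{
		PreferredDuringSchedulingIgnoredDuringExecution: []corev1.WeightedPodAffinityTerm{{
			Weight: 100, // relative weight; the priority normalizes results to 0-10
			PodAffinityTerm: corev1.PodAffinityTerm{
				LabelSelector: &metav1.LabelSelector{
					MatchLabels: map[string]string{"app": "backend"}, // made-up label
				},
				TopologyKey: "kubernetes.io/hostname",
			},
		}},
	},
}

Checking kubectl get pod pod1 -o yaml for an affinity block like this would confirm where node2's extra 10 comes from.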

This is what I have:

node1 7 + 3 + 10 + 10 + 10 = 40
node2 7 + 3 + 10 + 10 + 10 + 10 = 50
node3 9 + 9 + 10 + 10 + 10 = 48
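For what it's worth, the BalancedResourceAllocation and LeastResourceAllocation numbers in the logs can be reproduced from the logged capacity/request figures alone. Below is a minimal sketch (not the scheduler's actual code) of the pre-1.17 formulas: the least-requested score is ((capacity - requested) * 10) / capacity per resource, averaged over CPU and memory with integer math, and the balanced score is 10 - |cpuFraction - memFraction| * 10, truncated. As for the extra 100000 in each final Host score, that most likely comes from NodePreferAvoidPodsPriority, which has weight 10000 in the default provider and scores 10 on every node without a preferAvoidPods annotation, so it cancels out of the ranking.

package main

import "fmt"

// leastRequestedScore mirrors ((capacity - requested) * 10) / capacity,
// computed per resource with integer math.
func leastRequestedScore(requested, capacity int64) int64 {
	if capacity == 0 || requested > capacity {
		return 0
	}
	return ((capacity - requested) * 10) / capacity
}

// balancedScore mirrors 10 - |cpuFraction - memFraction| * 10, truncated.
func balancedScore(cpuReq, cpuCap, memReq, memCap int64) int64 {
	cpuFraction := float64(cpuReq) / float64(cpuCap)
	memFraction := float64(memReq) / float64(memCap)
	diff := cpuFraction - memFraction
	if diff < 0 {
		diff = -diff
	}
	return int64(10 - diff*10)
}

func main() {
	nodes := []struct {
		name           string
		cpuReq, cpuCap int64 // millicores, from the logs
		memReq, memCap int64 // bytes, from the logs
	}{
		{"node1", 40230, 56000, 122473676800, 270255251456},
		{"node2", 43450, 56000, 133693440000, 270255247360},
		{"node3", 800, 56000, 807403520, 270255251456},
	}
	for _, n := range nodes {
		least := (leastRequestedScore(n.cpuReq, n.cpuCap) +
			leastRequestedScore(n.memReq, n.memCap)) / 2
		balanced := balancedScore(n.cpuReq, n.cpuCap, n.memReq, n.memCap)
		fmt.Printf("%s: LeastResourceAllocation=%d BalancedResourceAllocation=%d\n",
			n.name, least, balanced)
	}
}

Running it prints 3 and 7 for node1 and node2, and 9 and 9 for node3, matching the log lines above; so the resource side of the calculation checks out, and node3's missing 10 has to come from one of the other priorities.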

Regarding "Kubernetes: kube-scheduler is not scoring nodes correctly when assigning pods", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/58885793/
