multithreading - When using a CPU request below 1000m in Kubernetes, can multiple threads in the same container run in parallel on multiple cores at the same time?

When I searched on Google, some answers said that in Kubernetes, 100m CPU means you will use one CPU core for 1/10 of the time, and 2300m CPU means you will use 2 full cores plus another core for 3/10 of the time. Is this correct?
I just want to know whether multiple threads can run in parallel on multiple cores at the same time when the CPU request is below 1000m in Kubernetes.

Best Answer

Regarding the first part, it is indeed possible to run tasks using only a fraction of the CPU's resources.
In Kubernetes documentation - Managing Resources for Containers you can find information on how to specify the minimum resources needed to run a Pod (requests) and the values that must not be exceeded (limits).
It is well described in this article:

Requests and limits are the mechanisms Kubernetes uses to control resources such as CPU and memory. Requests are what the container is guaranteed to get. If a container requests a resource, Kubernetes will only schedule it on a node that can give it that resource. Limits, on the other hand, make sure a container never goes above a certain value. The container is only allowed to go up to the limit, and then it is restricted.


CPU requests/limits:

CPU resources are defined in millicores. If your container needs two full cores to run, you would put the value 2000m. If your container only needs ¼ of a core, you would put a value of 250m. One thing to keep in mind about CPU requests is that if you put in a value larger than the core count of your biggest node, your pod will never be scheduled.
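
To make the quoted values concrete, here is a minimal manifest sketch (the Pod name, image, and memory values are placeholders, not from the original question) that requests a quarter of a core and caps the container at half a core:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: cpu-demo               # hypothetical name, for illustration only
spec:
  containers:
  - name: app
    image: nginx               # placeholder image
    resources:
      requests:
        cpu: "250m"            # guaranteed share: 1/4 of a core
        memory: "64Mi"
      limits:
        cpu: "500m"            # throttled once it uses more than 1/2 core of CPU time
        memory: "128Mi"
```

Note that these cpu values cap the total amount of CPU time the container may consume, not the number of cores its threads may touch, which is what makes the second part of this answer possible.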


Regarding the second part, you can use multiple threads in parallel. A good example is a Kubernetes Job.

A simple case is to create one Job object in order to reliably run one Pod to completion. The Job object will start a new Pod if the first Pod fails or is deleted (for example due to a node hardware failure or a node reboot). You can also use a Job to run multiple Pods in parallel.
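
A hedged sketch of such a parallel Job (the name, image, and command below are placeholders) that keeps up to three Pods running at once until six completions are reached:

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: parallel-demo          # hypothetical name, for illustration only
spec:
  completions: 6               # run six Pods to completion in total
  parallelism: 3               # at most three Pods run at the same time
  template:
    spec:
      containers:
      - name: worker
        image: busybox         # placeholder image
        command: ["sh", "-c", "echo processing one work item && sleep 5"]
      restartPolicy: Never
```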


See in particular the section about Parallel execution for Jobs.
You can also check Parallel Processing using Expansions to run multiple Jobs based on a common template; you can use this approach to process batches of work in parallel. In that document you can find an example with its description.
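
As a rough sketch of the expansion approach from that tutorial, you keep a template with an $ITEM placeholder and generate one Job manifest per work item (the names and labels below follow the tutorial's pattern but are illustrative):

```yaml
# job-tmpl.yaml - expanded into one Job per work item before it is applied
apiVersion: batch/v1
kind: Job
metadata:
  name: process-item-$ITEM
  labels:
    jobgroup: jobexample
spec:
  template:
    metadata:
      labels:
        jobgroup: jobexample
    spec:
      containers:
      - name: worker
        image: busybox         # placeholder image
        command: ["sh", "-c", "echo Processing item $ITEM && sleep 5"]
      restartPolicy: Never
```

Expanding the template (for example with a small sed loop over your work items, as the tutorial does) and creating the resulting manifests gives you several Jobs that Kubernetes runs side by side, subject to cluster capacity.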

Regarding "multithreading - When using a CPU request below 1000m in Kubernetes, can multiple threads in the same container run in parallel on multiple cores at the same time?", a similar question can be found on Stack Overflow: https://stackoverflow.com/questions/64420582/
