
kubernetes - Autoscaling a Google Cloud Endpoints backend deployment declaratively (in YAML)?


I have successfully followed the documentation here and here to deploy an API spec and a GKE backend to Cloud Endpoints.

This left me with a deployment.yaml that looks like this:

apiVersion: v1
kind: Service
metadata:
  name: esp-myproject
spec:
  ports:
  - port: 80
    targetPort: 8081
    protocol: TCP
    name: http
  selector:
    app: esp-myproject
  type: LoadBalancer
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: esp-myproject
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: esp-myproject
    spec:
      containers:
      - name: esp
        image: gcr.io/endpoints-release/endpoints-runtime:1
        args: [
          "--http_port=8081",
          "--backend=127.0.0.1:8080",
          "--service=myproject1-0-0.endpoints.myproject.cloud.goog",
          "--rollout_strategy=managed",
        ]
        ports:
        - containerPort: 8081
      - name: myproject
        image: gcr.io/myproject/my-image:v0.0.1
        ports:
        - containerPort: 8080

This creates a single replica of the app on the backend. So far so good...

I would now like to update the yaml file to declaratively specify auto-scaling parameters, so that multiple replicas of the app can run side by side when traffic to the endpoint justifies it.

I have done some reading (O'Reilly's book: Kubernetes Up & Running, the GCP docs, the K8s docs), but there are two things that are tripping me up:
  • I have read a number of times about the HorizontalPodAutoscaler, but it is not clear to me whether the deployment must make use of it in order to enjoy the benefits of autoscaling?
  • If so, I have seen examples in the docs of how to define the spec for the HorizontalPodAutoscaler in yaml, as shown below - but how would I combine this with my existing deployment.yaml?

  • HorizontalPodAutoscaler example (from the docs):
    apiVersion: autoscaling/v2beta2
    kind: HorizontalPodAutoscaler
    metadata:
      name: php-apache
      namespace: default
    spec:
      scaleTargetRef:
        apiVersion: apps/v1
        kind: Deployment
        name: php-apache
      minReplicas: 1
      maxReplicas: 10
      metrics:
      - type: Resource
        resource:
          name: cpu
          target:
            type: Utilization
            averageUtilization: 50

Thanks in advance to anyone who can shed some light on this for me.

Best Answer

    1. I've read a number of times about the HorizontalPodAutoscaler and it's not clear to me whether the deployment must make use of this in order to enjoy the benefits of autoscaling?


No, the Deployment doesn't have to, but it's the recommended approach and it's already built in. You could build your own automation that handles the scaling, but the question is why, when the HPA already supports it.
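As a side note, an equivalent autoscaler can also be created imperatively with kubectl; this is a minimal sketch, assuming the Deployment name esp-myproject from the question, and it produces the same kind of HPA object as the YAML shown further down:

# Scale esp-myproject between 1 and 10 replicas, targeting 50% average CPU utilization
kubectl autoscale deployment esp-myproject --cpu-percent=50 --min=1 --max=10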

    2. If so, I have seen examples in the docs of how to define the spec for the HorizontalPodAutoscaler in yaml as shown below - but how would I combine this with my existing deployment.yaml?


It should be fairly simple. You basically reference your Deployment in the HPA definition:
apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: my-esp-project-hpa
  namespace: default
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: esp-myproject   <== here
  minReplicas: 1
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 50
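To combine this with the existing deployment.yaml, you can either append the HPA manifest to that file (separated by ---) or keep it in its own file and apply both. A minimal sketch, assuming the file names from this question (hpa.yaml is a hypothetical name for the manifest above):

kubectl apply -f deployment.yaml
kubectl apply -f hpa.yaml

# Verify that the autoscaler has picked up the Deployment
kubectl get hpa my-esp-project-hpa

Note that for CPU-utilization-based scaling to work, the containers in the Deployment need CPU resource requests defined, and the cluster's metrics pipeline (metrics-server, enabled by default on GKE) must be available; otherwise the HPA will report the metric as unknown.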

On the topic of "kubernetes - Autoscaling a Google Cloud Endpoints backend deployment declaratively (in YAML)?", a similar question can be found on Stack Overflow: https://stackoverflow.com/questions/53619182/
