
kubernetes - How to configure the Kubernetes HPA using Istio's Prometheus?

Reposted · Author: 行者123 · Updated: 2023-12-02 11:35:44

We have an Istio cluster and we are trying to configure horizontal pod autoscaling for Kubernetes. We want to use the request count as a custom metric for the HPA. How can we use Istio's Prometheus for this purpose?

Best Answer

This question turned out to be much more complex than I expected, but in the end I found the answer.

  • First, you need to configure your application to provide custom metrics; this is done on the application-development side. Here is an example of how to do it in Go: Watching Metrics With Prometheus. A rough sketch is shown below.
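    As an illustration only (this is not the podinfo code from the linked article), a minimal Go service instrumented with the Prometheus client library could look like the sketch below; the metric name and port are placeholders. With the custom metrics adapter's default rules, a counter such as http_requests_total is typically exposed to the custom metrics API as a per-second rate named http_requests.

    package main

    import (
        "net/http"

        "github.com/prometheus/client_golang/prometheus"
        "github.com/prometheus/client_golang/prometheus/promauto"
        "github.com/prometheus/client_golang/prometheus/promhttp"
    )

    // httpRequests counts handled requests; the HPA later scales on this metric.
    var httpRequests = promauto.NewCounter(prometheus.CounterOpts{
        Name: "http_requests_total",
        Help: "Total number of HTTP requests handled.",
    })

    func main() {
        http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
            httpRequests.Inc()
            w.Write([]byte("ok"))
        })
        // Expose the metrics on /metrics so Prometheus can scrape them
        // (the same path/port the prometheus.io annotations point to).
        http.Handle("/metrics", promhttp.Handler())
        http.ListenAndServe(":9898", nil)
    }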
  • Second, you need to define a Deployment for the application (or a Pod, or whatever else you need) and deploy it to Kubernetes, for example:
    apiVersion: extensions/v1beta1
    kind: Deployment
    metadata:
      name: podinfo
    spec:
      replicas: 2
      template:
        metadata:
          labels:
            app: podinfo
          annotations:
            prometheus.io/scrape: 'true'
        spec:
          containers:
          - name: podinfod
            image: stefanprodan/podinfo:0.0.1
            imagePullPolicy: Always
            command:
              - ./podinfo
              - -port=9898
              - -logtostderr=true
              - -v=2
            volumeMounts:
              - name: metadata
                mountPath: /etc/podinfod/metadata
                readOnly: true
            ports:
              - containerPort: 9898
                protocol: TCP
            readinessProbe:
              httpGet:
                path: /readyz
                port: 9898
              initialDelaySeconds: 1
              periodSeconds: 2
              failureThreshold: 1
            livenessProbe:
              httpGet:
                path: /healthz
                port: 9898
              initialDelaySeconds: 1
              periodSeconds: 3
              failureThreshold: 2
            resources:
              requests:
                memory: "32Mi"
                cpu: "1m"
              limits:
                memory: "256Mi"
                cpu: "100m"
          volumes:
            - name: metadata
              downwardAPI:
                items:
                  - path: "labels"
                    fieldRef:
                      fieldPath: metadata.labels
                  - path: "annotations"
                    fieldRef:
                      fieldPath: metadata.annotations
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: podinfo
      labels:
        app: podinfo
    spec:
      type: NodePort
      ports:
        - port: 9898
          targetPort: 9898
          nodePort: 31198
          protocol: TCP
      selector:
        app: podinfo

    Note the annotation prometheus.io/scrape: 'true'. It is required to tell Prometheus to read metrics from this resource. Also note that there are two additional annotations with default values; if you changed them in your application, you need to add them with the correct values:
  • prometheus.io/path: If the metrics path is not /metrics, define it with this annotation.
  • prometheus.io/port: Scrape the pod on the indicated port instead of the pod's declared ports (default is a port-free target if none are declared).
  • Next, the Prometheus instance deployed with Istio uses a configuration modified for Istio's purposes, which by default skips custom metrics from pods. Therefore, you need to modify it slightly.
    In my case, I took the configuration for pod metrics from this example and modified Istio's Prometheus configuration only for pods:
    kubectl edit configmap -n istio-system prometheus

    I changed the order of the labels according to the example mentioned above:
    # pod's declared ports (default is a port-free target if none are declared).
    - job_name: 'kubernetes-pods'
      # if you want to use metrics on jobs, set the below field to
      # true to prevent Prometheus from setting the `job` label
      # automatically.
      honor_labels: false
      kubernetes_sd_configs:
      - role: pod
      # skip verification so you can do HTTPS to pods
      tls_config:
        insecure_skip_verify: true
      # make sure your labels are in order
      relabel_configs:
      # these labels tell Prometheus to automatically attach source
      # pod and namespace information to each collected sample, so
      # that they'll be exposed in the custom metrics API automatically.
      - source_labels: [__meta_kubernetes_namespace]
        action: replace
        target_label: namespace
      - source_labels: [__meta_kubernetes_pod_name]
        action: replace
        target_label: pod
      # these labels tell Prometheus to look for
      # prometheus.io/{scrape,path,port} annotations to configure
      # how to scrape
      - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
        action: keep
        regex: true
      - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_path]
        action: replace
        target_label: __metrics_path__
        regex: (.+)
      - source_labels: [__address__, __meta_kubernetes_pod_annotation_prometheus_io_port]
        action: replace
        regex: ([^:]+)(?::\d+)?;(\d+)
        replacement: $1:$2
        target_label: __address__
      - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scheme]
        action: replace
        target_label: __scheme__

    After that, the custom metrics appeared in Prometheus. However, be careful when changing the Prometheus configuration, because some metrics required by Istio may disappear; check everything carefully.
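    Before moving on, it can help to confirm that Prometheus is actually collecting the pod metrics. Below is a small sketch that queries the Prometheus HTTP API; the in-cluster address and the metric name are assumptions taken from this guide, so adjust them to your setup.

    package main

    import (
        "fmt"
        "io"
        "net/http"
        "net/url"
    )

    func main() {
        // Assumed in-cluster Prometheus address used in this guide; adjust as needed.
        base := "http://prometheus.istio-system:9090/api/v1/query"
        q := url.Values{"query": {"http_requests_total"}} // your custom metric name

        resp, err := http.Get(base + "?" + q.Encode())
        if err != nil {
            panic(err)
        }
        defer resp.Body.Close()

        body, _ := io.ReadAll(resp.Body)
        // The raw JSON should contain your metric with pod and namespace labels.
        fmt.Println(string(body))
    }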
  • Now it is time to install the Prometheus custom metric adapter.
  • Download this repository.
  • Change the address of the Prometheus server in the file <repository-directory>/deploy/manifests/custom-metrics-apiserver-deployment.yaml, for example: - --prometheus-url=http://prometheus.istio-system:9090/
  • Run the command kubectl apply -f <repository-directory>/deploy/manifests. After some time, custom.metrics.k8s.io/v1beta1 should appear in the output of the command kubectl api-versions.

  • Also, check the output of the custom metrics API using the commands kubectl get --raw "/apis/custom.metrics.k8s.io/v1beta1" | jq . and kubectl get --raw "/apis/custom.metrics.k8s.io/v1beta1/namespaces/default/pods/*/http_requests" | jq .
    The output of the latter command should look similar to the following example:
    {
      "kind": "MetricValueList",
      "apiVersion": "custom.metrics.k8s.io/v1beta1",
      "metadata": {
        "selfLink": "/apis/custom.metrics.k8s.io/v1beta1/namespaces/default/pods/%2A/http_requests"
      },
      "items": [
        {
          "describedObject": {
            "kind": "Pod",
            "namespace": "default",
            "name": "podinfo-6b86c8ccc9-kv5g9",
            "apiVersion": "/__internal"
          },
          "metricName": "http_requests",
          "timestamp": "2018-01-10T16:49:07Z",
          "value": "901m"
        },
        {
          "describedObject": {
            "kind": "Pod",
            "namespace": "default",
            "name": "podinfo-6b86c8ccc9-nm7bl",
            "apiVersion": "/__internal"
          },
          "metricName": "http_requests",
          "timestamp": "2018-01-10T16:49:07Z",
          "value": "898m"
        }
      ]
    }

    If it does, you can move on to the next step. If not, check which APIs are available for Pods in CustomMetrics with kubectl get --raw "/apis/custom.metrics.k8s.io/v1beta1" | jq . | grep "pods/" and for http_requests with kubectl get --raw "/apis/custom.metrics.k8s.io/v1beta1" | jq . | grep "http". The metric names are generated from the metrics Prometheus collects from the pods; if they are empty, you need to investigate in that direction.
  • The last step is to configure the HPA and test it. In my case, I created an HPA for the podinfo application defined earlier:
    apiVersion: autoscaling/v2beta1
    kind: HorizontalPodAutoscaler
    metadata:
      name: podinfo
    spec:
      scaleTargetRef:
        apiVersion: extensions/v1beta1
        kind: Deployment
        name: podinfo
      minReplicas: 2
      maxReplicas: 10
      metrics:
      - type: Pods
        pods:
          metricName: http_requests
          targetAverageValue: 10
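    For intuition, the HPA roughly computes the desired replica count as ceil(currentReplicas × currentAverageValue / targetAverageValue), ignoring tolerances and stabilization behavior. A small sketch of that rule with made-up numbers:

    package main

    import (
        "fmt"
        "math"
    )

    // desiredReplicas is a simplified version of the HPA scaling rule:
    // ceil(currentReplicas * currentAverageValue / targetAverageValue).
    func desiredReplicas(currentReplicas int, currentAvg, target float64) int {
        return int(math.Ceil(float64(currentReplicas) * currentAvg / target))
    }

    func main() {
        // e.g. 2 replicas averaging ~25 req/s each against a target of 10
        // would scale out to ceil(2 * 25 / 10) = 5 replicas.
        fmt.Println(desiredReplicas(2, 25, 10))
    }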

    And I used a simple Go application to generate test load:
    #install hey
    go get -u github.com/rakyll/hey
    #do 10K requests rate limited at 25 QPS
    hey -n 10000 -q 5 -c 5 http://<K8S-IP>:31198/healthz

    After some time, I saw the number of replicas change by using the commands kubectl describe hpa and kubectl get hpa.

    I used the instructions on creating custom metrics from the article Ensure High Availability and Uptime With Kubernetes Horizontal Pod Autoscaler and Prometheus.

    All the useful links in one place:
  • Watching Metrics With Prometheus - an example of adding metrics to your application
  • k8s-prom-hpa - an example of creating custom metrics for Prometheus (the same as the article above)
  • Kubernetes Custom Metrics Adapter for Prometheus
  • Setting up the custom metrics adapter and sample app
  • Regarding "kubernetes - How to configure the Kubernetes HPA using Istio's Prometheus?", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/51840970/
