I have a Kubernetes cluster v1.8. I am trying to set up Prometheus on the cluster, essentially to monitor my services and deployments. But with the ConfigMap below, I cannot see any of my services or deployments.
apiVersion: v1
kind: ConfigMap
metadata:
  name: prometheus
data:
  prometheus.yml: |-
    global:
      scrape_interval: 15s
    scrape_configs:
    - job_name: 'kubernetes-apiservers'
      kubernetes_sd_configs:
      - role: endpoints
      # Default to scraping over https. If required, just disable this or change to
      # `http`.
      scheme: https
      # This TLS & bearer token file config is used to connect to the actual scrape
      # endpoints for cluster components. This is separate to discovery auth
      # configuration because discovery & scraping are two separate concerns in
      # Prometheus. The discovery auth config is automatic if Prometheus runs inside
      # the cluster. Otherwise, more config options have to be provided within the
      # <kubernetes_sd_config>.
      tls_config:
        ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
        # If your node certificates are self-signed or use a different CA to the
        # master CA, then disable certificate verification below. Note that
        # certificate verification is an integral part of a secure infrastructure
        # so this should only be disabled in a controlled environment. You can
        # disable certificate verification by uncommenting the line below.
        #
        # insecure_skip_verify: true
      bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
      # Keep only the default/kubernetes service endpoints for the https port. This
      # will add targets for each API server which Kubernetes adds an endpoint to
      # the default/kubernetes service.
      relabel_configs:
      - source_labels: [__meta_kubernetes_namespace, __meta_kubernetes_service_name, __meta_kubernetes_endpoint_port_name]
        action: keep
        regex: default;kubernetes;https
    # Scrape config for nodes (kubelet).
    #
    # Rather than connecting directly to the node, the scrape is proxied though the
    # Kubernetes apiserver. This means it will work if Prometheus is running out of
    # cluster, or can't connect to nodes for some other reason (e.g. because of
    # firewalling).
    - job_name: 'kubernetes-nodes'
      # Default to scraping over https. If required, just disable this or change to
      # `http`.
      scheme: https
      # This TLS & bearer token file config is used to connect to the actual scrape
      # endpoints for cluster components. This is separate to discovery auth
      # configuration because discovery & scraping are two separate concerns in
      # Prometheus. The discovery auth config is automatic if Prometheus runs inside
      # the cluster. Otherwise, more config options have to be provided within the
      # <kubernetes_sd_config>.
      tls_config:
        ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
      bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
      kubernetes_sd_configs:
      - role: node
      relabel_configs:
      - action: labelmap
        regex: __meta_kubernetes_node_label_(.+)
      - target_label: __address__
        replacement: kubernetes.default.svc:443
      - source_labels: [__meta_kubernetes_node_name]
        regex: (.+)
        target_label: __metrics_path__
        replacement: /api/v1/nodes/${1}/proxy/metrics
    # Scrape config for Kubelet cAdvisor.
    #
    # This is required for Kubernetes 1.7.3 and later, where cAdvisor metrics
    # (those whose names begin with 'container_') have been removed from the
    # Kubelet metrics endpoint. This job scrapes the cAdvisor endpoint to
    # retrieve those metrics.
    #
    # In Kubernetes 1.7.0-1.7.2, these metrics are only exposed on the cAdvisor
    # HTTP endpoint; use "replacement: /api/v1/nodes/${1}:4194/proxy/metrics"
    # in that case (and ensure cAdvisor's HTTP server hasn't been disabled with
    # the --cadvisor-port=0 Kubelet flag).
    #
    # This job is not necessary and should be removed in Kubernetes 1.6 and
    # earlier versions, or it will cause the metrics to be scraped twice.
    - job_name: 'kubernetes-cadvisor'
      # Default to scraping over https. If required, just disable this or change to
      # `http`.
      scheme: https
      # This TLS & bearer token file config is used to connect to the actual scrape
      # endpoints for cluster components. This is separate to discovery auth
      # configuration because discovery & scraping are two separate concerns in
      # Prometheus. The discovery auth config is automatic if Prometheus runs inside
      # the cluster. Otherwise, more config options have to be provided within the
      # <kubernetes_sd_config>.
      tls_config:
        ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
      bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
      kubernetes_sd_configs:
      - role: node
      relabel_configs:
      - action: labelmap
        regex: __meta_kubernetes_node_label_(.+)
      - target_label: __address__
        replacement: kubernetes.default.svc:443
      - source_labels: [__meta_kubernetes_node_name]
        regex: (.+)
        target_label: __metrics_path__
        replacement: /api/v1/nodes/${1}/proxy/metrics/cadvisor
    # Scrape config for service endpoints.
    #
    # The relabeling allows the actual service scrape endpoint to be configured
    # via the following annotations:
    #
    # * `prometheus.io/scrape`: Only scrape services that have a value of `true`
    # * `prometheus.io/scheme`: If the metrics endpoint is secured then you will need
    # to set this to `https` & most likely set the `tls_config` of the scrape config.
    # * `prometheus.io/path`: If the metrics path is not `/metrics` override this.
    # * `prometheus.io/port`: If the metrics are exposed on a different port to the
    # service then set this appropriately.
    - job_name: 'kubernetes-service-endpoints'
      kubernetes_sd_configs:
      - role: endpoints
      relabel_configs:
      - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scrape]
        action: keep
        regex: true
      - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scheme]
        action: replace
        target_label: __scheme__
        regex: (https?)
      - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_path]
        action: replace
        target_label: __metrics_path__
        regex: (.+)
      - source_labels: [__address__, __meta_kubernetes_service_annotation_prometheus_io_port]
        action: replace
        target_label: __address__
        regex: ([^:]+)(?::\d+)?;(\d+)
        replacement: $1:$2
      - action: labelmap
        regex: __meta_kubernetes_service_label_(.+)
      - source_labels: [__meta_kubernetes_namespace]
        action: replace
        target_label: kubernetes_namespace
      - source_labels: [__meta_kubernetes_service_name]
        action: replace
        target_label: kubernetes_name
    # Example scrape config for probing services via the Blackbox Exporter.
    #
    # The relabeling allows the actual service scrape endpoint to be configured
    # via the following annotations:
    #
    # * `prometheus.io/probe`: Only probe services that have a value of `true`
    - job_name: 'kubernetes-services'
      metrics_path: /probe
      kubernetes_sd_configs:
      - role: service
      relabel_configs:
      - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_probe]
        action: keep
        regex: true
      - source_labels: [__address__]
        target_label: __param_target
      - target_label: __address__
        replacement: blackbox-exporter.example.com:9115
      - source_labels: [__param_target]
        target_label: instance
      - action: labelmap
        regex: __meta_kubernetes_service_label_(.+)
      - source_labels: [__meta_kubernetes_namespace]
        target_label: kubernetes_namespace
      - source_labels: [__meta_kubernetes_service_name]
        target_label: kubernetes_name
    # Example scrape config for probing ingresses via the Blackbox Exporter.
    #
    # The relabeling allows the actual ingress scrape endpoint to be configured
    # via the following annotations:
    #
    # * `prometheus.io/probe`: Only probe services that have a value of `true`
    # Example scrape config for pods
    #
    # The relabeling allows the actual pod scrape endpoint to be configured via the
    # following annotations:
    #
    # * `prometheus.io/scrape`: Only scrape pods that have a value of `true`
    # * `prometheus.io/path`: If the metrics path is not `/metrics` override this.
    # * `prometheus.io/port`: Scrape the pod on the indicated port instead of the
    # pod's declared ports (default is a port-free target if none are declared).
    - job_name: 'kubernetes-pods'
      kubernetes_sd_configs:
      - role: pod
      relabel_configs:
      - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
        action: keep
        regex: true
      - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_path]
        action: replace
        target_label: __metrics_path__
        regex: (.+)
      - source_labels: [__address__, __meta_kubernetes_pod_annotation_prometheus_io_port]
        action: replace
        regex: ([^:]+)(?::\d+)?;(\d+)
        replacement: $1:$2
        target_label: __address__
      - action: labelmap
        regex: __meta_kubernetes_pod_label_(.+)
      - source_labels: [__meta_kubernetes_namespace]
        action: replace
        target_label: kubernetes_namespace
      - source_labels: [__meta_kubernetes_pod_name]
        action: replace
        target_label: kubernetes_pod_name
Best answer

As the comments in the scrape job named "kubernetes-pods" in the config you pasted above already state:

prometheus.io/scrape: Only scrape pods that have a value of `true`

The prometheus.io/path and prometheus.io/port annotations configure the path and port on which the application exposes its metrics.

The keep rules in the kubernetes-pods and kubernetes-service-endpoints jobs drop every target whose pod or service does not carry prometheus.io/scrape: "true", so your services and deployments will only show up once you add these annotations to them.
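As a minimal sketch (not from the original question): a Deployment whose pods should be discovered by the kubernetes-pods job needs the annotations on the pod template, and a Service picked up by the kubernetes-service-endpoints job needs them in its own metadata. The names (my-app), the image, port 8080 and the /metrics path below are placeholders for whatever your application actually exposes.

apiVersion: apps/v1beta2        # Deployment API group on Kubernetes 1.8; apps/v1 on 1.9+
kind: Deployment
metadata:
  name: my-app                  # hypothetical name, adjust to your workload
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
      annotations:
        prometheus.io/scrape: "true"    # required by the keep rule of the kubernetes-pods job
        prometheus.io/path: "/metrics"  # only needed if the metrics path is not /metrics
        prometheus.io/port: "8080"      # port on which the app serves its metrics
    spec:
      containers:
      - name: my-app
        image: my-app:latest            # placeholder image
        ports:
        - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: my-app
  annotations:
    prometheus.io/scrape: "true"        # required by the kubernetes-service-endpoints job
    prometheus.io/port: "8080"
spec:
  selector:
    app: my-app
  ports:
  - port: 8080
    targetPort: 8080

After applying manifests like these and reloading Prometheus, the endpoints should appear under Status -> Targets in the Prometheus UI, listed under the kubernetes-pods and kubernetes-service-endpoints jobs.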
Regarding "kubernetes - Monitoring Kubernetes services or deployments via Prometheus", a similar question can be found on Stack Overflow: https://stackoverflow.com/questions/47990310/