
monitoring - Alerts firing in Prometheus but not showing up in Alertmanager


I can't figure out why Alertmanager is not receiving alerts from Prometheus, and would appreciate a quick hand with this. I'm fairly new to Prometheus and Alertmanager. I'm using an MS Teams webhook bridge to push the notifications from Alertmanager.

Alertmanager.yml

global:
  resolve_timeout: 5m

route:
  group_by: ['critical','severity']
  group_wait: 10s
  group_interval: 10s
  repeat_interval: 1h
  receiver: 'alert_channel'

receivers:
  - name: 'alert_channel'
    webhook_configs:
      - url: 'http://localhost:2000/alert_channel'
        send_resolved: true

prometheus.yml (only the relevant part)

# my global config
global:
  scrape_interval: 15s     # Set the scrape interval to every 15 seconds. Default is every 1 minute.
  evaluation_interval: 15s # Evaluate rules every 15 seconds. The default is every 1 minute.
  # scrape_timeout is set to the global default (10s).

# Alertmanager configuration
alerting:
  alertmanagers:
    - static_configs:
        - targets:
            - localhost:9093

# Load rules once and periodically evaluate them according to the global 'evaluation_interval'.
rule_files:
  # - "first_rules.yml"
  # - "second_rules.yml"
  - alert_rules.yml

# A scrape configuration containing exactly one endpoint to scrape:
# Here it's Prometheus itself.
scrape_configs:
  # The job name is added as a label `job=<job_name>` to any timeseries scraped from this config.
  - job_name: 'kafka'

    # metrics_path defaults to '/metrics'
    # scheme defaults to 'http'

    static_configs:
      - targets: ['localhost:8080']
        labels:
          service: 'Kafka'

alertmanager.service

[Unit]
Description=Prometheus Alert Manager
Wants=network-online.target
After=network-online.target

[Service]
Type=simple
User=alertmanager
Group=alertmanager
ExecStart=/usr/local/bin/alertmanager \
  --config.file=/etc/alertmanager/alertmanager.yml \
  --storage.path=/data/alertmanager \
  --web.listen-address=127.0.0.1:9093
Restart=always

[Install]
WantedBy=multi-user.target

alert_rules.yml

groups:
  - name: alert_rules
    rules:
      - alert: ServiceDown
        expr: up == 0
        for: 1m
        labels:
          severity: "critical"
        annotations:
          summary: "Service {{ $labels.service }} down!"
          description: "{{ $labels.service }} of job {{ $labels.job }} has been down for more than 1 minute."

      - alert: HostOutOfMemory
        expr: node_memory_MemAvailable_bytes / node_memory_MemTotal_bytes * 100 < 25
        for: 5m
        labels:
          severity: warning
        annotations:
          summary: "Host out of memory (instance {{ $labels.instance }})"
          description: "Node memory is filling up (< 25% left)\n  VALUE = {{ $value }}\n  LABELS: {{ $labels }}"

      - alert: HostOutOfDiskSpace
        expr: (node_filesystem_avail_bytes{mountpoint="/"} * 100) / node_filesystem_size_bytes{mountpoint="/"} < 40
        for: 1s
        labels:
          severity: warning
        annotations:
          summary: "Host out of disk space (instance {{ $labels.instance }})"
          description: "Disk is almost full (< 40% left)\n  VALUE = {{ $value }}\n  LABELS: {{ $labels }}"

The alerts fire in Prometheus (screenshot omitted).

But I can't see these alerts in Alertmanager (screenshot omitted).

I'm out of ideas at this point and need help; I've been digging into this since last week.

Best answer

Your Alertmanager configuration is wrong: group_by expects a set of label names, and as far as I can tell critical is a label value, not a label name. So just remove critical and you should be good to go.
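
For illustration, a corrected route keeping the receiver and timings from the question might look like the following sketch (grouping by alertname as well is an assumption on my part, not something the original config had):

route:
  group_by: ['alertname', 'severity']   # label names only; 'critical' is a value of severity, not a label name
  group_wait: 10s
  group_interval: 10s
  repeat_interval: 1h
  receiver: 'alert_channel'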

Also have a look at this blog post, it is quite helpful: https://www.robustperception.io/whats-the-difference-between-group_interval-group_wait-and-repeat_interval


Edit 1

If you want the receiver alert_channel to only receive alerts with severity critical, you have to create a child route and use the match attribute.

Something along these lines:

route:
  group_by: ['...']   # good if the alert volume is very low
  group_wait: 15s
  group_interval: 5m
  repeat_interval: 1h
  routes:
    - match:
        severity: critical
      receiver: alert_channel
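
Note that on Alertmanager 0.22 and later the match keyword is deprecated in favour of matchers; assuming such a version, an equivalent child route would be a sketch like this:

routes:
  - matchers:
      - severity="critical"
    receiver: alert_channel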

Edit 2

If that doesn't work, try setting the receiver directly on the top-level route (group_by: ['...'] is the special value that groups by all labels):

route:
  group_by: ['...']
  group_wait: 15s
  group_interval: 5m
  repeat_interval: 1h
  receiver: alert_channel

That should work. Also check your Prometheus logs and see whether you find a hint there.
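
Putting the pieces together, a minimal complete alertmanager.yml that routes everything to the MS Teams webhook bridge from the question might look like this (a sketch under the assumptions above, not a verified drop-in file; the group_by label set is my choice):

global:
  resolve_timeout: 5m

route:
  group_by: ['alertname', 'severity']   # label names only
  group_wait: 15s
  group_interval: 5m
  repeat_interval: 1h
  receiver: 'alert_channel'

receivers:
  - name: 'alert_channel'
    webhook_configs:
      - url: 'http://localhost:2000/alert_channel'
        send_resolved: true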

A similar question about "monitoring - alerts firing in Prometheus but not in Alertmanager" can be found on Stack Overflow: https://stackoverflow.com/questions/64352020/
