
docker - Basic docker-compose Prometheus/Grafana example with one node exporter


Question: How do I configure the Prometheus server to scrape data from the node exporter?

I have already set up the data source in Grafana and can view the default dashboard using the docker-compose.yml below. The three services are:

  • Prometheus server
  • Node exporter
  • Grafana

docker-compose.yml:

    version: '2'

    services:

      prometheus_srv:
        image: prom/prometheus
        container_name: prometheus_server
        hostname: prometheus_server

      prometheus_node:
        image: prom/node-exporter
        container_name: prom_node_exporter
        hostname: prom_node_exporter
        depends_on:
          - prometheus_srv

      grafana:
        image: grafana/grafana
        container_name: grafana_server
        hostname: grafana_server
        depends_on:
          - prometheus_srv

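One way to get Prometheus to scrape the exporter from this setup is to bind-mount a scrape configuration into the Prometheus container and publish the web ports. The following is only a minimal sketch, assuming a prometheus.yml sitting next to docker-compose.yml and reusing the service and container names from above:

    version: '2'

    services:

      prometheus_srv:
        image: prom/prometheus
        container_name: prometheus_server
        hostname: prometheus_server
        volumes:
          # Assumes prometheus.yml sits next to this docker-compose.yml
          - ./prometheus.yml:/etc/prometheus/prometheus.yml
        ports:
          - "9090:9090"   # Prometheus web UI / API

      prometheus_node:
        image: prom/node-exporter
        container_name: prom_node_exporter
        hostname: prom_node_exporter
        depends_on:
          - prometheus_srv

      grafana:
        image: grafana/grafana
        container_name: grafana_server
        hostname: grafana_server
        ports:
          - "3000:3000"   # Grafana web UI
        depends_on:
          - prometheus_srv

On the default network that docker-compose creates for the project (compose file format 2 and later), containers can reach each other by name, which is why a target such as prom_node_exporter:9100 resolves from inside the Prometheus container.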

Edit:

I used something similar to what @Daniel Lee shared, and it seems to work:
    # my global config
    global:
      scrape_interval: 10s     # By default, scrape targets every 15 seconds.
      evaluation_interval: 10s # By default, scrape targets every 15 seconds.

    scrape_configs:
      # Scrape Prometheus itself
      - job_name: 'prometheus'
        scrape_interval: 10s
        scrape_timeout: 10s
        static_configs:
          - targets: ['localhost:9090']

      # Scrape the Node Exporter
      - job_name: 'node'
        scrape_interval: 10s
        static_configs:
          - targets: ['prom_node_exporter:9100']

Best answer

Here is an example from the Grafana test instance of Prometheus, consisting of a Dockerfile and a YAML configuration file.

Dockerfile:

    FROM prom/prometheus
    ADD prometheus.yml /etc/prometheus/
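
In a compose setup like the one in the question, this image can be built in place instead of pulling prom/prometheus directly, so the configuration is baked in at build time. A rough sketch, assuming the Dockerfile and prometheus.yml live in a ./prometheus directory (that path is an assumption, not part of the answer):

    version: '2'

    services:

      prometheus_srv:
        # Build the custom image from the Dockerfile above; ./prometheus is an
        # assumed directory containing the Dockerfile and prometheus.yml
        build: ./prometheus
        container_name: prometheus_server
        hostname: prometheus_server
        ports:
          - "9090:9090"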

YAML file:
    # my global config
    global:
      scrape_interval: 10s     # By default, scrape targets every 15 seconds.
      evaluation_interval: 10s # By default, scrape targets every 15 seconds.
      # scrape_timeout is set to the global default (10s).

    # Load and evaluate rules in this file every 'evaluation_interval' seconds.
    rule_files:
      # - "first.rules"
      # - "second.rules"

    # A scrape configuration containing exactly one endpoint to scrape:
    # Here it's Prometheus itself.
    scrape_configs:
      # The job name is added as a label `job=<job_name>` to any timeseries scraped from this config.
      - job_name: 'prometheus'

        # Override the global default and scrape targets from this job every 5 seconds.
        scrape_interval: 10s
        scrape_timeout: 10s

        # metrics_path defaults to '/metrics'
        # scheme defaults to 'http'.

        static_configs:
          #- targets: ['localhost:9090', '172.17.0.1:9091', '172.17.0.1:9100', '172.17.0.1:9150']
          - targets: ['localhost:9090', '127.0.0.1:9091', '127.0.0.1:9100', '127.0.0.1:9150']
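
The 127.0.0.1 targets above come from the Grafana test instance, where Prometheus and the exporters all run on one host. In the containerised setup from the question, the node exporter is a separate container, so the targets should use the container name instead, as the edit above already does. A minimal scrape_configs sketch mirroring that edit:

    scrape_configs:
      # Prometheus scraping itself inside its own container
      - job_name: 'prometheus'
        static_configs:
          - targets: ['localhost:9090']

      # The node exporter, addressed by container name on the compose network
      - job_name: 'node'
        static_configs:
          - targets: ['prom_node_exporter:9100']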

The original question, "docker - basic docker-compose Prometheus/Grafana example with one node exporter", can be found on Stack Overflow: https://stackoverflow.com/questions/44652446/
