So here is the big picture: my goal is to index large amounts of (.txt) data using the ELK stack plus Filebeat.
Basically, my problem is that Filebeat cannot seem to send logs to Logstash. My guess is that some Docker networking configuration is off...
The code for my project is available at https://github.com/mhyousefi/elk-docker.
The ELK container
To bring it up, I have a docker-compose.yml that runs a container from the sebp/elk image; it looks like this:
version: '2'
services:
  elk:
    container_name: elk
    image: sebp/elk
    ports:
      - "5601:5601"
      - "9200:9200"
      - "5045:5044"
    volumes:
      - /path/to/volumed-folder:/logstash
    networks:
      - elk_net
networks:
  elk_net:
    driver: bridge
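A quick way to sanity-check that compose actually published the ports and created the bridge network, assuming the container is named elk as above (a common check, sketched here rather than taken from the original setup):

# show host -> container port mappings for the elk container
docker port elk
# list networks; note that compose prefixes the name with the project (folder) name
docker network ls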
Once the container is up and running, I run the following command inside it to start Logstash with my configuration:

$ /opt/logstash/bin/logstash --path.data /tmp/logstash/data -f /logstash/config/filebeat-config.conf
Sending Logstash's logs to /opt/logstash/logs which is now configured via log4j2.properties
[2018-08-14T11:51:11,693][INFO ][logstash.setting.writabledirectory] Creating directory {:setting=>"path.queue", :path=>"/tmp/logstash/data/queue"}
[2018-08-14T11:51:11,701][INFO ][logstash.setting.writabledirectory] Creating directory {:setting=>"path.dead_letter_queue", :path=>"/tmp/logstash/data/dead_letter_queue"}
[2018-08-14T11:51:12,194][WARN ][logstash.config.source.multilocal] Ignoring the 'pipelines.yml' file because modules or command line options are specified
[2018-08-14T11:51:12,410][INFO ][logstash.agent ] No persistent UUID file found. Generating new UUID {:uuid=>"3646b6e4-d540-4c9c-a38d-2769aef5a05e", :path=>"/tmp/logstash/data/uuid"}
[2018-08-14T11:51:13,089][INFO ][logstash.runner ] Starting Logstash {"logstash.version"=>"6.3.2"}
[2018-08-14T11:51:15,554][INFO ][logstash.pipeline ] Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>6, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50}
[2018-08-14T11:51:16,088][INFO ][logstash.outputs.elasticsearch] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://localhost:9200/]}}
[2018-08-14T11:51:16,101][INFO ][logstash.outputs.elasticsearch] Running health check to see if an Elasticsearch connection is working {:healthcheck_url=>http://localhost:9200/, :path=>"/"}
[2018-08-14T11:51:16,291][WARN ][logstash.outputs.elasticsearch] Restored connection to ES instance {:url=>"http://localhost:9200/"}
[2018-08-14T11:51:16,391][INFO ][logstash.outputs.elasticsearch] ES Output version determined {:es_version=>6}
[2018-08-14T11:51:16,398][WARN ][logstash.outputs.elasticsearch] Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>6}
[2018-08-14T11:51:16,460][INFO ][logstash.outputs.elasticsearch] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["//localhost:9200"]}
[2018-08-14T11:51:16,515][INFO ][logstash.outputs.elasticsearch] Using mapping template from {:path=>nil}
[2018-08-14T11:51:16,559][INFO ][logstash.outputs.elasticsearch] Attempting to install template {:manage_template=>{"template"=>"logstash-*", "version"=>60001, "settings"=>{"index.refresh_interval"=>"5s"}, "mappings"=>{"_default_"=>{"dynamic_templates"=>[{"message_field"=>{"path_match"=>"message", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false}}}, {"string_fields"=>{"match"=>"*", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false, "fields"=>{"keyword"=>{"type"=>"keyword", "ignore_above"=>256}}}}}], "properties"=>{"@timestamp"=>{"type"=>"date"}, "@version"=>{"type"=>"keyword"}, "geoip"=>{"dynamic"=>true, "properties"=>{"ip"=>{"type"=>"ip"}, "location"=>{"type"=>"geo_point"}, "latitude"=>{"type"=>"half_float"}, "longitude"=>{"type"=>"half_float"}}}}}}}}
[2018-08-14T11:51:16,688][INFO ][logstash.outputs.elasticsearch] Installing elasticsearch template to _template/logstash
[2018-08-14T11:51:16,899][INFO ][logstash.inputs.beats ] Beats inputs: Starting input listener {:address=>"0.0.0.0:5045"}
[2018-08-14T11:51:16,925][INFO ][logstash.pipeline ] Pipeline started successfully {:pipeline_id=>"main", :thread=>"#<Thread:0x54ab986e run>"}
[2018-08-14T11:51:17,170][INFO ][org.logstash.beats.Server] Starting server on port: 5045
[2018-08-14T11:51:17,187][INFO ][logstash.agent ] Pipelines running {:count=>1, :running_pipelines=>[:main], :non_running_pipelines=>[]}
[2018-08-14T11:51:17,637][INFO ][logstash.agent ] Successfully started Logstash API endpoint {:port=>9601}
My filebeat-config.conf looks like this:
input {
  beats {
    port => "5044"
  }
}

output {
  elasticsearch {
    hosts => [ "localhost:9200" ]
    index => "%{[@metadata][beat]}"
  }
}
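The pipeline definition can be syntax-checked before wiring Filebeat up to it; Logstash supports a test-and-exit flag for exactly this. A sketch, run inside the elk container with the same paths as above:

# parse the config and exit without actually starting the pipeline
/opt/logstash/bin/logstash --path.data /tmp/logstash/data -f /logstash/config/filebeat-config.conf --config.test_and_exit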
The Filebeat container
I create the container using the following docker-compose.yml file:
version: "2"
services:
  filebeat:
    container_name: filebeat
    hostname: filebeat
    image: docker.elastic.co/beats/filebeat:6.3.0
    user: root
    # command: ./filebeat -c /usr/share/filebeat-volume/config/filebeat.yml -E name=mybeat
    volumes:
      # "volumed-folder" lies under ${PROJECT_DIR}/filebeat or could be anywhere else you wish
      - /path/to/volumed-folder:/usr/share/filebeat/filebeat-volume:ro
    networks:
      - filebeat_net
networks:
  filebeat_net:
    external: true
I then replace the default filebeat.yml under /usr/share/filebeat with the one I have mounted as a volume, and run the command:
./filebeat -e -c ./filebeat.yml -E name="mybeat"
[root@filebeat filebeat]# ./filebeat -e -c ./filebeat.yml -E name="mybeat"
2018-08-14T12:13:16.325Z INFO instance/beat.go:492 Home path: [/usr/share/filebeat] Config path: [/usr/share/filebeat] Data path: [/usr/share/filebeat/data] Logs path: [/usr/share/filebeat/logs]
2018-08-14T12:13:16.325Z INFO instance/beat.go:499 Beat UUID: 3b4b3897-ef77-43ad-b982-89e8f690a96e
2018-08-14T12:13:16.325Z INFO [beat] instance/beat.go:716 Beat info {"system_info": {"beat": {"path": {"config": "/usr/share/filebeat", "data": "/usr/share/filebeat/data", "home": "/usr/share/filebeat", "logs": "/usr/share/filebeat/logs"}, "type": "filebeat", "uuid": "3b4b3897-ef77-43ad-b982-89e8f690a96e"}}}
2018-08-14T12:13:16.325Z INFO [beat] instance/beat.go:725 Build info {"system_info": {"build": {"commit": "a04cb664d5fbd4b1aab485d1766f3979c138fd38", "libbeat": "6.3.0", "time": "2018-06-11T22:34:44.000Z", "version": "6.3.0"}}}
2018-08-14T12:13:16.325Z INFO [beat] instance/beat.go:728 Go runtime info {"system_info": {"go": {"os":"linux","arch":"amd64","max_procs":6,"version":"go1.9.4"}}}
2018-08-14T12:13:16.327Z INFO [beat] instance/beat.go:732 Host info {"system_info": {"host": {"architecture":"x86_64","boot_time":"2018-08-04T17:34:15Z","containerized":true,"hostname":"filebeat","ips":["127.0.0.1/8","172.28.0.2/16"],"kernel_version":"4.4.0-116-generic","mac_addresses":["02:42:ac:1c:00:02"],"os":{"family":"redhat","platform":"centos","name":"CentOS Linux","version":"7 (Core)","major":7,"minor":5,"patch":1804,"codename":"Core"},"timezone":"UTC","timezone_offset_sec":0}}}
2018-08-14T12:13:16.328Z INFO [beat] instance/beat.go:761 Process info {"system_info": {"process": {"capabilities": {"inheritable":["chown","dac_override","fowner","fsetid","kill","setgid","setuid","setpcap","net_bind_service","net_raw","sys_chroot","mknod","audit_write","setfcap"],"permitted":["chown","dac_override","fowner","fsetid","kill","setgid","setuid","setpcap","net_bind_service","net_raw","sys_chroot","mknod","audit_write","setfcap"],"effective":["chown","dac_override","fowner","fsetid","kill","setgid","setuid","setpcap","net_bind_service","net_raw","sys_chroot","mknod","audit_write","setfcap"],"bounding":["chown","dac_override","fowner","fsetid","kill","setgid","setuid","setpcap","net_bind_service","net_raw","sys_chroot","mknod","audit_write","setfcap"],"ambient":null}, "cwd": "/usr/share/filebeat", "exe": "/usr/share/filebeat/filebeat", "name": "filebeat", "pid": 93, "ppid": 28, "seccomp": {"mode":"filter"}, "start_time": "2018-08-14T12:13:15.530Z"}}}
2018-08-14T12:13:16.328Z INFO instance/beat.go:225 Setup Beat: filebeat; Version: 6.3.0
2018-08-14T12:13:16.329Z INFO pipeline/module.go:81 Beat name: mybeat
2018-08-14T12:13:16.329Z WARN [cfgwarn] beater/filebeat.go:61 DEPRECATED: prospectors are deprecated, Use `inputs` instead. Will be removed in version: 7.0.0
2018-08-14T12:13:16.330Z INFO [monitoring] log/log.go:97 Starting metrics logging every 30s
2018-08-14T12:13:16.330Z INFO instance/beat.go:315 filebeat start running.
2018-08-14T12:13:16.330Z INFO registrar/registrar.go:112 Loading registrar data from /usr/share/filebeat/data/registry
2018-08-14T12:13:16.330Z INFO registrar/registrar.go:123 States Loaded from registrar: 0
2018-08-14T12:13:16.331Z WARN beater/filebeat.go:354 Filebeat is unable to load the Ingest Node pipelines for the configured modules because the Elasticsearch output is not configured/enabled. If you have already loaded the Ingest Node pipelines or are using Logstash pipelines, you can ignore this warning.
2018-08-14T12:13:16.331Z INFO crawler/crawler.go:48 Loading Inputs: 1
2018-08-14T12:13:16.331Z INFO log/input.go:111 Configured paths: [/usr/share/filebeat-volume/data/Shakespeare.txt]
2018-08-14T12:13:16.331Z INFO input/input.go:87 Starting input of type: log; ID: 1899165251698784346
2018-08-14T12:13:16.331Z INFO crawler/crawler.go:82 Loading and starting Inputs completed. Enabled inputs: 1
2018-08-14T12:13:46.334Z INFO [monitoring] log/log.go:124 Non-zero metrics in the last 30s {"monitoring": {"metrics": {"beat":{"cpu":{"system":{"ticks":20,"time":{"ms":24}},"total":{"ticks":30,"time":{"ms":36},"value":30},"user":{"ticks":10,"time":{"ms":12}}},"info":{"ephemeral_id":"16c484f0-0cf8-4c10-838d-b39755284af9","uptime":{"ms":30017}},"memstats":{"gc_next":4473924,"memory_alloc":3040104,"memory_total":3040104,"rss":21061632}},"filebeat":{"harvester":{"open_files":0,"running":0}},"libbeat":{"config":{"module":{"running":0}},"output":{"type":"logstash"},"pipeline":{"clients":1,"events":{"active":0}}},"registrar":{"states":{"current":0}},"system":{"cpu":{"cores":6},"load":{"1":1.46,"15":1.52,"5":1.66,"norm":{"1":0.2433,"15":0.2533,"5":0.2767}}}}}}
My filebeat.yml looks like this:
filebeat.inputs:
- type: log
  paths:
    - /path/to/a/log/file

output.logstash:
  hosts: ["elk:5044"]

setup.kibana:
  host: "localhost:5601"
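Filebeat itself can validate this file and probe the configured output before shipping anything; a sketch using Filebeat's built-in test subcommands, run from /usr/share/filebeat inside the container:

# check that filebeat.yml parses cleanly
./filebeat test config -c ./filebeat.yml
# try to connect to the configured Logstash output ("elk:5044")
./filebeat test output -c ./filebeat.yml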
I defined the networks sections in my docker-compose files so that my containers can communicate with each other using their container_names. That is why output.logstash points at the elk container:

output.logstash:
  hosts: ["elk:5044"]
However, when I do docker-compose up elk, I get the following response:
elk |
elk | ==> /var/log/elasticsearch/elasticsearch.log <==
elk | [2018-08-14T11:51:16,974][INFO ][o.e.c.m.MetaDataIndexTemplateService] [fZr_LDR] adding template [logstash] for index patterns [logstash-*]
But I cannot ping elk from inside my filebeat container; the hostname is not resolved.
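One likely reason the name does not resolve: docker-compose prefixes network names with the project (folder) name, so the elk compose file above actually creates a network named something like elk_elk_net, while the Filebeat compose file declares filebeat_net as external, i.e. a different network entirely. A diagnostic sketch (the exact network name depends on your folder name):

# list networks and look for the project-name prefixes
docker network ls
# see which containers are attached to the elk network
docker network inspect elk_elk_net --format '{{range .Containers}}{{.Name}} {{end}}'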
UPDATE (Aug 15, 2018)
I realized I do not need to open a port on the ELK container at all. What happens is that Logstash is listening on port 5044 inside the container. As long as the filebeat.yml file inside the Filebeat container can resolve the ELK host and then send its logs to port 5044 there ("elk:5044"), everything should work fine.
So I removed the "5045:5044" line, and fixed the networks section in the docker-compose.yml file of my Filebeat container to include the following:
networks:
  filebeat_net:
    external:
      name: elk_elk_net
As a result, when I ping elk, I get a connection. However, the connection between Logstash and Filebeat remains troublesome, and I keep getting the following message every 30 seconds:
2018-08-14T12:13:46.334Z INFO [monitoring] log/log.go:124 Non-zero metrics in the last 30s {"monitoring": {"metrics": {"beat":{"cpu":{"system":{"ticks":20,"time":{"ms":24}},"total":{"ticks":30,"time":{"ms":36},"value":30},"user":{"ticks":10,"time":{"ms":12}}},"info":{"ephemeral_id":"16c484f0-0cf8-4c10-838d-b39755284af9","uptime":{"ms":30017}},"memstats":{"gc_next":4473924,"memory_alloc":3040104,"memory_total":3040104,"rss":21061632}},"filebeat":{"harvester":{"open_files":0,"running":0}},"libbeat":{"config":{"module":{"running":0}},"output":{"type":"logstash"},"pipeline":{"clients":1,"events":{"active":0}}},"registrar":{"states":{"current":0}},"system":{"cpu":{"cores":6},"load":{"1":1.46,"15":1.52,"5":1.66,"norm":{"1":0.2433,"15":0.2533,"5":0.2767}}}}}}
In addition, with debug logging enabled, Filebeat keeps printing the following scan logs:
2018-08-15T16:26:41.986Z DEBUG [input] input/input.go:124 Run input
2018-08-15T16:26:41.986Z DEBUG [input] log/input.go:147 Start next scan
2018-08-15T16:26:41.986Z DEBUG [input] log/input.go:168 input states cleaned up. Before: 0, After: 0, Pending: 0
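The "input states cleaned up. Before: 0, After: 0" line, together with the harvester counters stuck at zero in the metrics above, suggests that no files are being harvested at all, not merely that delivery is failing. Filebeat's -d flag takes debug selectors to narrow the output to the components of interest; a sketch (the selector names here are illustrative):

# enable debug output only for the input and harvester components
./filebeat -e -c ./filebeat.yml -d "input,harvester"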
Best answer
I was finally able to resolve my problem. First, the container connection issue was fixed as described in the UPDATE (Aug 15, 2018) part of my question. The reason Filebeat was not sending logs to Logstash was that I had not explicitly specified my input/output configurations to be enabled (which is a frustrating fact to me, since it is not clearly mentioned in the docs). So, changing my filebeat.yml file to the following did the trick:
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - ${PWD}/filebeat-volume/data/*.txt

output.logstash:
  enabled: true
  hosts: ["elk:5044"]
  index: "your custom index"

setup.kibana:
  host: "elk:5601"
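To confirm that events actually flow end to end, one can check that the index shows up in Elasticsearch; a sketch, assuming port 9200 is still published on the host as in the elk compose file above, and that "your custom index" is whatever was set under output.logstash.index:

# list all indices; the custom index should appear once Filebeat ships events
curl -s 'http://localhost:9200/_cat/indices?v'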
Regarding docker - Filebeat does not send logs to logstash, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/51849542/