
docker - Deploying Logstash with a JDBC pipeline on a Docker stack repeatedly creates new containers


I have been trying to teach myself how to deploy ELK on Docker on my local machine, and the following problem has had me stuck for a week; I have not been able to find a solution online.

I am running "docker deploy -c docker-compose.yml elk_stack" with the configuration below.
The problem I am facing: after the logstash container is created, its logs show that the pipeline configuration is picked up correctly and data flows through to the Elasticsearch container. Then, once all the data has been moved, the logstash container shuts itself down and a new container is created, which goes through the same steps as the previous one.

Why is this happening?

Here is my docker-compose.yml:

version: "3"
networks:
  elk_net:

services:
  db:
    image: mariadb:latest
    command: --default-authentication-plugin=mysql_native_password
    environment:
      MYSQL_ROOT_PASSWORD: root
    ports:
      - 3306:3306
    volumes:
      - mysqldata:/var/lib/mysql
    deploy:
      placement:
        constraints: [node.role == manager]
    networks:
      - elk_net
    depends_on:
      - elk_net
      - mysqldata
  adminer:
    image: adminer
    ports:
      - "8080:8080"
    deploy:
      placement:
        constraints: [node.role == manager]
    networks:
      - elk_net
    depends_on:
      - elk_net
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.3.1
    environment:
      discovery.type: single-node
    ports:
      - 9200:9200
      - 9300:9300
    volumes:
      - esdata01:/usr/share/elasticsearch/data
    networks:
      - elk_net
    depends_on:
      - elk_net
  logstash:
    image: logstash:custom
    stdin_open: true
    tty: true
    volumes:
      - ./dependency:/usr/local/dependency/
      - ./logstash/pipeline/mysql:/usr/share/logstash/pipeline/
    networks:
      - elk_net
    depends_on:
      - elk_net
  kibana:
    image: docker.elastic.co/kibana/kibana:7.3.1
    ports:
      - 5601:5601
    networks:
      - elk_net
    depends_on:
      - elk_net

volumes:
  esdata01:
    driver: local
  mysqldata:
    driver: local

And here is my logstash conf:
input {
  jdbc {
    jdbc_connection_string => "jdbc:mysql://db:3306/sonar_data"
    jdbc_user => "root"
    jdbc_password => "root"
    jdbc_driver_library => ""
    jdbc_driver_class => "com.mysql.cj.jdbc.Driver"
    jdbc_paging_enabled => true
    tracking_column => "accounting_entry_id"
    tracking_column_type => "numeric"
    use_column_value => true
    statement => "SELECT * FROM call_detail_record WHERE accounting_entry_id > :sql_last_value ORDER BY accounting_entry_id ASC"
  }
}

output {
  stdout { codec => json_lines }
  elasticsearch {
    hosts => ["elasticsearch:9200"]
    index => "cdr_data"
  }
}

Sample docker logs:
ravi@ravi-VirtualBox:~/Documents/git_personal/cdr-data-visualizer-elk$ sudo docker logs 2c89502d48b3 -f
OpenJDK 64-Bit Server VM warning: Option UseConcMarkSweepGC was deprecated in version 9.0 and will likely be removed in a future release.
WARNING: An illegal reflective access operation has occurred
WARNING: Illegal reflective access by com.headius.backport9.modules.Modules (file:/usr/share/logstash/logstash-core/lib/jars/jruby-complete-9.2.7.0.jar) to field java.io.FileDescriptor.fd
WARNING: Please consider reporting this to the maintainers of com.headius.backport9.modules.Modules
WARNING: Use --illegal-access=warn to enable warnings of further illegal reflective access operations
WARNING: All illegal access operations will be denied in a future release
Thread.exclusive is deprecated, use Thread::Mutex
Sending Logstash logs to /usr/share/logstash/logs which is now configured via log4j2.properties
[2019-09-17T08:06:56,317][INFO ][logstash.setting.writabledirectory] Creating directory {:setting=>"path.queue", :path=>"/usr/share/logstash/data/queue"}
[2019-09-17T08:06:56,339][INFO ][logstash.setting.writabledirectory] Creating directory {:setting=>"path.dead_letter_queue", :path=>"/usr/share/logstash/data/dead_letter_queue"}
[2019-09-17T08:06:56,968][INFO ][logstash.runner ] Starting Logstash {"logstash.version"=>"7.3.1"}
[2019-09-17T08:06:57,002][INFO ][logstash.agent ] No persistent UUID file found. Generating new UUID {:uuid=>"7a2b2d2a-157e-42c3-bcde-a14dc773750f", :path=>"/usr/share/logstash/data/uuid"}
[2019-09-17T08:06:57,795][WARN ][logstash.monitoringextension.pipelineregisterhook] xpack.monitoring.enabled has not been defined, but found elasticsearch configuration. Please explicitly set `xpack.monitoring.enabled: true` in logstash.yml
[2019-09-17T08:06:59,033][INFO ][logstash.licensechecker.licensereader] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://elasticsearch:9200/]}}
[2019-09-17T08:06:59,316][WARN ][logstash.licensechecker.licensereader] Restored connection to ES instance {:url=>"http://elasticsearch:9200/"}
[2019-09-17T08:06:59,391][INFO ][logstash.licensechecker.licensereader] ES Output version determined {:es_version=>7}
[2019-09-17T08:06:59,393][WARN ][logstash.licensechecker.licensereader] Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>7}
[2019-09-17T08:06:59,720][INFO ][logstash.monitoring.internalpipelinesource] Monitoring License OK
[2019-09-17T08:06:59,725][INFO ][logstash.monitoring.internalpipelinesource] Validated license for monitoring. Enabling monitoring pipeline.
[2019-09-17T08:07:01,244][INFO ][org.reflections.Reflections] Reflections took 59 ms to scan 1 urls, producing 19 keys and 39 values
[2019-09-17T08:07:01,818][INFO ][logstash.outputs.elasticsearch] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://elasticsearch:9200/]}}
[2019-09-17T08:07:01,842][WARN ][logstash.outputs.elasticsearch] Restored connection to ES instance {:url=>"http://elasticsearch:9200/"}
[2019-09-17T08:07:01,860][INFO ][logstash.outputs.elasticsearch] ES Output version determined {:es_version=>7}
[2019-09-17T08:07:01,868][WARN ][logstash.outputs.elasticsearch] Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>7}
[2019-09-17T08:07:01,930][INFO ][logstash.outputs.elasticsearch] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["//elasticsearch:9200"]}
[2019-09-17T08:07:02,138][INFO ][logstash.outputs.elasticsearch] Using default mapping template
[2019-09-17T08:07:02,328][WARN ][org.logstash.instrument.metrics.gauge.LazyDelegatingGauge] A gauge metric of an unknown type (org.jruby.specialized.RubyArrayOneObject) has been create for key: cluster_uuids. This may result in invalid serialization. It is recommended to log an issue to the responsible developer/development team.
[2019-09-17T08:07:02,332][INFO ][logstash.javapipeline ] Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>1, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50, "pipeline.max_inflight"=>125, :thread=>"#<Thread:0x2228b784 run>"}
[2019-09-17T08:07:02,439][INFO ][logstash.outputs.elasticsearch] Attempting to install template {:manage_template=>{"index_patterns"=>"logstash-*", "version"=>60001, "settings"=>{"index.refresh_interval"=>"5s", "number_of_shards"=>1}, "mappings"=>{"dynamic_templates"=>[{"message_field"=>{"path_match"=>"message", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false}}}, {"string_fields"=>{"match"=>"*", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false, "fields"=>{"keyword"=>{"type"=>"keyword", "ignore_above"=>256}}}}}], "properties"=>{"@timestamp"=>{"type"=>"date"}, "@version"=>{"type"=>"keyword"}, "geoip"=>{"dynamic"=>true, "properties"=>{"ip"=>{"type"=>"ip"}, "location"=>{"type"=>"geo_point"}, "latitude"=>{"type"=>"half_float"}, "longitude"=>{"type"=>"half_float"}}}}}}}
[2019-09-17T08:07:02,947][INFO ][logstash.javapipeline ] Pipeline started {"pipeline.id"=>"main"}
[2019-09-17T08:07:03,178][INFO ][logstash.agent ] Pipelines running {:count=>1, :running_pipelines=>[:main], :non_running_pipelines=>[]}
[2019-09-17T08:07:04,327][WARN ][logstash.outputs.elasticsearch] You are using a deprecated config setting "document_type" set in elasticsearch. Deprecated settings will continue to work, but are scheduled for removal from logstash in the future. Document types are being deprecated in Elasticsearch 6.0, and removed entirely in 7.0. You should avoid this feature If you have any questions about this, please visit the #logstash channel on freenode irc. {:name=>"document_type", :plugin=><LogStash::Outputs::ElasticSearch bulk_path=>"/_monitoring/bulk?system_id=logstash&system_api_version=7&interval=1s", hosts=>[http://elasticsearch:9200], sniffing=>false, manage_template=>false, id=>"7d7dfa0f023f65240aeb31ebb353da5a42dc782979a2bd7e26e28b7cbd509bb3", document_type=>"%{[@metadata][document_type]}", enable_metric=>true, codec=><LogStash::Codecs::Plain id=>"plain_151a6660-4b00-4b2c-8a78-3d93f5161cbe", enable_metric=>true, charset=>"UTF-8">, workers=>1, template_name=>"logstash", template_overwrite=>false, doc_as_upsert=>false, script_type=>"inline", script_lang=>"painless", script_var_name=>"event", scripted_upsert=>false, retry_initial_interval=>2, retry_max_interval=>64, retry_on_conflict=>1, ilm_enabled=>"auto", ilm_rollover_alias=>"logstash", ilm_pattern=>"{now/d}-000001", ilm_policy=>"logstash-policy", action=>"index", ssl_certificate_verification=>true, sniffing_delay=>5, timeout=>60, pool_max=>1000, pool_max_per_route=>100, resurrect_delay=>5, validate_after_inactivity=>10000, http_compression=>false>}
[2019-09-17T08:07:04,499][INFO ][logstash.outputs.elasticsearch] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://elasticsearch:9200/]}}
[2019-09-17T08:07:04,529][WARN ][logstash.outputs.elasticsearch] Restored connection to ES instance {:url=>"http://elasticsearch:9200/"}
[2019-09-17T08:07:04,550][INFO ][logstash.outputs.elasticsearch] ES Output version determined {:es_version=>7}
[2019-09-17T08:07:04,560][WARN ][logstash.outputs.elasticsearch] Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>7}
[2019-09-17T08:07:04,596][INFO ][logstash.outputs.elasticsearch] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["http://elasticsearch:9200"]}
[2019-09-17T08:07:04,637][INFO ][logstash.javapipeline ] Starting pipeline {:pipeline_id=>".monitoring-logstash", "pipeline.workers"=>1, "pipeline.batch.size"=>2, "pipeline.batch.delay"=>50, "pipeline.max_inflight"=>2, :thread=>"#<Thread:0x736c74cd run>"}
Loading class `com.mysql.jdbc.Driver'. This is deprecated. The new driver class is `com.mysql.cj.jdbc.Driver'. The driver is automatically registered via the SPI and manual loading of the driver class is generally unnecessary.
[2019-09-17T08:07:04,892][INFO ][logstash.javapipeline ] Pipeline started {"pipeline.id"=>".monitoring-logstash"}
[2019-09-17T08:07:04,920][INFO ][logstash.agent ] Pipelines running {:count=>2, :running_pipelines=>[:".monitoring-logstash", :main], :non_running_pipelines=>[]}
[2019-09-17T08:07:05,660][INFO ][logstash.agent ] Successfully started Logstash API endpoint {:port=>9600}
[2019-09-17T08:07:06,850][INFO ][logstash.inputs.jdbc ] (0.029802s) SELECT version()
[2019-09-17T08:07:07,038][INFO ][logstash.inputs.jdbc ] (0.007399s) SELECT version()
[2019-09-17T08:07:07,393][INFO ][logstash.inputs.jdbc ] (0.003612s) SELECT count(*) AS `count` FROM (SELECT * FROM call_detail_record WHERE accounting_entry_id > 0 ORDER BY accounting_entry_id ASC) AS `t1` LIMIT 1
[2019-09-17T08:07:07,545][INFO ][logstash.inputs.jdbc ] (0.041288s) SELECT * FROM (SELECT * FROM call_detail_record WHERE accounting_entry_id > 0 ORDER BY accounting_entry_id ASC) AS `t1` LIMIT 100000 OFFSET 0
************ A LOT OF RECORDS ARE PUSHED TO ELASTICSEARCH FROM MYSQL SUCCESSFULLY ******************
************ A LOT OF RECORDS ARE PUSHED TO ELASTICSEARCH FROM MYSQL SUCCESSFULLY ******************
....

[2019-09-17T08:07:13,148][INFO ][logstash.javapipeline ] Pipeline terminated {"pipeline.id"=>".monitoring-logstash"}
[2019-09-17T08:07:13,633][INFO ][logstash.runner ] Logstash shut down.
ravi@ravi-VirtualBox:~/Documents/git_personal/cdr-data-visualizer-elk$

Best Answer

This was an annoying problem, but after some trial and error I found the answer.

My problem was that I had not configured a schedule cron expression in the logstash pipeline configuration. Without one, the jdbc input runs its statement exactly once; when the pipeline finishes, Logstash shuts down (the "Logstash shut down" line in the logs above), and because the service runs under a swarm stack, Docker's restart policy then creates a fresh container that repeats the same cycle.

Adding the following line to the configuration did the trick (with this cron expression the query runs every 10 minutes):

schedule => "*/10 * * * *"
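
For context, this is the same jdbc input with the schedule in place. It is a minimal sketch; the commented-out last_run_metadata_path line is an optional addition of mine, not part of the original question, and it only helps if that path is backed by a mounted volume:

input {
  jdbc {
    jdbc_connection_string => "jdbc:mysql://db:3306/sonar_data"
    jdbc_user => "root"
    jdbc_password => "root"
    jdbc_driver_library => ""
    jdbc_driver_class => "com.mysql.cj.jdbc.Driver"
    jdbc_paging_enabled => true
    tracking_column => "accounting_entry_id"
    tracking_column_type => "numeric"
    use_column_value => true
    # Run the statement on a cron schedule (every 10 minutes) instead of once,
    # so the pipeline keeps running and the container stays up.
    schedule => "*/10 * * * *"
    # Optional (my assumption, not in the original config): persist the
    # :sql_last_value tracking state on a volume so a recreated container
    # resumes where it left off instead of re-importing everything.
    # last_run_metadata_path => "/usr/share/logstash/data/.logstash_jdbc_last_run"
    statement => "SELECT * FROM call_detail_record WHERE accounting_entry_id > :sql_last_value ORDER BY accounting_entry_id ASC"
  }
}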

This post helped me:
Logstash not reading in new entries from MySQL
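
As a side note on the "why": under docker stack deploy, a service's default restart_policy condition is "any", so swarm replaces the container whenever its main process exits, even after a clean shutdown. The schedule fix keeps Logstash alive; alternatively, if a one-shot import were actually intended, a deploy section like this sketch (my suggestion, not part of the original answer) would stop the recreate loop:

  logstash:
    image: logstash:custom
    networks:
      - elk_net
    deploy:
      restart_policy:
        # The swarm default is "any", which recreates the container even after
        # a clean exit. "on-failure" restarts only on a non-zero exit code;
        # "none" never restarts.
        condition: on-failure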

Regarding "docker - Deploying Logstash with a JDBC pipeline on a Docker stack repeatedly creates new containers", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/57970764/
