I have been trying to teach myself how to deploy ELK on Docker on my local machine, and the following problem has been stumping me for a week; I have not been able to find a solution online.
I run `docker deploy -c docker-compose.yml elk_stack` against the configuration below.
The problem I am facing is that after the logstash container is created, its logs show that the pipeline configuration is picked up correctly and data flows into the Elasticsearch container. Then, once all the data has been moved, the logstash container destroys itself, and a new container is created that goes through the same steps as the previous one.
Why does this happen?
Below is my docker-compose.yml:
version: "3"
networks:
  elk_net:
services:
  db:
    image: mariadb:latest
    command: --default-authentication-plugin=mysql_native_password
    environment:
      MYSQL_ROOT_PASSWORD: root
    ports:
      - 3306:3306
    volumes:
      - mysqldata:/var/lib/mysql
    deploy:
      placement:
        constraints: [node.role == manager]
    networks:
      - elk_net
    depends_on:
      - elk_net
      - mysqldata
  adminer:
    image: adminer
    ports:
      - "8080:8080"
    deploy:
      placement:
        constraints: [node.role == manager]
    networks:
      - elk_net
    depends_on:
      - elk_net
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.3.1
    environment:
      discovery.type: single-node
    ports:
      - 9200:9200
      - 9300:9300
    volumes:
      - esdata01:/usr/share/elasticsearch/data
    networks:
      - elk_net
    depends_on:
      - elk_net
  logstash:
    image: logstash:custom
    stdin_open: true
    tty: true
    volumes:
      - ./dependency:/usr/local/dependency/
      - ./logstash/pipeline/mysql:/usr/share/logstash/pipeline/
    networks:
      - elk_net
    depends_on:
      - elk_net
  kibana:
    image: docker.elastic.co/kibana/kibana:7.3.1
    ports:
      - 5601:5601
    networks:
      - elk_net
    depends_on:
      - elk_net
volumes:
  esdata01:
    driver: local
  mysqldata:
    driver: local
input {
  jdbc {
    jdbc_connection_string => "jdbc:mysql://db:3306/sonar_data"
    jdbc_user => "root"
    jdbc_password => "root"
    jdbc_driver_library => ""
    jdbc_driver_class => "com.mysql.cj.jdbc.Driver"
    jdbc_paging_enabled => true
    tracking_column => "accounting_entry_id"
    tracking_column_type => "numeric"
    use_column_value => true
    statement => "SELECT * FROM call_detail_record WHERE accounting_entry_id > :sql_last_value ORDER BY accounting_entry_id ASC"
  }
}
output {
  stdout { codec => json_lines }
  elasticsearch {
    hosts => ["elasticsearch:9200"]
    index => "cdr_data"
  }
}
ravi@ravi-VirtualBox:~/Documents/git_personal/cdr-data-visualizer-elk$ sudo docker logs 2c89502d48b3 -f
OpenJDK 64-Bit Server VM warning: Option UseConcMarkSweepGC was deprecated in version 9.0 and will likely be removed in a future release.
WARNING: An illegal reflective access operation has occurred
WARNING: Illegal reflective access by com.headius.backport9.modules.Modules (file:/usr/share/logstash/logstash-core/lib/jars/jruby-complete-9.2.7.0.jar) to field java.io.FileDescriptor.fd
WARNING: Please consider reporting this to the maintainers of com.headius.backport9.modules.Modules
WARNING: Use --illegal-access=warn to enable warnings of further illegal reflective access operations
WARNING: All illegal access operations will be denied in a future release
Thread.exclusive is deprecated, use Thread::Mutex
Sending Logstash logs to /usr/share/logstash/logs which is now configured via log4j2.properties
[2019-09-17T08:06:56,317][INFO ][logstash.setting.writabledirectory] Creating directory {:setting=>"path.queue", :path=>"/usr/share/logstash/data/queue"}
[2019-09-17T08:06:56,339][INFO ][logstash.setting.writabledirectory] Creating directory {:setting=>"path.dead_letter_queue", :path=>"/usr/share/logstash/data/dead_letter_queue"}
[2019-09-17T08:06:56,968][INFO ][logstash.runner ] Starting Logstash {"logstash.version"=>"7.3.1"}
[2019-09-17T08:06:57,002][INFO ][logstash.agent ] No persistent UUID file found. Generating new UUID {:uuid=>"7a2b2d2a-157e-42c3-bcde-a14dc773750f", :path=>"/usr/share/logstash/data/uuid"}
[2019-09-17T08:06:57,795][WARN ][logstash.monitoringextension.pipelineregisterhook] xpack.monitoring.enabled has not been defined, but found elasticsearch configuration. Please explicitly set `xpack.monitoring.enabled: true` in logstash.yml
[2019-09-17T08:06:59,033][INFO ][logstash.licensechecker.licensereader] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://elasticsearch:9200/]}}
[2019-09-17T08:06:59,316][WARN ][logstash.licensechecker.licensereader] Restored connection to ES instance {:url=>"http://elasticsearch:9200/"}
[2019-09-17T08:06:59,391][INFO ][logstash.licensechecker.licensereader] ES Output version determined {:es_version=>7}
[2019-09-17T08:06:59,393][WARN ][logstash.licensechecker.licensereader] Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>7}
[2019-09-17T08:06:59,720][INFO ][logstash.monitoring.internalpipelinesource] Monitoring License OK
[2019-09-17T08:06:59,725][INFO ][logstash.monitoring.internalpipelinesource] Validated license for monitoring. Enabling monitoring pipeline.
[2019-09-17T08:07:01,244][INFO ][org.reflections.Reflections] Reflections took 59 ms to scan 1 urls, producing 19 keys and 39 values
[2019-09-17T08:07:01,818][INFO ][logstash.outputs.elasticsearch] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://elasticsearch:9200/]}}
[2019-09-17T08:07:01,842][WARN ][logstash.outputs.elasticsearch] Restored connection to ES instance {:url=>"http://elasticsearch:9200/"}
[2019-09-17T08:07:01,860][INFO ][logstash.outputs.elasticsearch] ES Output version determined {:es_version=>7}
[2019-09-17T08:07:01,868][WARN ][logstash.outputs.elasticsearch] Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>7}
[2019-09-17T08:07:01,930][INFO ][logstash.outputs.elasticsearch] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["//elasticsearch:9200"]}
[2019-09-17T08:07:02,138][INFO ][logstash.outputs.elasticsearch] Using default mapping template
[2019-09-17T08:07:02,328][WARN ][org.logstash.instrument.metrics.gauge.LazyDelegatingGauge] A gauge metric of an unknown type (org.jruby.specialized.RubyArrayOneObject) has been create for key: cluster_uuids. This may result in invalid serialization. It is recommended to log an issue to the responsible developer/development team.
[2019-09-17T08:07:02,332][INFO ][logstash.javapipeline ] Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>1, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50, "pipeline.max_inflight"=>125, :thread=>"#<Thread:0x2228b784 run>"}
[2019-09-17T08:07:02,439][INFO ][logstash.outputs.elasticsearch] Attempting to install template {:manage_template=>{"index_patterns"=>"logstash-*", "version"=>60001, "settings"=>{"index.refresh_interval"=>"5s", "number_of_shards"=>1}, "mappings"=>{"dynamic_templates"=>[{"message_field"=>{"path_match"=>"message", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false}}}, {"string_fields"=>{"match"=>"*", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false, "fields"=>{"keyword"=>{"type"=>"keyword", "ignore_above"=>256}}}}}], "properties"=>{"@timestamp"=>{"type"=>"date"}, "@version"=>{"type"=>"keyword"}, "geoip"=>{"dynamic"=>true, "properties"=>{"ip"=>{"type"=>"ip"}, "location"=>{"type"=>"geo_point"}, "latitude"=>{"type"=>"half_float"}, "longitude"=>{"type"=>"half_float"}}}}}}}
[2019-09-17T08:07:02,947][INFO ][logstash.javapipeline ] Pipeline started {"pipeline.id"=>"main"}
[2019-09-17T08:07:03,178][INFO ][logstash.agent ] Pipelines running {:count=>1, :running_pipelines=>[:main], :non_running_pipelines=>[]}
[2019-09-17T08:07:04,327][WARN ][logstash.outputs.elasticsearch] You are using a deprecated config setting "document_type" set in elasticsearch. Deprecated settings will continue to work, but are scheduled for removal from logstash in the future. Document types are being deprecated in Elasticsearch 6.0, and removed entirely in 7.0. You should avoid this feature If you have any questions about this, please visit the #logstash channel on freenode irc. {:name=>"document_type", :plugin=><LogStash::Outputs::ElasticSearch bulk_path=>"/_monitoring/bulk?system_id=logstash&system_api_version=7&interval=1s", hosts=>[http://elasticsearch:9200], sniffing=>false, manage_template=>false, id=>"7d7dfa0f023f65240aeb31ebb353da5a42dc782979a2bd7e26e28b7cbd509bb3", document_type=>"%{[@metadata][document_type]}", enable_metric=>true, codec=><LogStash::Codecs::Plain id=>"plain_151a6660-4b00-4b2c-8a78-3d93f5161cbe", enable_metric=>true, charset=>"UTF-8">, workers=>1, template_name=>"logstash", template_overwrite=>false, doc_as_upsert=>false, script_type=>"inline", script_lang=>"painless", script_var_name=>"event", scripted_upsert=>false, retry_initial_interval=>2, retry_max_interval=>64, retry_on_conflict=>1, ilm_enabled=>"auto", ilm_rollover_alias=>"logstash", ilm_pattern=>"{now/d}-000001", ilm_policy=>"logstash-policy", action=>"index", ssl_certificate_verification=>true, sniffing_delay=>5, timeout=>60, pool_max=>1000, pool_max_per_route=>100, resurrect_delay=>5, validate_after_inactivity=>10000, http_compression=>false>}
[2019-09-17T08:07:04,499][INFO ][logstash.outputs.elasticsearch] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://elasticsearch:9200/]}}
[2019-09-17T08:07:04,529][WARN ][logstash.outputs.elasticsearch] Restored connection to ES instance {:url=>"http://elasticsearch:9200/"}
[2019-09-17T08:07:04,550][INFO ][logstash.outputs.elasticsearch] ES Output version determined {:es_version=>7}
[2019-09-17T08:07:04,560][WARN ][logstash.outputs.elasticsearch] Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>7}
[2019-09-17T08:07:04,596][INFO ][logstash.outputs.elasticsearch] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["http://elasticsearch:9200"]}
[2019-09-17T08:07:04,637][INFO ][logstash.javapipeline ] Starting pipeline {:pipeline_id=>".monitoring-logstash", "pipeline.workers"=>1, "pipeline.batch.size"=>2, "pipeline.batch.delay"=>50, "pipeline.max_inflight"=>2, :thread=>"#<Thread:0x736c74cd run>"}
Loading class `com.mysql.jdbc.Driver'. This is deprecated. The new driver class is `com.mysql.cj.jdbc.Driver'. The driver is automatically registered via the SPI and manual loading of the driver class is generally unnecessary.
[2019-09-17T08:07:04,892][INFO ][logstash.javapipeline ] Pipeline started {"pipeline.id"=>".monitoring-logstash"}
[2019-09-17T08:07:04,920][INFO ][logstash.agent ] Pipelines running {:count=>2, :running_pipelines=>[:".monitoring-logstash", :main], :non_running_pipelines=>[]}
[2019-09-17T08:07:05,660][INFO ][logstash.agent ] Successfully started Logstash API endpoint {:port=>9600}
[2019-09-17T08:07:06,850][INFO ][logstash.inputs.jdbc ] (0.029802s) SELECT version()
[2019-09-17T08:07:07,038][INFO ][logstash.inputs.jdbc ] (0.007399s) SELECT version()
[2019-09-17T08:07:07,393][INFO ][logstash.inputs.jdbc ] (0.003612s) SELECT count(*) AS `count` FROM (SELECT * FROM call_detail_record WHERE accounting_entry_id > 0 ORDER BY accounting_entry_id ASC) AS `t1` LIMIT 1
[2019-09-17T08:07:07,545][INFO ][logstash.inputs.jdbc ] (0.041288s) SELECT * FROM (SELECT * FROM call_detail_record WHERE accounting_entry_id > 0 ORDER BY accounting_entry_id ASC) AS `t1` LIMIT 100000 OFFSET 0
************ A LOT OF RECORDS ARE PUSHED TO ELASTICSEARCH FROM MYSQL SUCCESSFULLY ******************
....
[2019-09-17T08:07:13,148][INFO ][logstash.javapipeline ] Pipeline terminated {"pipeline.id"=>".monitoring-logstash"}
[2019-09-17T08:07:13,633][INFO ][logstash.runner ] Logstash shut down.
ravi@ravi-VirtualBox:~/Documents/git_personal/cdr-data-visualizer-elk$
Best Answer
This was an annoying problem, but after much trial and error I found the answer.
My problem was that I had not configured a schedule (CRON expression) in the logstash pipeline configuration.
Adding the following line to the configuration fixed it:
schedule => "*/10 * * * *"
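For context, without a `schedule` the jdbc input runs its statement once and then Logstash shuts down cleanly; because the service was deployed with `docker stack deploy`, Swarm's default restart policy then spins up a replacement container, which repeats the cycle. A sketch of where the option goes, using the connection settings from the pipeline above (the 10-minute interval is just an example):

```
input {
  jdbc {
    jdbc_connection_string => "jdbc:mysql://db:3306/sonar_data"
    jdbc_user => "root"
    jdbc_password => "root"
    jdbc_driver_class => "com.mysql.cj.jdbc.Driver"
    tracking_column => "accounting_entry_id"
    tracking_column_type => "numeric"
    use_column_value => true
    # Run the query every 10 minutes instead of once; this keeps the
    # Logstash process alive, so the container no longer exits and restarts.
    schedule => "*/10 * * * *"
    statement => "SELECT * FROM call_detail_record WHERE accounting_entry_id > :sql_last_value ORDER BY accounting_entry_id ASC"
  }
}
```

With the schedule in place, `:sql_last_value` continues to track `accounting_entry_id` across runs, so each scheduled execution only picks up new rows.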
Regarding "docker - Logstash with a JDBC pipeline deployed on a Docker stack repeatedly creates new containers", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/57970764/