
elasticsearch - Logstash index not reflected in Kibana


I have created the following custom index under the output section of logstash.conf ... It has been more than an hour, but blend_test still does not show up in the Elasticsearch index list (elk_server_ip:9200/_cat/indices), and therefore not in Kibana either.

elasticsearch {
  hosts => "elk_server_ip:9200"
  manage_template => false
  index => "blend_test*"
}
Please advise if I am doing something wrong... FYI, I have also restarted Filebeat and Logstash.
filebeat.yml
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /home/mahesh/Documents/refactor/nomi/unity/media/*.log

output.logstash:
  enabled: true
  hosts: ["localhost:5044"]
logstash.conf
input {
  beats {
    port => 5044
    ssl => false
  }
}

filter {
  grok {
    match => { "message" => "%{TIMESTAMP_ISO8601:timestamp}] %{LOGLEVEL:loglevel}\|%{GREEDYDATA:module}\|%{GREEDYDATA:content}" }
  }
  date {
    locale => "en"
    match => [ "timestamp", "YYYY-MM-dd HH:mm:ss" ]
    target => "@timestamp"
    timezone => "America/New_York"
  }
}

output {
  elasticsearch {
    hosts => "elk_server_ip:9200"
    manage_template => false
    index => "blend_test*"
  }
  stdout { codec => rubydebug { metadata => true } }
}

Best Answer

AFAIK, you cannot use a wildcard in the "index" setting of the Elasticsearch output plugin:
https://www.elastic.co/guide/en/logstash/current/plugins-outputs-elasticsearch.html#plugins-outputs-elasticsearch-index

index    
Value type is string
Default value is "logstash-%{+yyyy.MM.dd}"

The index to write events to. This can be dynamic using the %{foo} syntax. The default value will partition your indices by day so you can more easily delete old data or only search specific date ranges. Indexes may not contain uppercase characters. For weekly indexes ISO 8601 format is recommended, eg. logstash-%{+xxxx.ww}. LS uses Joda to format the index pattern from event timestamp. Joda formats are defined here.
If you need a "custom" index name, you can build it from event fields using the %{foo} syntax.
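As a minimal sketch (assuming the rest of the pipeline from the question is unchanged), the wildcard can simply be dropped; the documented date syntax can optionally be appended to partition the index by day:

output {
  elasticsearch {
    hosts => "elk_server_ip:9200"
    manage_template => false
    # plain index name, no wildcard; the %{+yyyy.MM.dd} suffix is optional
    # and partitions data by day based on the event's @timestamp
    index => "blend_test-%{+yyyy.MM.dd}"
  }
  stdout { codec => rubydebug { metadata => true } }
}

After restarting Logstash, the index should appear in elk_server_ip:9200/_cat/indices and can then be added as an index pattern in Kibana.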

Regarding "elasticsearch - Logstash index not reflected in Kibana", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/62771819/
