
elasticsearch - Filtering Filebeat input with or without Logstash


In our current setup we use Filebeat to ship logs to an Elasticsearch instance. The application logs are in JSON format, and the application runs in AWS.

For some reason, AWS decided to prefix each log line with a syslog-style header (timestamp, hostname, program name) in a new platform version, so the log parsing no longer works:

Apr 17 06:33:32 ip-172-31-35-113 web: {"@timestamp":"2020-04-17T06:33:32.691Z","@version":"1","message":"Tomcat started on port(s): 5000 (http) with context path ''","logger_name":"org.springframework.boot.web.embedded.tomcat.TomcatWebServer","thread_name":"main","level":"INFO","level_value":20000}

Before the change, a log line looked like this:
{"@timestamp":"2020-04-17T06:33:32.691Z","@version":"1","message":"Tomcat started on port(s): 5000 (http) with context path ''","logger_name":"org.springframework.boot.web.embedded.tomcat.TomcatWebServer","thread_name":"main","level":"INFO","level_value":20000}

The question is: can we avoid using Logstash to convert the log lines back to the old format? If not, how do we remove the prefix? Which filter is the best option?

My current Filebeat configuration looks like this:
filebeat.inputs:
- type: log
  paths:
    - /var/log/web-1.log
  json.keys_under_root: true
  json.ignore_decoding_error: true
  json.overwrite_keys: true
  fields_under_root: true
  fields:
    environment: ${ENV_NAME:not_set}
    app: myapp

cloud.id: "${ELASTIC_CLOUD_ID:not_set}"
cloud.auth: "${ELASTIC_CLOUD_AUTH:not_set}"

Best Answer

I would try to leverage the dissect and decode_json_fields processors:

processors:
  # first ignore the preamble and only keep the JSON data
  - dissect:
      tokenizer: "%{?ignore} %{+ignore} %{+ignore} %{+ignore} %{+ignore}: %{json}"
      field: "message"
      target_prefix: ""

  # then parse the JSON data
  - decode_json_fields:
      fields: ["json"]
      process_array: false
      max_depth: 1
      target: ""
      overwrite_keys: false
      add_error_key: true
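
Since the dissect/decode_json_fields pair now does the JSON parsing, the input-level json.* options from the question (which assume the whole line is JSON, and currently fail silently because of json.ignore_decoding_error, leaving the raw prefixed line in the message field) become redundant and can be dropped. A minimal sketch of the combined filebeat.yml, reusing the paths and fields from the question:

filebeat.inputs:
- type: log
  paths:
    - /var/log/web-1.log
  fields_under_root: true
  fields:
    environment: ${ENV_NAME:not_set}
    app: myapp

processors:
  # strip the syslog-style preamble; the JSON payload lands in the "json" field
  - dissect:
      tokenizer: "%{?ignore} %{+ignore} %{+ignore} %{+ignore} %{+ignore}: %{json}"
      field: "message"
      target_prefix: ""

  # decode the extracted payload into top-level fields
  - decode_json_fields:
      fields: ["json"]
      process_array: false
      max_depth: 1
      target: ""
      overwrite_keys: false
      add_error_key: true

cloud.id: "${ELASTIC_CLOUD_ID:not_set}"
cloud.auth: "${ELASTIC_CLOUD_AUTH:not_set}"

If the intermediate json string should not end up in the index, a drop_fields processor can be appended to remove it once decoding has succeeded.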

Regarding "elasticsearch - Filtering Filebeat input with or without Logstash", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/61266408/
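
For completeness, if the logs were routed through Logstash after all, the same cleanup could be done there with its dissect and json filters. A minimal sketch, assuming events arrive with the prefixed line in the message field; the skip-key names (month, day, time, host, program) are illustrative only:

filter {
  # strip the syslog-style prefix; %{?...} are named skip keys that are discarded
  dissect {
    mapping => {
      "message" => "%{?month} %{?day} %{?time} %{?host} %{?program}: %{json_payload}"
    }
  }
  # parse the extracted payload into top-level fields and drop the intermediate field
  json {
    source => "json_payload"
    remove_field => ["json_payload"]
  }
}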
