
java - Logstash TCP input with JSON codec treats each line as a separate event

Reposted · Author: 行者123 · Updated: 2023-11-30 03:00:26

I'm trying to read log4j v2.3 JSON output over a Logstash TCP socket using the Logstash JSON codec, but Logstash treats each line as a separate event to index, instead of reading each JSON object as one event.

log4j configuration

<Appenders>
    <Console name="console" target="SYSTEM_OUT">
        <PatternLayout pattern="%d %p [%c] - &lt;%m&gt;%n"/>
    </Console>
    ... removed for brevity ...
    <Socket name="logstash" host="localhost" port="4560">
        <JSONLayout />
    </Socket>
</Appenders>
<Loggers>
    <Logger name="org.jasig" level="info" additivity="false">
        <AppenderRef ref="console"/>
        <AppenderRef ref="file"/>
        <AppenderRef ref="logstash"/>
    </Logger>
    ... removed for brevity ...
    <Root level="error">
        <AppenderRef ref="console"/>
        <AppenderRef ref="logstash"/>
    </Root>
</Loggers>

Logstash configuration

input {
  tcp {
    port => 4560
    codec => json
  }
}
output {
  elasticsearch {}
  stdout {}
}

Logstash output

Each line is parsed as a separate event, rather than the whole JSON object being treated as a single event.

2016-03-22T01:24:27.213Z 127.0.0.1 {
2016-03-22T01:24:27.215Z 127.0.0.1 "timeMillis" : 1458609867060,
2016-03-22T01:24:27.216Z 127.0.0.1 "thread" : "localhost-startStop-1",
2016-03-22T01:24:27.217Z 127.0.0.1 "level" : "INFO",
2016-03-22T01:24:27.218Z 127.0.0.1 "loggerName" : "com.hazelcast.instance.DefaultAddressPicker",
2016-03-22T01:24:27.219Z 127.0.0.1 "message" : "[LOCAL] [dev] [3.5] Resolving domain name 'wozniak.local' to address(es): [192.168.0.16, fe80:0:0:0:6203:8ff:fe89:6d3a%4]\n",
2016-03-22T01:24:27.220Z 127.0.0.1 "endOfBatch" : false,
2016-03-22T01:24:27.221Z 127.0.0.1 "loggerFqcn" : "org.apache.logging.slf4j.Log4jLogger"
2016-03-22T01:24:27.222Z 127.0.0.1 }
2016-03-22T01:24:32.281Z 127.0.0.1 {
2016-03-22T01:24:32.283Z 127.0.0.1 "timeMillis" : 1458609872279,
2016-03-22T01:24:32.286Z 127.0.0.1 "thread" : "localhost-startStop-1",
2016-03-22T01:24:32.287Z 127.0.0.1 "level" : "WARN",
2016-03-22T01:24:32.289Z 127.0.0.1 "loggerName" : "com.hazelcast.instance.DefaultAddressPicker",
2016-03-22T01:24:32.294Z 127.0.0.1 "message" : "[LOCAL] [dev] [3.5] Cannot resolve hostname: 'Jons-MacBook-Pro-2.local'\n",
2016-03-22T01:24:32.299Z 127.0.0.1 "endOfBatch" : false,
2016-03-22T01:24:32.302Z 127.0.0.1 "loggerFqcn" : "org.apache.logging.slf4j.Log4jLogger"
2016-03-22T01:24:32.307Z 127.0.0.1 }
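The output above shows the failure mode: Logstash's tcp input is line-oriented, so the stream is split on newlines before the codec ever runs, and the json codec then receives each fragment of the pretty-printed object as its own "event". A minimal Java sketch (sample strings are mine) of the framing difference:

```java
public class JsonFraming {
    public static void main(String[] args) {
        // A pretty-printed event, shaped like log4j's default JSONLayout output
        String pretty = "{\n  \"level\" : \"INFO\",\n  \"message\" : \"hello\"\n}";
        // The tcp input splits on '\n', so the json codec sees four fragments,
        // none of which is a complete JSON document on its own
        System.out.println("frames: " + pretty.split("\n").length);

        // A compact, newline-terminated event arrives as exactly one frame
        String compact = "{\"level\":\"INFO\",\"message\":\"hello\"}\n";
        System.out.println("frames: " + compact.split("\n").length);
    }
}
```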

Thanks in advance for your help.

Best Answer

Well, I got it working. It isn't the solution I wanted, but it works.

Instead of the json codec, I used the multiline codec on the input together with a json filter.

Logstash configuration

input {
  tcp {
    port => 4560
    codec => multiline {
      pattern => "^\{$"
      negate => true
      what => "previous"
    }
  }
}

filter {
  json { source => "message" }
}

output {
  elasticsearch {}
  stdout {}
}
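The multiline codec here works because of `negate => true` with `what => "previous"`: any line that does not match `^\{$` is glued onto the event in progress, so only a lone `{` starts a new event. A rough Java sketch of that grouping logic (class and method names are mine, and unlike the real codec, which flushes the last event on a timeout or on the next event, this sketch simply flushes at end of input):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.regex.Pattern;

public class MultilineGrouper {
    // Mirrors: pattern => "^\{$", negate => true, what => "previous"
    static List<String> group(List<String> lines) {
        Pattern start = Pattern.compile("^\\{$");
        List<String> events = new ArrayList<>();
        StringBuilder current = null;
        for (String line : lines) {
            if (start.matcher(line).matches()) {
                // a lone "{" opens a new event; emit the previous one
                if (current != null) events.add(current.toString());
                current = new StringBuilder(line);
            } else if (current != null) {
                // negate + previous: non-matching lines join the current event
                current.append("\n").append(line);
            }
        }
        if (current != null) events.add(current.toString());
        return events;
    }

    public static void main(String[] args) {
        List<String> lines = List.of(
            "{", "  \"level\" : \"INFO\",", "}",
            "{", "  \"level\" : \"WARN\",", "}");
        System.out.println(group(lines).size()); // two multi-line events
    }
}
```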

Here is the output, correctly indexed:

2016-03-22T09:42:26.880Z 127.0.0.1 0 expired tickets found to be removed.
2016-03-22T09:43:26.992Z 127.0.0.1 Finished ticket cleanup.
2016-03-22T09:43:47.120Z 127.0.0.1 Setting path for cookies to: /cas/
2016-03-22T09:43:47.122Z 127.0.0.1 AcceptUsersAuthenticationHandler successfully authenticated hashbrowns+password
2016-03-22T09:43:47.131Z 127.0.0.1 Authenticated hashbrowns with credentials [hashbrowns+password].
2016-03-22T09:43:47.186Z 127.0.0.1 Audit trail record BEGIN
=============================================================
WHO: hashbrowns+password
WHAT: supplied credentials: [hashbrowns+password]
ACTION: AUTHENTICATION_SUCCESS
APPLICATION: CAS
WHEN: Tue Mar 22 05:43:47 EDT 2016
CLIENT IP ADDRESS: 0:0:0:0:0:0:0:1
SERVER IP ADDRESS: 0:0:0:0:0:0:0:1
=============================================================

This feels a bit fragile, since it depends on how log4j formats the JSON, so I'd still love to hear how to get the json codec working with multi-line JSON output.
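For the record, the usual way to make the json codec itself work is to have log4j emit one JSON object per line instead of pretty-printing it. A sketch of the Socket appender, assuming your Log4j 2 version supports JSONLayout's compact and eventEol attributes (I have not verified this against v2.3 specifically):

```xml
<Socket name="logstash" host="localhost" port="4560">
    <!-- compact="true" removes the pretty-printed newlines; eventEol="true"
         appends one newline per event, so each event arrives as exactly
         one line for codec => json -->
    <JSONLayout compact="true" eventEol="true" />
</Socket>
```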

Regarding "java - Logstash TCP input with JSON codec treats each line as a separate event", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/36145146/
