
java - How do I set up multiline Java stack traces in Logstash and the grok filter?

Reposted · Author: 太空宇宙 · Updated: 2023-11-04 10:58:28

I am trying to set up multiline handling (I am using Filebeat) so that Java stack traces can be parsed by my grok filter.

At the moment I am able to parse the following log:

08/12/2016 14:17:32,746 [ERROR] [nlp.rvp.TTEndpoint] (Thread-38 ActiveMQ-client-global-threads-1048949322) [d762103f-eee0-4dbb-965f-9f8fb500cf92] ERROR: Not found: v1/t/auth/login: Not found: v1/t/auth/login
    at nlp.exceptions.nlpException.NOT_FOUND(nlpException.java:147)
    at nlp.utils.Dispatcher.forwardVersion1(Dispatcher.java:342)
    at nlp.utils.Dispatcher.Forward(Dispatcher.java:189)
    at nlp.utils.Dispatcher$Proxy$_$$_WeldSubclass.Forward$$super(Unknown Source)
    at sun.reflect.GeneratedMethodAccessor171.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.jboss.weld.interceptor.proxy.TerminalAroundInvokeInvocationContext.proceedInternal(TerminalAroundInvokeInvocationContext.java:49)

But the result does not include the Java stack trace (the lines starting with at java...).

This is the output of the Grok Debugger (as you can see, the Java stack trace is missing):

{
  "date": "08/12/2016",
  "loglevel": "ERROR",
  "logger": "nlp.rvp.TTEndpoint",
  "time": "14:17:32,746",
  "thread": "Thread-38 ActiveMQ-client-global-threads-1048949322",
  "message": "ERROR: Not found: v1/t/auth/login: Not found: v1/t/auth/login\r",
  "uuid": "d762103f-eee0-4dbb-965f-9f8fb500cf92"
}
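To see why the trace lines are dropped: without multiline handling, each "at ..." line reaches Logstash as a separate event, and the grok pattern above anchors on a date, so only the header line ever matches. The sketch below approximates that grok pattern with a plain Python regex (an illustration only, not the exact grok definitions of DATE, TIME, etc.):

```python
import re

# Rough Python approximation of the grok pattern used in the filter.
# It requires the line to begin with a date, so a stack-trace
# continuation line can never match it.
LOG_RE = re.compile(
    r"(?P<date>\d{2}/\d{2}/\d{4}) (?P<time>\d{2}:\d{2}:\d{2},\d{3}) "
    r"\[(?P<loglevel>[A-Z]+)\s*\] \[(?P<logger>[^\]]+)\] "
    r"\((?P<thread>[^)]+)\) \[(?P<uuid>[0-9a-f-]+)\] (?P<message>.*)"
)

header = ("08/12/2016 14:17:32,746 [ERROR] [nlp.rvp.TTEndpoint] "
          "(Thread-38 ActiveMQ-client-global-threads-1048949322) "
          "[d762103f-eee0-4dbb-965f-9f8fb500cf92] "
          "ERROR: Not found: v1/t/auth/login")
trace = "    at nlp.exceptions.nlpException.NOT_FOUND(nlpException.java:147)"

print(LOG_RE.match(header) is not None)  # → True: the header line parses
print(LOG_RE.match(trace) is not None)   # → False: a trace line does not
```

So the fix has to happen before grok: the trace lines must be joined onto the header event, which is what the Filebeat multiline settings in the accepted answer do.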

This is the configuration of Filebeat (the log shipper):

filebeat:
  prospectors:
    -
      paths:
        - /var/log/test
      input_type: log
      document_type: syslog
  registry_file: /var/lib/filebeat/registry

output:
  logstash:
    hosts: ["192.168.1.122:5044"]
    bulk_max_size: 8192
    compression_level: 3
    tls:
      certificate_authorities: ["/etc/pki/tls/certs/logstash-forwarder.crt"]

shipper:

logging:
  files:
    rotateeverybytes: 10485760 # = 10MB

This is the Logstash configuration:

input {
  beats {
    port => 5044
    ssl => true
    ssl_certificate => "/etc/pki/tls/certs/logstash-forwarder.crt"
    ssl_key => "/etc/pki/tls/private/logstash-forwarder.key"
  }
}

filter {
  if [type] == "syslog" {
    grok {
      match => { "message" => "%{DATE:date} %{TIME:time} \[%{LOGLEVEL:loglevel}%{SPACE}\] \[(?<logger>[^\]]+)\] \((?<thread>[^)]+)\) \[%{UUID:uuid}\] %{GREEDYDATA:message}" }
    }
  }
}

output {
  elasticsearch {
    hosts => ["localhost:9200"]
    sniffing => true
    manage_template => false
    index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
    document_type => "%{[@metadata][type]}"
  }
}

I hope you can help me so I can finally solve this (: Thanks!

Accepted answer

Thanks everyone, I have found the solution!

My new configuration is:

filebeat.yml

filebeat:
  prospectors:
    - type: log
      paths:
        - /var/log/*.log
      multiline:
        pattern: '^[[:space:]]'
        match: after

output:
  logstash:
    hosts: ["xxx.xx.xx.xx:5044"]
    bulk_max_size: 8192
    compression_level: 3
    tls:
      certificate_authorities: ["/etc/pki/tls/certs/logstash-forwarder.crt"]

shipper:

logging:
  files:
    rotateeverybytes: 10485760 # = 10MB
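The key change is the multiline block: with pattern '^[[:space:]]' and match: after, Filebeat appends every line that starts with whitespace to the event before it, so an indented stack trace ships as one event. A minimal Python sketch of that joining behavior (an approximation for illustration; `[ \t]` stands in for the POSIX `[[:space:]]` class, and real Filebeat also has negate and timeout settings):

```python
import re

# Continuation lines are those starting with whitespace,
# mirroring the pattern '^[[:space:]]' with match: after.
CONTINUATION = re.compile(r"^[ \t]")

def join_multiline(lines):
    """Group raw log lines into events the way the multiline setting does."""
    events = []
    for line in lines:
        if CONTINUATION.match(line) and events:
            events[-1] += "\n" + line  # append continuation to previous event
        else:
            events.append(line)        # a non-indented line starts a new event
    return events

lines = [
    "08/12/2016 14:17:32,746 [ERROR] ... ERROR: Not found: v1/t/auth/login",
    "    at nlp.exceptions.nlpException.NOT_FOUND(nlpException.java:147)",
    "    at nlp.utils.Dispatcher.forwardVersion1(Dispatcher.java:342)",
]
events = join_multiline(lines)
print(len(events))  # → 1: header plus both trace lines form a single event
```

Because the joined event now carries the whole trace after the header, the existing grok pattern still matches and the %{GREEDYDATA:message} capture keeps the stack trace.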

Regarding "java - How do I set up multiline Java stack traces in Logstash and the grok filter?", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/47145232/
