
Nginx grok pattern for Logstash

Reposted. Author: 行者123. Updated: 2023-12-05 04:10:36

Below is my Nginx log format:

log_format timed_combined '$http_x_forwarded_for - $remote_user [$time_local] ' '"$request" $status $body_bytes_sent ' '"$http_referer" "$http_user_agent" ' '$request_time $upstream_response_time $pipe';

Below is a sample Nginx log entry (for reference):

- - test.user [26/May/2017:21:54:26 +0000] "POST /elasticsearch/_msearch HTTP/1.1" 200 263 "https://myserver.com/app/kibana" "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/58.0.3029.110 Safari/537.36" 0.020 0.008 .
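For reference, the sample entry can be split according to the log_format fields with a hand-written regular expression (an approximation for illustration, not an official grok pattern; the group names mirror the Nginx variables):

```python
import re

sample = ('- - test.user [26/May/2017:21:54:26 +0000] '
          '"POST /elasticsearch/_msearch HTTP/1.1" 200 263 '
          '"https://myserver.com/app/kibana" '
          '"Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 '
          '(KHTML, like Gecko) Chrome/58.0.3029.110 Safari/537.36" '
          '0.020 0.008 .')

# One capture group per log_format variable. Note that the line starts
# with "-" because $http_x_forwarded_for was empty for this request.
pattern = re.compile(
    r'(?P<x_forwarded_for>\S+) - (?P<remote_user>\S+) '
    r'\[(?P<time_local>[^\]]+)\] '
    r'"(?P<request>[^"]*)" (?P<status>\d+) (?P<body_bytes_sent>\d+) '
    r'"(?P<referer>[^"]*)" "(?P<user_agent>[^"]*)" '
    r'(?P<request_time>\S+) (?P<upstream_time>\S+) (?P<pipe>\S+)'
)

m = pattern.match(sample)
print(m.group("x_forwarded_for"))  # -
print(m.group("remote_user"))      # test.user
print(m.group("status"))           # 200
```

The key observation is that the first field of this custom format is `$http_x_forwarded_for`, not a client IP, and here it is the literal `-`.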

Below is the Logstash grok pattern:

NGUSERNAME [a-zA-Z\.\@\-\+_%]+
NGUSER %{NGUSERNAME}
NGINXACCESS %{IPORHOST:clientip} - - \[%{HTTPDATE:timestamp}\] \"(?:%{WORD:verb} %{NOTSPACE:request}(?: HTTP/%{NUMBER:httpversion})?|%{DATA:rawrequest})\" %{NUMBER:response} (?:%{NUMBER:bytes}|-) %{QS:referrer} %{QS:agent} %{NUMBER:request_time} %{NUMBER:upstream_time}
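A minimal Python sketch (with the `%{IPORHOST}` macro hand-expanded in simplified form, an approximation for illustration) shows why this pattern never matches the sample line: `IPORHOST` requires an IP or hostname, which must start with an alphanumeric character, but the line starts with the literal `-` of an empty `$http_x_forwarded_for`:

```python
import re

# Simplified hand-expansion of %{IPORHOST}: an IP or hostname must
# begin with an alphanumeric character -- a bare "-" is not accepted.
IPORHOST = r"[0-9A-Za-z][0-9A-Za-z.\-]*"

sample = ('- - test.user [26/May/2017:21:54:26 +0000] '
          '"POST /elasticsearch/_msearch HTTP/1.1" 200 263')

# NGINXACCESS expects %{IPORHOST:clientip} followed by " - -", but the
# sample line *starts* with "- -", so matching fails at the first token.
print(re.match(IPORHOST + r" - - ", sample))  # None
```

So the grok pattern needs to account for the leading `$http_x_forwarded_for` field (often `-`) rather than assuming the line begins with a client IP.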

The following error appears in the Logstash log:

"status"=>400, "error"=>{"type"=>"mapper_parsing_exception", "reason"=>"failed to parse [timestamp]", "caused_by"=>{"type"=>"illegal_argument_exception", "reason"=>"Invalid format: \"26/May/2017:19:28:14 -0400\" is malformed at \"/May/2017:19:28:14 -0400\"

Issue: the Nginx logs are not being grokked.
Requirement: the timestamp should be parsed into a dedicated field.

What is wrong with my configuration, and how do I fix this error?

Best Answer

Here are patterns for the nginx access.log and error.log files:

filter {

  ############################# NGINX ##############################
  if [event][module] == "nginx" {

    ########## access.log ##########
    if [fileset][name] == "access" {
      grok {
        match => { "message" => ["%{IPORHOST:ip} - %{DATA:user_name} \[%{HTTPDATE:time}\] \"%{WORD:http_method} %{DATA:url} HTTP/%{NUMBER:http_version}\" %{NUMBER:response_code} %{NUMBER:body_sent_bytes} \"%{DATA:referrer}\" \"%{DATA:agent}\""] }
        remove_field => "message"
      }
      date {
        match => ["time", "dd/MMM/YYYY:HH:mm:ss Z"]
        target => "@timestamp"
        remove_field => "time"
      }
      useragent {
        source => "agent"
        target => "user_agent"
        remove_field => "agent"
      }
      geoip {
        source => "ip"
        target => "geoip"
      }
    }

    ########## error.log ##########
    else if [fileset][name] == "error" {
      grok {
        match => { "message" => ["%{DATA:time} \[%{DATA:log_level}\] %{NUMBER:pid}#%{NUMBER:tid}: (\*%{NUMBER:connection_id} )?%{GREEDYDATA:messageTmp}"] }
        remove_field => "message"
      }
      date {
        match => ["time", "YYYY/MM/dd HH:mm:ss"]
        target => "@timestamp"
        remove_field => "time"
      }
      mutate {
        rename => {"messageTmp" => "message"}
      }
    }

    # mutate (not grok) is the appropriate filter for simply dropping a field
    mutate {
      remove_field => "[event]"
    }

    mutate {
      add_field => {"serviceName" => "nginx"}
    }
  }
}
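The date filter above is what resolves the `mapper_parsing_exception`: instead of shipping the raw HTTPDATE string (which Elasticsearch's default date parser cannot read) into a date-mapped field, it parses the string and writes the result to `@timestamp`. A rough Python analogue of the `dd/MMM/YYYY:HH:mm:ss Z` conversion (an illustration, not Logstash internals):

```python
from datetime import datetime

# Analogue of: date { match => ["time", "dd/MMM/YYYY:HH:mm:ss Z"] }
raw = "26/May/2017:19:28:14 -0400"
parsed = datetime.strptime(raw, "%d/%b/%Y:%H:%M:%S %z")
print(parsed.isoformat())  # 2017-05-26T19:28:14-04:00
```

Once parsed, the event carries a proper timezone-aware timestamp, which Elasticsearch indexes without complaint.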

The same approach works for Tomcat: https://gist.github.com/petrov9/4740c61459a5dcedcef2f27c7c2900fd

Regarding the Nginx grok pattern for Logstash, a similar question can be found on Stack Overflow: https://stackoverflow.com/questions/44211469/
