
java - Parsing a log file with ANTLR

Reposted · Author: 行者123 · Updated: 2023-11-29 03:42:45

I need to parse a WebLogic log file with ANTLR. Here is a sample:

Tue Aug 28 09:39:09 MSD 2012 [test] [[ACTIVE] ExecuteThread: '0' for queue: 'weblogic.kernel.Default (self-tuning)'] Alert - There is no user password credential mapper provider configured in your security realm. Oracle Service Bus service account management will be disabled. Configure a user password credential mapper provider if you need OSB service account support.

Sun Sep 02 23:13:00 MSD 2012 [test] [[ACTIVE] ExecuteThread: '5' for queue: 'weblogic.kernel.Default (self-tuning)'] Warning - Timer (Checkpoint) has been triggered with a tick (205 873) that is less than or equal to the last tick that was received (205 873). This could happen in a cluster due to clock synchronization with the timer authority. The current trigger will be ignored, and operation will be skipped.
Mon Sep 03 10:35:54 MSD 2012 [test] [[ACTIVE] ExecuteThread: '19' for queue: 'weblogic.kernel.Default (self-tuning)'] Info -
[OSB Tracing] Inbound request was received.

Service Ref = Some/URL
URI = Another/URL
Message ID = u-u-i-d
Request metadata =
<xml-fragment>
<tran:headers xsi:type="http:HttpRequestHeaders" xmlns:http="http://www.bea.com/wli/sb/transports/http" xmlns:tran="http://www.bea.com/wli/sb/transports" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
<http:Accept-Encoding>gzip, deflate,gzip, deflate</http:Accept-Encoding>
<http:Connection>Keep-Alive</http:Connection>
<http:Content-Length>666</http:Content-Length>
<http:Content-Type>text/xml; charset=utf-8</http:Content-Type>
<http:Host>some.host.name</http:Host>
<http:SOAPAction>""</http:SOAPAction>
</tran:headers>
<tran:encoding xmlns:tran="http://www.bea.com/wli/sb/transports">utf-8</tran:encoding>
<http:client-host xmlns:http="http://www.bea.com/wli/sb/transports/http">1.2.3.4</http:client-host>
<http:client-address xmlns:http="http://www.bea.com/wli/sb/transports/http">1.2.3.4</http:client-address>
<http:http-method xmlns:http="http://www.bea.com/wli/sb/transports/http">POST</http:http-method>
</xml-fragment>
Payload =
<s:Envelope xmlns:s="http://schemas.xmlsoap.org/soap/envelope/"><XMLHere/></s:Envelope>

This is the part of the log I am interested in; everything else must be ignored (the date, the Service Ref value, and the Envelope XML should be parsed):

Sun Sep 02 23:13:00 MSD 2012 [test] [[ACTIVE] ExecuteThread: '5' for queue: 'weblogic.kernel.Default (self-tuning)'] Warning - Timer (Checkpoint) has been triggered with a tick (205 873) that is less than or equal to the last tick that was received (205 873). This could happen in a cluster due to clock synchronization with the timer authority. The current trigger will be ignored, and operation will be skipped.
Mon Sep 03 10:35:54 MSD 2012 [test] [[ACTIVE] ExecuteThread: '19' for queue: 'weblogic.kernel.Default (self-tuning)'] Info -
[OSB Tracing] Inbound request was received.

Service Ref = Some/URL
URI = Another/URL
Message ID = u-u-i-d
Request metadata =
<xml-fragment>
<tran:headers xsi:type="http:HttpRequestHeaders" xmlns:http="http://www.bea.com/wli/sb/transports/http" xmlns:tran="http://www.bea.com/wli/sb/transports" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
<http:Accept-Encoding>gzip, deflate,gzip, deflate</http:Accept-Encoding>
<http:Connection>Keep-Alive</http:Connection>
<http:Content-Length>666</http:Content-Length>
<http:Content-Type>text/xml; charset=utf-8</http:Content-Type>
<http:Host>some.host.name</http:Host>
<http:SOAPAction>""</http:SOAPAction>
</tran:headers>
<tran:encoding xmlns:tran="http://www.bea.com/wli/sb/transports">utf-8</tran:encoding>
<http:client-host xmlns:http="http://www.bea.com/wli/sb/transports/http">1.2.3.4</http:client-host>
<http:client-address xmlns:http="http://www.bea.com/wli/sb/transports/http">1.2.3.4</http:client-address>
<http:http-method xmlns:http="http://www.bea.com/wli/sb/transports/http">POST</http:http-method>
</xml-fragment>
Payload =
<s:Envelope xmlns:s="http://schemas.xmlsoap.org/soap/envelope/"><XMLHere/></s:Envelope>

Here is my lexer:

lexer grammar LogLexer;

options {filter=true;}

/*------------------------------------------------------------------
* LEXER RULES
*------------------------------------------------------------------*/
LOGDATE : DAY ' ' MONTH ' ' NUMDAY ' ' NUMTIME ' ' TIMEZONE ' ' NUMYEAR;

METAINFO : '[' .* ']' ' [[' .* ']' .* ']' .* '-' .* '[OSB Tracing] Inbound request was received.';

SERVICE_REF : 'Service Ref = ';

URI : (SYMBOL | '/')+;

ENVELOPE_TAG : '<' ENVELOPE_TAGNAME .* '>' .* '</' ENVELOPE_TAGNAME '>';

fragment
ENVELOPE_TAGNAME : SYMBOL+ ':Envelope';

fragment
NUMTIME : NUM NUM ':' NUM NUM ':' NUM NUM;

fragment
TIMEZONE : SYMBOL SYMBOL SYMBOL;

fragment
DAY : 'Sun' | 'Mon' | 'Tue' | 'Wed' | 'Fri' | 'Sat';

fragment
MONTH : 'Sep' | 'Oct' | 'Nov' | 'Dec' | 'Feb' | 'Mar' | 'May' | 'Apr' | 'Jun' | 'Jul' | 'Aug';

fragment
NUMYEAR : NUM NUM NUM NUM;

fragment
NUMDAY : NUM NUM;

fragment
NUM : '0'..'9';

fragment
SYMBOL : ('a'..'z' | 'A'..'Z');

And here is the parser (not finished yet):

grammar LogParser;

options {
tokenVocab = OSBLogLexer;
}

@header {
import java.util.List;
import java.util.ArrayList;
}

parse
returns [List<List<String>> entries]
@init {
$entries = new ArrayList<List<String>>();
}
: requestLogEntry+
{
$entries.add($requestLogEntry.logEntry);
};

requestLogEntry
returns [List<String> logEntry]
@init {
$logEntry = new ArrayList<String>();
}
: LOGDATE METAINFO .* serviceRef .* ENVELOPE_TAG
{
$logEntry.add($LOGDATE.getText());
$logEntry.add($serviceRef.serviceURI);
$logEntry.add($ENVELOPE_TAG.getText());
};

serviceRef
returns [String serviceURI]
: SERVICE_REF URI
{
$serviceURI = $URI.getText();
};

The problem is that it parses the log incorrectly. My code does not ignore the unneeded records, so I get an invalid DATE value in the resulting list: Tue Aug 28 09:39:09 MSD 2012 (the first one in the sample) instead of Mon Sep 03 10:35:54 MSD 2012 (the correct one). Can anyone help me?
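As an editorial aside (not part of the original question), the three target fields can also be sketched with a plain Java regex, which makes the desired output concrete. The class name, pattern, and sample string below are assumptions for illustration only, not the asker's code:

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class LogExtractSketch {

    // Hypothetical pattern: log date, Service Ref value, and the Envelope
    // element, separated by arbitrary text (DOTALL lets ".*?" cross newlines).
    static final Pattern ENTRY = Pattern.compile(
            "(\\w{3} \\w{3} \\d{2} \\d{2}:\\d{2}:\\d{2} \\w{3} \\d{4})"  // group 1: log date
            + ".*?\\[OSB Tracing\\] Inbound request was received\\."
            + ".*?Service Ref = (\\S+)"                                  // group 2: service ref
            + ".*?(<\\w+:Envelope.*?</\\w+:Envelope>)",                  // group 3: SOAP envelope
            Pattern.DOTALL);

    // Returns {date, serviceRef, envelope} for the first matching entry, or null.
    static String[] extract(String log) {
        Matcher m = ENTRY.matcher(log);
        return m.find() ? new String[] { m.group(1), m.group(2), m.group(3) } : null;
    }

    public static void main(String[] args) {
        String sample = String.join("\n",
                "Mon Sep 03 10:35:54 MSD 2012 [test] [[ACTIVE] ExecuteThread: '19'] Info -",
                "[OSB Tracing] Inbound request was received.",
                "",
                "Service Ref = Some/URL",
                "Payload =",
                "<s:Envelope xmlns:s=\"http://schemas.xmlsoap.org/soap/envelope/\"><XMLHere/></s:Envelope>");
        for (String field : extract(sample)) {
            System.out.println(field);
        }
    }
}
```

Because the date group only matches when it is eventually followed by the "[OSB Tracing]" marker, entries like the Warning record are skipped automatically, which is exactly the filtering behavior the ANTLR grammar is after.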

Thanks in advance for your answers.

UPDATE

I have updated my code, but now I am getting generation errors and cannot see what is wrong.

Updated lexer:

lexer grammar LogLexer;

options {
filter=true;
}

TRASH : LOGDATE ' ' METAINFO (' ' | '\n')* { skip(); };

LOGDATE : DAY ' ' MONTH ' ' NUMDAY ' ' NUMTIME ' ' TIMEZONE ' ' NUMYEAR;

METAINFO : ('[' | ']' | SYMBOL | NUM | ' ' | SPECIAL)+;

OSB_METAINFO : (' ' | '\n')* '[OSB Tracing] Inbound request was received.';

SERVICE_REF : 'Service Ref = ';

URI : (SYMBOL | '/')+;

ENVELOPE_TAG : '<' ENVELOPE_TAGNAME .* '>' .* '</' ENVELOPE_TAGNAME '>';

fragment
OSB_TRACING : '[OSB Tracing] Inbound request was received.';

fragment
ENVELOPE_TAGNAME : SYMBOL+ ':Envelope';

fragment
NUMTIME : NUM NUM ':' NUM NUM ':' NUM NUM;

fragment
TIMEZONE : SYMBOL SYMBOL SYMBOL;

fragment
DAY : 'Sun' | 'Mon' | 'Tue' | 'Wed' | 'Fri' | 'Sat';

fragment
MONTH : 'Sep' | 'Oct' | 'Nov' | 'Dec' | 'Feb' | 'Mar' | 'May' | 'Apr' | 'Jun' | 'Jul' | 'Aug';

fragment
NUMYEAR : NUM NUM NUM NUM;

fragment
NUMDAY : NUM NUM;

fragment
NUM : '0'..'9';

fragment
SYMBOL : ('a'..'z' | 'A'..'Z');

fragment
SPECIAL : ( ~'\n' | '\'' | '.' | '(' | ')' | '-');

Updated parser:

parser grammar LogParser;

options {
tokenVocab = LogLexer;
}

@header {
import java.util.List;
import java.util.ArrayList;
}

parse returns [List<List<String>> entries]
@init {
$entries = new ArrayList<List<String>>();
}
: requestLogEntry+
{
$entries.add($requestLogEntry.logEntry);
};

requestLogEntry
returns [List<String> logEntry]
@init {
$logEntry = new ArrayList<String>();
}
: LOGDATE ' ' METAINFO OSB_METAINFO .* serviceRef .* ENVELOPE_TAG
{
$logEntry.add($LOGDATE.getText());
$logEntry.add($serviceRef.serviceURI);
$logEntry.add($ENVELOPE_TAG.getText());
};

serviceRef
returns [String serviceURI]
: SERVICE_REF URI
{
$serviceURI = $URI.getText();
};

Lexer generation errors:

[14:18:12] error(204): LogLexer.g:56:21: duplicate token type '\'' when collapsing subrule into set
[14:18:12] error(204): LogLexer.g:56:28: duplicate token type '.' when collapsing subrule into set
[14:18:12] error(204): LogLexer.g:56:34: duplicate token type '(' when collapsing subrule into set
[14:18:12] error(204): LogLexer.g:56:40: duplicate token type ')' when collapsing subrule into set
[14:18:12] error(204): LogLexer.g:56:46: duplicate token type '-' when collapsing subrule into set
[14:18:12] error(204): LogLexer.g:56:21: duplicate token type '\'' when collapsing subrule into set
[14:18:12] error(204): LogLexer.g:56:28: duplicate token type '.' when collapsing subrule into set
[14:18:12] error(204): LogLexer.g:56:34: duplicate token type '(' when collapsing subrule into set
[14:18:12] error(204): LogLexer.g:56:40: duplicate token type ')' when collapsing subrule into set
[14:18:12] error(204): LogLexer.g:56:46: duplicate token type '-' when collapsing subrule into set

Those errors seem to happen randomly and randomly disappear (after renaming the file). ANTLR also generates another lexer from my parser file (this also happens randomly). I am using the latest available ANTLR 3 and ANTLRWorks on Windows 7 (x64).

Best Answer

Those errors seem to happen randomly and randomly disappear (file rename).

No, they do not happen randomly. The errors come from this rule:

fragment
SPECIAL : ( ~'\n' | '\'' | '.' | '(' | ')' | '-');

The set ~'\n' already matches '\'' | '.' | '(' | ')' | '-'. You probably meant:

fragment
SPECIAL : ~('\n' | '\'' | '.' | '(' | ')' | '-');
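To make the difference concrete, here is a small plain-Java sketch (an editorial illustration, not generated ANTLR code) of what each version of the rule accepts:

```java
public class SpecialSetSketch {

    // Broken rule ( ~'\n' | '\'' | '.' | '(' | ')' | '-' ):
    // ~'\n' already accepts the quote, dot, parens, and dash, so the extra
    // alternatives are duplicates -- exactly what error(204) complains about.
    // The whole subrule collapses to "any character except newline":
    static boolean brokenSpecial(char c) {
        return c != '\n';
    }

    // Intended rule ~( '\n' | '\'' | '.' | '(' | ')' | '-' ):
    // the complement of the whole set.
    static boolean fixedSpecial(char c) {
        return c != '\n' && c != '\'' && c != '.'
                && c != '(' && c != ')' && c != '-';
    }

    public static void main(String[] args) {
        System.out.println(brokenSpecial('\''));  // the quote slips through
        System.out.println(fixedSpecial('\''));   // the quote is now excluded
    }
}
```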

Also ANTLR generates another lexer from my parser file (this also happens randomly). I am using the latest available ANTLR 3 and ANTLRWorks on Windows 7 (x64).

That only happens when you do not specify the type of grammar. For example, grammar T (a so-called combined grammar) generates both a lexer and a parser, while parser grammar T and lexer grammar T generate only a parser or only a lexer, respectively. I see that the first grammar you posted was a combined one; the "extra" lexer class is probably a leftover from when you still had a combined grammar.

Also, make sure you do not use any literal tokens in your parser grammar! (Remove the ' ' from the requestLogEntry rule.)
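Applied to the updated parser above, the requestLogEntry rule would then read as follows (a sketch only; the whitespace between LOGDATE and METAINFO must instead be consumed on the lexer side, e.g. by the METAINFO rule itself):

```
requestLogEntry
returns [List<String> logEntry]
@init {
  $logEntry = new ArrayList<String>();
}
  : LOGDATE METAINFO OSB_METAINFO .* serviceRef .* ENVELOPE_TAG
    {
      $logEntry.add($LOGDATE.getText());
      $logEntry.add($serviceRef.serviceURI);
      $logEntry.add($ENVELOPE_TAG.getText());
    }
  ;
```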

Regarding "java - Parsing a log file with ANTLR", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/12410496/
