hadoop - Structuring unstructured data with MapReduce

I have a log file; below is a recent snapshot of it:

<Dec 12, 2013 2:46:24 AM CST> <Error> <java.rmi.RemoteException>
<Dec 13, 2013 2:46:24 AM CST> <Error> <Io exception>
<Dec 14, 2013 2:46:24 AM CST> <Error> <garbage data
garbage data
garbade data
Io exception
>
<jan 01, 2014 2:46:24 AM CST> <Error> <garbage data
garbage data java.rmi.RemoteException
>

I am trying to run some analysis on top of this.

What I want to do:

I want a count of each exception per year.
For example, from the above sample data my output should be:

java.rmi.RemoteException 2013 1
Io exception 2013 2
java.rmi.RemoteException 2014 1

What my problem is:
1. Hadoop processes a text file line by line, so it treats Io exception as part of line 6, whereas it should be part of line 3 (the record that continues through line 7).

2. I can't use NLineInputFormat because there's no fixed number of lines per record.

What the pattern is and what I want:
The only pattern I can see is that a record starts with "<" and ends with ">". In the above example, line 3 does not end with ">", so I want the reader to treat everything that follows as part of the same record until it reaches a ">".

The sample data as I want the reader to see it:
<Dec 12, 2013 2:46:24 AM CST> <Error> <java.rmi.RemoteException>
<Dec 13, 2013 2:46:24 AM CST> <Error> <Io exception>
<Dec 14, 2013 2:46:24 AM CST> <Error> <garbage data garbage data garbade data Io exception>
<jan 01, 2014 2:46:24 AM CST> <Error> <garbage data garbage data java.rmi.RemoteException>
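
Once each logical record reaches the mapper as a single value like the lines above, the per-year counting itself is straightforward. Below is a minimal sketch of a mapper and reducer using the org.apache.hadoop.mapreduce API; the class names, the list of known exception strings, and the year-extracting regex are illustrative assumptions, not anything given in the question.

// Sketch only: counts occurrences of known exception strings per year,
// assuming each map() call receives one complete log record as its value.
import java.io.IOException;
import java.util.Arrays;
import java.util.List;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;

public class ExceptionCount {

    public static class ExceptionMapper
            extends Mapper<LongWritable, Text, Text, IntWritable> {

        // Hypothetical list of exceptions to look for.
        private static final List<String> EXCEPTIONS =
                Arrays.asList("java.rmi.RemoteException", "Io exception");
        // The year is the 4-digit number after the comma in "<Dec 12, 2013 ...>".
        private static final Pattern YEAR = Pattern.compile(",\\s*(\\d{4})");
        private static final IntWritable ONE = new IntWritable(1);
        private final Text outKey = new Text();

        @Override
        protected void map(LongWritable key, Text value, Context context)
                throws IOException, InterruptedException {
            String record = value.toString();
            Matcher m = YEAR.matcher(record);
            if (!m.find()) {
                return;                       // record has no parsable timestamp
            }
            String year = m.group(1);
            for (String exception : EXCEPTIONS) {
                if (record.contains(exception)) {
                    outKey.set(exception + "\t" + year);
                    context.write(outKey, ONE);
                }
            }
        }
    }

    public static class SumReducer
            extends Reducer<Text, IntWritable, Text, IntWritable> {

        @Override
        protected void reduce(Text key, Iterable<IntWritable> values, Context context)
                throws IOException, InterruptedException {
            int sum = 0;
            for (IntWritable v : values) {
                sum += v.get();
            }
            context.write(key, new IntWritable(sum));
        }
    }
}

The hard part, as described next, is getting each multi-line record delivered to map() as one value in the first place.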

I would be glad if someone could share some code or an idea to get around this problem.

Thanks in advance :)

Best answer

You will need to implement an InputFormat and a RecordReader. What you really need is an adaptation of StreamInputFormat, which exists in the hadoop-streaming project.

For our multi-line XML input we use hadoop-streaming to read from a defined start tag up to an end tag. You can check its source and adapt it to your requirements.
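
As a concrete starting point before adapting StreamInputFormat itself, here is a minimal sketch of a custom FileInputFormat and RecordReader that simply concatenates physical lines until a line ending in ">" closes the logical record. It wraps Hadoop's LineRecordReader, disables splitting so a record never straddles a split boundary, and assumes the newer org.apache.hadoop.mapreduce API; the class names are illustrative, not part of any existing library.

import java.io.IOException;

import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.InputSplit;
import org.apache.hadoop.mapreduce.JobContext;
import org.apache.hadoop.mapreduce.RecordReader;
import org.apache.hadoop.mapreduce.TaskAttemptContext;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.input.LineRecordReader;

// Sketch of an input format whose records run from a line starting with "<"
// up to (and including) the first subsequent line that ends with ">".
public class LogRecordInputFormat extends FileInputFormat<LongWritable, Text> {

    @Override
    public RecordReader<LongWritable, Text> createRecordReader(
            InputSplit split, TaskAttemptContext context) {
        return new LogRecordReader();
    }

    // Keep the sketch simple: do not split files, so a logical record
    // can never be cut in half at a split boundary.
    @Override
    protected boolean isSplitable(JobContext context, Path file) {
        return false;
    }

    public static class LogRecordReader extends RecordReader<LongWritable, Text> {

        private final LineRecordReader lineReader = new LineRecordReader();
        private final LongWritable key = new LongWritable();
        private final Text value = new Text();

        @Override
        public void initialize(InputSplit split, TaskAttemptContext context)
                throws IOException, InterruptedException {
            lineReader.initialize(split, context);
        }

        @Override
        public boolean nextKeyValue() throws IOException, InterruptedException {
            StringBuilder record = new StringBuilder();
            boolean gotLine = false;
            while (lineReader.nextKeyValue()) {
                if (!gotLine) {
                    // Key the record by the byte offset of its first line.
                    key.set(lineReader.getCurrentKey().get());
                    gotLine = true;
                }
                String line = lineReader.getCurrentValue().toString().trim();
                if (record.length() > 0) {
                    record.append(' ');
                }
                record.append(line);
                // A ">" at the end of a physical line closes the logical record.
                if (line.endsWith(">")) {
                    break;
                }
            }
            if (!gotLine) {
                return false;                 // end of input
            }
            value.set(record.toString());
            return true;
        }

        @Override
        public LongWritable getCurrentKey() { return key; }

        @Override
        public Text getCurrentValue() { return value; }

        @Override
        public float getProgress() throws IOException {
            return lineReader.getProgress();
        }

        @Override
        public void close() throws IOException {
            lineReader.close();
        }
    }
}

A driver would then plug this in with job.setInputFormatClass(LogRecordInputFormat.class), after which each map() call receives one consolidated record like the four lines shown in the question.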

Regarding "hadoop - Structuring unstructured data with MapReduce", a similar question can be found on Stack Overflow: https://stackoverflow.com/questions/22938394/
