java - Hadoop MapReduce job completes successfully but writes nothing to the database

I am writing an MR job to mine web server logs. The input to the job comes from text files and the output goes to a MySQL database. The problem is that the job completes successfully but writes nothing to the database. I haven't done MR programming in a while, so it is most likely a bug I can't find. It is not the pattern matching (see below); I have unit tested that and it works fine. What am I missing?

Mac OS X, Oracle JDK 1.8.0_31, hadoop-2.6.0

Note: exceptions are logged; I have omitted the catch bodies for brevity.

SkippableLogRecord:

public class SkippableLogRecord implements WritableComparable&lt;SkippableLogRecord&gt; {
    // fields

    public SkippableLogRecord(Text line) {
        readLine(line.toString());
    }

    private void readLine(String line) {
        Matcher m = PATTERN.matcher(line);

        boolean isMatchFound = m.matches() && m.groupCount() >= 5;

        if (isMatchFound) {
            try {
                jvm = new Text(m.group("jvm"));

                Calendar cal = getInstance();
                cal.setTime(new SimpleDateFormat(DATE_FORMAT).parse(m
                        .group("date")));

                day = new IntWritable(cal.get(DAY_OF_MONTH));
                month = new IntWritable(cal.get(MONTH));
                year = new IntWritable(cal.get(YEAR));

                String p = decode(m.group("path"), UTF_8.name());

                root = new Text(p.substring(1, p.indexOf(FILE_SEPARATOR, 1)));
                filename = new Text(
                        p.substring(p.lastIndexOf(FILE_SEPARATOR) + 1));
                path = new Text(p);

                status = new IntWritable(Integer.parseInt(m.group("status")));
                size = new LongWritable(Long.parseLong(m.group("size")));
            } catch (ParseException | UnsupportedEncodingException e) {
                isMatchFound = false;
            }
        }
    }

    public boolean isSkipped() {
        return jvm == null;
    }

    @Override
    public void readFields(DataInput in) throws IOException {
        jvm.readFields(in);
        day.readFields(in);
        // more code
    }

    @Override
    public void write(DataOutput out) throws IOException {
        jvm.write(out);
        day.write(out);
        // more code
    }

    @Override
    public int compareTo(SkippableLogRecord other) {...}

    @Override
    public boolean equals(Object obj) {...}
}
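The root/filename extraction in readLine can be exercised in isolation with plain JDK classes; a minimal sketch, where the sample path and the FILE_SEPARATOR value are assumptions, not taken from the original code:

```java
import java.io.UnsupportedEncodingException;
import java.net.URLDecoder;
import java.nio.charset.StandardCharsets;

public class PathParts {
    // Assumed to mirror the FILE_SEPARATOR constant in the original class.
    static final char FILE_SEPARATOR = '/';

    public static void main(String[] args) throws UnsupportedEncodingException {
        // Hypothetical request path as it might appear in an access log.
        String raw = "/app/reports/2015/summary%20v2.html";
        String p = URLDecoder.decode(raw, StandardCharsets.UTF_8.name());

        // First path segment ("root") and last segment ("filename"),
        // computed the same way as in readLine above.
        String root = p.substring(1, p.indexOf(FILE_SEPARATOR, 1));
        String filename = p.substring(p.lastIndexOf(FILE_SEPARATOR) + 1);

        System.out.println(root);     // app
        System.out.println(filename); // summary v2.html
    }
}
```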

Mapper:
public class LogMapper extends
        Mapper&lt;LongWritable, Text, SkippableLogRecord, NullWritable&gt; {
    @Override
    protected void map(LongWritable key, Text line, Context context) {
        SkippableLogRecord rec = new SkippableLogRecord(line);

        if (!rec.isSkipped()) {
            try {
                context.write(rec, NullWritable.get());
            } catch (IOException | InterruptedException e) {...}
        }
    }
}

Reducer:
public class LogReducer extends
        Reducer&lt;SkippableLogRecord, NullWritable, DBRecord, NullWritable&gt; {
    @Override
    protected void reduce(SkippableLogRecord rec,
            Iterable&lt;NullWritable&gt; values, Context context) {
        try {
            context.write(new DBRecord(rec), NullWritable.get());
        } catch (IOException | InterruptedException e) {...}
    }
}

DBRecord:
public class DBRecord implements Writable, DBWritable {
    // fields

    public DBRecord(SkippableLogRecord logRecord) {
        jvm = logRecord.getJvm().toString();
        day = logRecord.getDay().get();
        // more code for rest of the fields
    }

    @Override
    public void readFields(ResultSet rs) throws SQLException {
        jvm = rs.getString("jvm");
        day = rs.getInt("day");
        // more code for rest of the fields
    }

    @Override
    public void write(PreparedStatement ps) throws SQLException {
        ps.setString(1, jvm);
        ps.setInt(2, day);
        // more code for rest of the fields
    }
}

Driver:
public class Driver extends Configured implements Tool {
    @Override
    public int run(String[] args) throws Exception {
        Configuration conf = getConf();

        DBConfiguration.configureDB(conf, "com.mysql.jdbc.Driver", // driver
                "jdbc:mysql://localhost:3306/aac", // db url
                "***", // user name
                "***"); // password

        Job job = Job.getInstance(conf, "log-miner");

        job.setJarByClass(getClass());
        job.setMapperClass(LogMapper.class);
        job.setReducerClass(LogReducer.class);
        job.setMapOutputKeyClass(SkippableLogRecord.class);
        job.setMapOutputValueClass(NullWritable.class);
        job.setOutputKeyClass(DBRecord.class);
        job.setOutputValueClass(NullWritable.class);
        job.setInputFormatClass(TextInputFormat.class);
        job.setOutputFormatClass(DBOutputFormat.class);

        FileInputFormat.setInputPaths(job, new Path(args[0]));

        DBOutputFormat.setOutput(job, "log", // table name
                new String[] { "jvm", "day", "month", "year", "root",
                        "filename", "path", "status", "size" } // table columns
        );

        return job.waitForCompletion(true) ? 0 : 1;
    }

    public static void main(String[] args) throws Exception {
        GenericOptionsParser parser = new GenericOptionsParser(
                new Configuration(), args);

        ToolRunner.run(new Driver(), parser.getRemainingArgs());
    }
}

Job execution log:
15/02/28 02:17:58 INFO mapreduce.Job:  map 100% reduce 100%
15/02/28 02:17:58 INFO mapreduce.Job: Job job_local166084441_0001 completed successfully
15/02/28 02:17:58 INFO mapreduce.Job: Counters: 35
File System Counters
FILE: Number of bytes read=37074
FILE: Number of bytes written=805438
FILE: Number of read operations=0
FILE: Number of large read operations=0
FILE: Number of write operations=0
HDFS: Number of bytes read=476788498
HDFS: Number of bytes written=0
HDFS: Number of read operations=11
HDFS: Number of large read operations=0
HDFS: Number of write operations=0
Map-Reduce Framework
Map input records=482230
Map output records=0
Map output bytes=0
Map output materialized bytes=12
Input split bytes=210
Combine input records=0
Combine output records=0
Reduce input groups=0
Reduce shuffle bytes=12
Reduce input records=0
Reduce output records=0
Spilled Records=0
Shuffled Maps =2
Failed Shuffles=0
Merged Map outputs=2
GC time elapsed (ms)=150
Total committed heap usage (bytes)=1381498880
Shuffle Errors
BAD_ID=0
CONNECTION=0
IO_ERROR=0
WRONG_LENGTH=0
WRONG_MAP=0
WRONG_REDUCE=0
File Input Format Counters
Bytes Read=171283337
File Output Format Counters
Bytes Written=0

Best answer

To answer my own question: the problem was leading whitespace in the lines, which caused the matcher to fail. The unit tests never tested with leading whitespace, but the actual logs had it for some reason.
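This failure mode is easy to reproduce with a self-contained check; the pattern below is a hypothetical, much-simplified stand-in for the real PATTERN:

```java
import java.util.regex.Pattern;

public class LeadingWhitespace {
    // Hypothetical simplification of the real log PATTERN.
    static final Pattern PATTERN =
            Pattern.compile("(?<jvm>\\S+) (?<status>\\d{3})");

    public static void main(String[] args) {
        String clean = "jvm-1 200";
        String logged = "  jvm-1 200"; // what the real logs contained

        // matches() requires the WHOLE string to match, so the
        // leading spaces make it fail.
        System.out.println(PATTERN.matcher(clean).matches());          // true
        System.out.println(PATTERN.matcher(logged).matches());         // false
        // Trimming (or anchoring the pattern with \s*) restores the match:
        System.out.println(PATTERN.matcher(logged.trim()).matches());  // true
    }
}
```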
Another problem with the code posted above is that all of the fields in the class were initialized inside the readLine method. As @Anony-Mousse pointed out, this is expensive because Hadoop data types are designed to be reused. More importantly, it broke serialization and deserialization: when Hadoop tried to reconstruct the class by calling readFields, every field was null, which caused an NPE.
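The round trip Hadoop performs can be sketched with plain java.io, no Hadoop dependency; DemoRecord here is a hypothetical stand-in for a Writable, and the point is that Hadoop constructs the key with a no-arg constructor and then calls readFields, so fields must be usable before any parsing constructor ever runs:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;

// Hypothetical stand-in for the Writable contract.
class DemoRecord {
    String jvm = ""; // pre-initialized, so deserialization never sees a half-built object
    int day;

    void write(DataOutputStream out) throws IOException {
        out.writeUTF(jvm); // same field order as readFields
        out.writeInt(day);
    }

    void readFields(DataInputStream in) throws IOException {
        jvm = in.readUTF(); // must mirror write exactly
        day = in.readInt();
    }
}

public class RoundTrip {
    public static void main(String[] args) throws IOException {
        DemoRecord original = new DemoRecord();
        original.jvm = "jvm-1";
        original.day = 28;

        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        original.write(new DataOutputStream(bytes));

        DemoRecord copy = new DemoRecord(); // what Hadoop does: no-arg ctor...
        copy.readFields(new DataInputStream(
                new ByteArrayInputStream(bytes.toByteArray()))); // ...then readFields

        System.out.println(copy.jvm + " " + copy.day); // prints "jvm-1 28"
    }
}
```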
I also made other minor improvements using some Java 8 classes and syntax. In the end, even though it worked, I rewrote the code using Spring Boot, Spring Data JPA, and Spring's asynchronous processing support with @Async.

Regarding java - Hadoop MapReduce job completes successfully but writes nothing to the database, a similar question can be found on Stack Overflow: https://stackoverflow.com/questions/28784479/
