
apache-spark - Spark Streaming for time series processing (splitting the data by time intervals)


I'm receiving a data stream (nginx online logs) from a UDP socket. The data structure is:

date                | ip       | mac    | objectName | rate | size
2016-04-05 11:17:34 | 10.0.0.1 | e1:e2  | book1      | 10   | 121
2016-04-05 11:17:34 | 10.0.0.2 | a5:a8  | book2351   | 8    | 2342
2016-04-05 11:17:34 | 10.0.0.3 | d1:b56 | bookA5     | 10   | 12

2016-04-05 11:17:35 | 10.0.0.1 | e1:e2  | book67     | 10   | 768
2016-04-05 11:17:35 | 10.0.0.2 | a5:a8  | book2351   | 8    | 897
2016-04-05 11:17:35 | 10.0.0.3 | d1:b56 | bookA5     | 9    | 34
2016-04-05 11:17:35 | 10.0.0.4 | c7:c2  | book99     | 9    | 924
...
2016-04-05 11:18:01 | 10.0.0.1 | e1:e2  | book-10    | 8    | 547547
2016-04-05 11:18:17 | 10.0.0.4 | c7:c2  | book99     | 10   | 23423
2016-04-05 11:18:18 | 10.0.0.3 | d1:b56 | bookA5     | 10   | 1138

I need to:

  • Aggregate the data into one-minute buckets - one result row per (minute, ip, mac)
  • objectName - it can change within the minute; I have to take the first one, i.e. for 2016-04-05 11:17:34 | 10.0.0.1 | e1:e2 book1 later changes to book67, so the result must be book1
  • rate - the number of rate changes within the minute
  • size - the difference between the sizes (previous time in the bucket vs. current time in the bucket), i.e. for 10.0.0.1 | e1:e2 it is 768 - 121 (a small Spark-free sketch of these rules follows the expected output below)

So, the result (with the size differences left uncomputed):

date                | ip       | mac    | objectName | changes | size
2016-04-05 11:17:00 | 10.0.0.1 | e1:e2  | book1      | 0       | 768 - 121
2016-04-05 11:17:00 | 10.0.0.2 | a5:a8  | book2351   | 0       | 897 - 2342
2016-04-05 11:17:00 | 10.0.0.3 | d1:b56 | bookA5     | 1       | 34 - 12
2016-04-05 11:17:00 | 10.0.0.4 | c7:c2  | book99     | 0       | 924
...
2016-04-05 11:18:00 | 10.0.0.1 | e1:e2  | book-10    | 0       | 547547
2016-04-05 11:18:00 | 10.0.0.4 | c7:c2  | book99     | 0       | 23423
2016-04-05 11:18:00 | 10.0.0.3 | d1:b56 | bookA5     | 0       | 1138
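
To make the aggregation rules above concrete, here is a small, Spark-free sketch of the per-minute fold over one (minute, ip, mac) group. It assumes the records of the group are already sorted by timestamp; the class and field names are illustrative and not part of the original code:

import java.util.List;

// Illustrative raw record for one log line inside a single (minute, ip, mac) group.
class LogRecord {
    String objectName;
    int rate;
    int size;
}

// Illustrative aggregate matching the expected output columns.
class MinuteAggregate {
    String objectName; // first objectName seen in the minute
    int changes;       // number of rate changes within the minute
    int size;          // last size minus first size (a single sample is taken as-is)
}

class MinuteFold {
    static MinuteAggregate aggregate(List<LogRecord> sortedByTime) {
        MinuteAggregate agg = new MinuteAggregate();
        LogRecord first = sortedByTime.get(0);
        LogRecord last = sortedByTime.get(sortedByTime.size() - 1);

        agg.objectName = first.objectName;       // keep the first objectName of the minute
        agg.size = sortedByTime.size() == 1
                ? first.size                     // only one sample in the minute
                : last.size - first.size;        // otherwise: current size minus previous size

        int changes = 0;                         // count how often the rate changed
        for (int i = 1; i < sortedByTime.size(); i++) {
            if (sortedByTime.get(i).rate != sortedByTime.get(i - 1).rate) {
                changes++;
            }
        }
        agg.changes = changes;
        return agg;
    }
}

Applied to the sample above, the group (11:17, 10.0.0.1, e1:e2) yields objectName book1, 0 changes and size 768 - 121, matching the expected output.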

Here is a snapshot of my code. I know about updateStateByKey and window, but I can't quite figure out how to flush the data to a database or to the file system when the period (the minute) rolls over:

private static final Duration SLIDE_INTERVAL = Durations.seconds(10);
private static final String nginxLogHost = "localhost";
private static final int nginxLogPort = 9999;

// POJOs (constructors and getters omitted for brevity)
private static class Raw {
    LocalDateTime time; // full time with seconds
    String ip;
    String mac;
    String objectName;
    int rate;
    int size;
}

private static class Key {
    LocalDateTime time; // time truncated to the minute (seconds = 00)
    String ip;
    String mac;
}

private static class RawValue {
    LocalDateTime time; // full time with seconds
    String objectName;
    int rate;
    int size;
}

private static class Value {
    String objectName;
    int changes;
    int size;
}

public static void main(String[] args) {
    SparkConf conf = new SparkConf().setMaster("local[4]").setAppName("TestNginxLog");
    conf.set("spark.serializer", "org.apache.spark.serializer.KryoSerializer");
    JavaStreamingContext jssc = new JavaStreamingContext(conf, SLIDE_INTERVAL);
    jssc.checkpoint("/tmp");

    // NginxUDPReceiver is a custom receiver that parses each UDP packet into a Raw record
    JavaReceiverInputDStream<Raw> logRecords =
            jssc.receiverStream(new NginxUDPReceiver(nginxLogHost, nginxLogPort));

    // Key each record by (minute, ip, mac); keep the full timestamp in the value
    PairFunction<Raw, Key, RawValue> pairFunction = rawLine -> {
        LocalDateTime time = rawLine.getDateTime();
        Key k = new Key(time.withSecond(0).withNano(0), rawLine.getIp(), rawLine.getMac());
        RawValue v = new RawValue(time, rawLine.getObjectName(), rawLine.getRate(), rawLine.getSize());
        return new Tuple2<>(k, v);
    };
    JavaPairDStream<Key, RawValue> logDStream = logRecords.mapToPair(pairFunction);

Best answer

This is a partial answer; the problem is not completely solved yet. After mapToPair I use:

// 1 key -> N values (group within the batch)
JavaPairDStream<Key, Iterable<Value>> abonentConnects = logDStream.groupByKey();

// Accumulate data across batches
Function2<List<Iterable<Value>>, Optional<List<Value>>, Optional<List<Value>>> updateFunc =
        (values, previousState) -> {
            List<Value> sum = previousState.or(new ArrayList<>());
            for (Iterable<Value> v : values) {
                v.forEach(sum::add);
            }
            return Optional.of(sum);
        };
JavaPairDStream<Key, List<Value>> state = abonentConnects.updateStateByKey(updateFunc);

// Keep only the keys belonging to the previous (already closed) minute
Function<Tuple2<Key, List<Value>>, Boolean> filterFunc = v1 -> {
    LocalDateTime previousTime = LocalDateTime.now().minusMinutes(1).withSecond(0).withNano(0);
    LocalDateTime valueTime = v1._1().getTime();
    return valueTime.compareTo(previousTime) == 0;
};
JavaPairDStream<Key, List<Value>> filteredRecords = state.filter(filterFunc);

// Save the closed minute to the file system
filteredRecords.foreachRDD(x -> {
    if (x.count() > 0) {
        x.saveAsTextFile("/tmp/xxx/grouped/" + LocalDateTime.now().toString().replace(":", "-").replace(".", "-"));
    }
});

jssc.start();
jssc.awaitTermination();

This produces the result data, but because the operation is executed every 5 seconds, I keep getting the same data duplicated every 5 seconds.
I know that I have to use Optional.absent() to remove the already-saved data from the stream state. I tried to use it, but I could not combine everything into one piece: save the data to the file system or to a HashMap and immediately clear the saved state.
Question: how do I do this?
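
For the open question (clearing the state right after it has been flushed), one possible pattern is to keep the bucket's minute and an "already emitted" flag inside the state itself: the first batch after the minute closes marks the state as emitted (and it gets written out in that same batch), and the next batch returns Optional.absent() so Spark drops the key and it is never saved again. This replaces the time-based filter from the snippet above with a flag-based filter. A minimal sketch, assuming a hypothetical MinuteState wrapper that is not part of the original code, the same Guava-style Optional as above, and that records for a key stop arriving once its minute is over (late records after emission are not handled here):

// Hypothetical state wrapper: accumulated values, the minute they belong to,
// and a flag telling whether the closed minute was already handed downstream.
class MinuteState implements Serializable {
    LocalDateTime minute;
    List<Value> values = new ArrayList<>();
    boolean emitted = false;
}

Function2<List<Iterable<Value>>, Optional<MinuteState>, Optional<MinuteState>> updateFunc =
        (newValues, previousState) -> {
            LocalDateTime currentMinute = LocalDateTime.now().withSecond(0).withNano(0);

            // The closed minute was already emitted in the previous batch:
            // drop the key so it is not written out a second time.
            if (previousState.isPresent() && previousState.get().emitted) {
                return Optional.absent();
            }

            MinuteState state = previousState.or(new MinuteState());
            if (state.minute == null) {
                // Approximate the bucket by the minute in which the key first appears;
                // with 10-second batches this normally matches the Key's minute.
                state.minute = currentMinute;
            }
            for (Iterable<Value> batch : newValues) {
                batch.forEach(state.values::add);
            }

            // The minute is over: mark it as emitted and keep it for exactly one
            // more batch, during which the filter/foreachRDD below writes it out.
            if (state.minute.isBefore(currentMinute)) {
                state.emitted = true;
            }
            return Optional.of(state);
        };

JavaPairDStream<Key, MinuteState> state = abonentConnects.updateStateByKey(updateFunc);

// Write out only the buckets that have just been marked as emitted.
JavaPairDStream<Key, MinuteState> closedMinutes = state.filter(t -> t._2().emitted);

With this shape, each (minute, ip, mac) bucket passes the filter in exactly one batch and then disappears from the state, so the foreachRDD/saveAsTextFile step no longer rewrites the same data every few seconds.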

Regarding apache-spark - Spark Streaming for time series processing (splitting the data by time intervals), a similar question can be found on Stack Overflow: https://stackoverflow.com/questions/36420725/
