r - "Unnest"重叠时间间隔


I am trying to create a plot for a set of filters that operate in a lead/lag configuration.
A quick description of lead/lag:
When a new filter comes online, it is placed in the lag position, meaning water passes through it after going through the primary (a.k.a. lead) filter. When the lead filter plugs up, the current lag filter moves into the lead position. In short, a filter starts out in the lag position and later moves up to the lead position.
Visually, you can picture it like this:
Filter Diagram
What I need to do is "unnest" (for lack of a better word) the overlapping time periods. In other words, I want each filter to have one continuous run of timestamps, regardless of which lead/lag position it was in at the time.
The data is structured like this:

data <- structure(list(record_timestamp = structure(c(1608192000, 1608192060, 1608192120, 1608192180, 1608192240, 1608192300, 1608192360, 1608192420, 1608192480, 1608192540, 1608192600, 1608192660, 1608192720, 1608192780, 1608192840, 1608192900, 1608192960, 1608193020, 1608193080, 1608193140, 1608193200, 1608193260, 1608193320, 1608193380, 1608193440, 1608193500, 1608193560, 1608193620, 1608193680, 1608193740, 1608193800), class = c("POSIXct", "POSIXt"), tzone = "UTC"), flow = c(20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 15, 15, 15, 15, 15, 15, 15, 15, 15, 15, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10), lag_start = structure(c(1608192000, 1608192000, 1608192000, 1608192000, 1608192000, 1608192000, 1608192000, 1608192000, 1608192000, 1608192000, 1608192000, 1608192660, 1608192660, 1608192660, 1608192660, 1608192660, 1608192660, 1608192660, 1608192660, 1608192660, 1608192660, 1608193260, 1608193260, 1608193260, 1608193260, 1608193260, 1608193260, 1608193260, 1608193260, 1608193260, 1608193260), class = c("POSIXct", "POSIXt"), tzone = "UTC"), lead_start = structure(c(NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, 1608192660, 1608192660, 1608192660, 1608192660, 1608192660, 1608192660, 1608192660, 1608192660, 1608192660, 1608192660, 1608193260, 1608193260, 1608193260, 1608193260, 1608193260, 1608193260, 1608193260, 1608193260, 1608193260, 1608193260), class = c("POSIXct", "POSIXt"), tzone = "UTC"), changeout_interval = new("Interval",     .Data = c(0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 660, 0, 0, 0, 0,     0, 0, 0, 0, 0, 600, 0, 0, 0, 0, 0, 0, 0, 0, 0, NA), start = structure(c(1608192000,     1608192000, 1608192000, 1608192000, 1608192000, 1608192000,     1608192000, 1608192000, 1608192000, 1608192000, 1608192000,     1608192660, 1608192660, 1608192660, 1608192660, 1608192660,     1608192660, 1608192660, 1608192660, 1608192660, 1608192660,     1608193260, 1608193260, 1608193260, 1608193260, 1608193260,     1608193260, 1608193260, 1608193260, 1608193260, 1608193260    ), tzone = "UTC", class = c("POSIXct", "POSIXt")), tzone = "UTC")), class = c("spec_tbl_df", "tbl_df", "tbl", "data.frame"), row.names = c(NA, -31L), spec = structure(list(    cols = list(record_timestamp = structure(list(), class = c("collector_character",     "collector")), flow = structure(list(), class = c("collector_double",     "collector")), polish_start = structure(list(), class = c("collector_character",     "collector")), lead_start = structure(list(), class = c("collector_character",     "collector"))), default = structure(list(), class = c("collector_guess",     "collector")), skip = 1), class = "col_spec"))
What I envision the end result looking like is this:
end_data <- structure(list(record_timestamp = structure(c(1608192000, 1608192060,1608192120, 1608192180, 1608192240, 1608192300, 1608192360, 1608192420,1608192480, 1608192540, 1608192600, 1608192660, 1608192720, 1608192780,1608192840, 1608192900, 1608192960, 1608193020, 1608193080, 1608193140,1608193200, 1608192660, 1608192720, 1608192780, 1608192840, 1608192900,1608192960, 1608193020, 1608193080, 1608193140, 1608193200, 1608193260,1608193320, 1608193380, 1608193440, 1608193500, 1608193560,1608193620,1608193680, 1608193740, 1608193800), class = c("POSIXct", "POSIXt"), tzone = "UTC"), flow = c(20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 15, 15, 15, 15, 15, 15, 15, 15, 15, 15, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA), lag_start = structure(c(1608192000, 1608192000, 1608192000,1608192000, 1608192000, 1608192000, 1608192000, 1608192000,1608192000,1608192000, 1608192000, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, 1608192660, 1608192660, 1608192660, 1608192660, 1608192660, 1608192660,1608192660, 1608192660, 1608192660, 1608192660, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA), class = c("POSIXct", "POSIXt"), tzone = "UTC"), lead_start = structure(c(NA, NA, NA, NA, NA, NA, NA, NA,NA, NA, NA, 1608192660, 1608192660, 1608192660, 1608192660,1608192660, 1608192660, 1608192660, 1608192660, 1608192660,1608192660, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, 1608193260,1608193260, 1608193260, 1608193260, 1608193260, 1608193260,1608193260, 1608193260, 1608193260, 1608193260), class = c("POSIXct","POSIXt"), tzone = "UTC"), filter_id = c(1, 1, 1, 1, 1, 1,1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2)), class = c("spec_tbl_df",                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                       "tbl_df", "tbl", "data.frame"), row.names = c(NA, -41L), spec = structure(list(cols = list(record_timestamp = structure(list(), 
class = c("collector_character","collector")), flow = structure(list(), class = c("collector_double","collector")), polish_start = structure(list(), class = c("collector_character","collector")), lead_start = structure(list(), class = c("collector_character", "collector")), filter_id = structure(list(), class = c("collector_double","collector"))), default = structure(list(), class = c("collector_guess","collector")), skip = 1), class = "col_spec"))
This doubles up the timestamps, but it makes plotting much easier because I can group_by the filter_id column.
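For illustration only (this sketch is not part of the original question), here is roughly the kind of plot that layout would allow, assuming end_data exists exactly as above; coalesce() is used only so that cumsum() does not propagate the NA flow values present in the example data:

library(dplyr)
library(ggplot2)

end_data %>%
  group_by(filter_id) %>%
  mutate(elapsed_time = as.numeric(difftime(record_timestamp, record_timestamp[1], units = "mins")),
         total_flow = cumsum(coalesce(flow, 0))) %>%  # coalesce() guards against the NA flows in the example data
  ggplot(aes(x = elapsed_time, y = total_flow, color = factor(filter_id))) +
  geom_line()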
What I have so far is a set of time intervals for each filter, from when it starts in the lag position until it comes offline. Here is the code:
library(dplyr)
library(lubridate)

intervals <- data %>%
  distinct(lag_start, .keep_all = TRUE) %>%
  mutate(changeout_interval = interval(lag_start, lead(lag_start, 2))) %>%
  select(record_timestamp, changeout_interval)
From there, how do I filter for all of the timestamps that fall within each interval? It is almost like a conditional pivot_longer.
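As an illustration of one way this could be approached (not from the original post), lubridate's %within% operator tests whether a timestamp falls inside an Interval, so each changeout interval could be used to pull its matching rows; one_interval here is just a hypothetical name for a single interval taken from the intervals table built above:

library(dplyr)
library(lubridate)

# take the first (complete) interval from the `intervals` table above
one_interval <- intervals$changeout_interval[1]

# keep only the rows whose timestamp falls inside that interval
data %>%
  filter(record_timestamp %within% one_interval)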
The end goal is to be able to plot the entire life of a filter, both lead and lag, with just a few lines of ggplot2. This is what I have for the plot so far:
library(ggplot2)

grouped_data <- data %>%
  group_by(lag_start) %>%
  mutate(elapsed_time = difftime(record_timestamp,
                                 record_timestamp[1],
                                 units = "mins"),
         total_flow = cumsum(flow))

ggplot(grouped_data, aes(x = elapsed_time, y = total_flow)) +
  geom_line(aes(color = as.factor(lag_start)))
But that plot does not include the flow for the time each filter spent in the lead position.

Best Answer

Use dense_rank on lag_start to group the filters, then create one record per filter. This keeps the information in wide format, since the intervals and end_data have different data structures.

library(dplyr)
library(lubridate)

data %>%
  select(-changeout_interval) %>% # drop the interval included in the example data; it is recalculated below
  mutate(filter_id = dense_rank(lag_start)) %>%
  group_by(filter_id) %>%
  slice(1) %>%
  ungroup() %>%
  mutate(lead_start = lead(lead_start),
         lead_end = lead(lead_start),
         changeout_interval = interval(lag_start, lead_end))

# A tibble: 3 x 7
  record_timestamp     flow lag_start           lead_start          filter_id lead_end
  <dttm>              <dbl> <dttm>              <dttm>                  <int> <dttm>
1 2020-12-17 08:00:00    20 2020-12-17 08:00:00 2020-12-17 08:11:00         1 2020-12-17 08:21:00
2 2020-12-17 08:11:00    15 2020-12-17 08:11:00 2020-12-17 08:21:00         2 NA
3 2020-12-17 08:21:00    10 2020-12-17 08:21:00 NA                          3 NA
# ... with 1 more variable: changeout_interval <Interval>
Update to address the clarification added to the question. The same dense_rank approach is used, then the data is switched to long format with pivot_longer, which makes the requested cumsum easier to plot.
library(dplyr)
library(tidyr)
library(ggplot2)

plot_data <- data %>%
  select(-changeout_interval) %>% # drop the interval included in the example data
  mutate(filter_lag = dense_rank(lag_start),
         filter_lead = filter_lag - 1) %>%
  select(-lag_start, -lead_start) %>%
  pivot_longer(cols = starts_with("filter_"),
               names_to = "position",
               names_prefix = "filter_",
               values_to = "filter") %>%
  filter(filter > 0) %>% # drops the starting filter, as the data shows no lead filter for it?
  group_by(filter) %>%
  mutate(elapsed_time = difftime(record_timestamp, record_timestamp[1], units = "mins"),
         rolling_flow = cumsum(flow))
Plot elapsed_time against rolling_flow:
ggplot(plot_data, aes(x = as.numeric(elapsed_time),
                      y = rolling_flow,
                      color = factor(filter))) +
  geom_line()
[plot of rolling_flow vs. elapsed_time, colored by filter]

关于r - "Unnest"重叠时间间隔,我们在Stack Overflow上找到一个类似的问题: https://stackoverflow.com/questions/65412698/
