
Reducing the processing time to calculate coefficients


I have a database and a function from which I get coef values (computed with the lm function). There are two ways to calculate them: the first returns a specific coefficient depending on the Id, date and Category chosen; the other calculates all possible coefs, based on subset_df1.

The code works. The first way is computed instantly, but the calculation of all coefs, as you can see, takes quite a long time. I used the tictoc package just to show you the computation time, which gives 633.38 sec elapsed. An important point to stress is that df1 is not such a small database, but for the calculation of all coefs I filter it, in this case to subset_df1.

I added comments in the code so you can better understand what I am doing. The idea is to generate coef values for all dates >= date1.

Finally, I would like to reasonably reduce the processing time to calculate all the coef values.

library(dplyr)
library(tidyr)
library(lubridate)
library(tictoc)

# database
df1 <- data.frame(
  Id = rep(1:5, length.out = 900),
  date1 = as.Date("2021-12-01"),
  date2 = rep(seq(as.Date("2021-01-01"), length.out = 450, by = 1), each = 2),
  Category = rep(c("ABC", "EFG"), length.out = 900),
  Week = rep(c("Monday", "Tuesday", "Wednesday", "Thursday", "Friday",
               "Saturday", "Sunday"), length.out = 900),
  DR1 = sample(200:250, 900, replace = TRUE),
  setNames(replicate(365, sample(0:900, 900), simplify = FALSE),
           paste0("DRM", formatC(1:365, width = 2, format = "d", flag = "0"))))

return_coef <- function(df1, idd, dmda, CategoryChosse) {

  # First idea: calculate the median of the values resulting from the
  # subtraction between DR1 and the values of the DRM columns.

  subsetDRM <- df1 %>% select(starts_with("DRM"))

  DR1_subsetDRM <- cbind(df1, setNames(df1$DR1 - subsetDRM, paste0(names(subsetDRM), "_PV")))

  subset_PV <- select(DR1_subsetDRM, Id, date2, Week, Category, DR1, ends_with("PV"))

  result_median <- subset_PV %>%
    group_by(Id, Category, Week) %>%
    dplyr::summarize(dplyr::across(ends_with("PV"), median), .groups = 'drop')

  # Second idea: after obtaining the median, add the values found to the
  # values of the DRM columns of my df1 database.

  Sum_DRM_result_median <- df1 %>%
    inner_join(result_median, by = c('Id', 'Category', 'Week')) %>%
    mutate(across(matches("^DRM\\d+$"), ~ .x + get(paste0(cur_column(), '_PV')),
                  .names = '{col}_{col}_PV')) %>%
    select(Id:Category, DRM01_DRM01_PV:last_col())

  Sum_DRM_result_median <- data.frame(Sum_DRM_result_median)

  # Third idea: filter a specific row from Sum_DRM_result_median, which
  # depends on what the user chooses; for that an Id, date and Category
  # must be chosen.

  # remove_values_0 drops the columns whose trailing values are all zero
  # (this was solved here: https://stackoverflow.com/questions/69452882/delete-column-depending-on-the-date-and-code-you-choose)
  remove_values_0 <- df1 %>%
    dplyr::filter(Id == idd, date2 == ymd(dmda), Category == CategoryChosse) %>%
    select(starts_with("DRM")) %>%
    pivot_longer(cols = everything()) %>%
    arrange(desc(row_number())) %>%
    mutate(cs = cumsum(value)) %>%
    dplyr::filter(cs == 0) %>%
    pull(name)
  (dropnames <- paste0(remove_values_0, "_", remove_values_0, "_PV"))

  filterid_date_category <- Sum_DRM_result_median %>%
    filter(Id == idd, date2 == ymd(dmda), Category == CategoryChosse) %>%
    select(-any_of(dropnames))

  # Fourth idea: after selecting the corresponding row, select the data for
  # the coef calculation. For this, delete some initial rows, depending on
  # the day chosen.

  datas <- filterid_date_category %>%
    filter(Id == idd, date2 == ymd(dmda)) %>%
    group_by(Category) %>%
    summarize(across(starts_with("DRM"), sum), .groups = 'drop') %>%
    pivot_longer(cols = -Category, names_pattern = "DRM(.+)", values_to = "val") %>%
    mutate(name = readr::parse_number(name))
  colnames(datas)[-1] <- c("days", "numbers")

  datas <- datas %>%
    group_by(Category) %>%
    slice((ymd(dmda) - min(as.Date(df1$date1)[
      df1$Category == first(Category)])):max(days) + 1) %>%
    ungroup

  # After computing the datas dataset, use the lm function to obtain the coef value.

  mod <- lm(numbers ~ I(days^2), datas)
  val <- as.numeric(coef(mod)[1])

  return(val)

}

To calculate the coef for a specific Id, date2 and Category in my df1 database, I do:

return_coef(df1,"2","2021-12-10","ABC")
[1] 209.262 # This value may vary, as the values in my df1 database vary

To calculate all coefs, I do:

tic()
subset_df1 <- subset(df1, date2 >= date1)

All <- subset_df1 %>%
  transmute(
    Id, date2, Category,
    coef = mapply(return_coef, list(cur_data()), Id, as.Date(date2), Category))
toc()
633.38 sec elapsed

Best Answer

There are too many issues in your code. We need to start from scratch. In general, here are some major problems:

  1. Don't perform expensive operations so many times. Things like pivot_* and *_join are not cheap, since they change the structure of the entire dataset. Don't use them casually as if they came for free.

  2. Don't repeat yourself. I saw filter(Id == idd, Category == ...) several times in your function. The rows that are filtered out won't come back. This just wastes computational power and makes your code unreadable.

  3. Think carefully before you code. It seems that you want regression results for multiple combinations of idd, date2 and Category. Then, should the function be designed to accept only scalar inputs, so that we run it many times, each run involving several expensive data operations on a relatively large dataset, or should it be designed to accept vector inputs, perform fewer operations, and return all the results at once? The answer to this question should be clear. (A minimal sketch contrasting the two designs follows this list.)
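
To make point 3 concrete, here is a minimal sketch of the two designs. Everything in it is made up for illustration: slow_summary() just stands in for an expensive operation such as a pivot or join, and toy, scalar_fn() and vector_fn() are hypothetical names.

slow_summary <- function(df) { Sys.sleep(0.1); df }  # stand-in for an expensive pivot/join

# Scalar design: the expensive step runs once per query.
scalar_fn <- function(df, id) {
  df <- slow_summary(df)
  mean(df$value[df$id == id])
}

# Vector design: the expensive step runs once; all queries are answered at once.
vector_fn <- function(df, ids) {
  df <- slow_summary(df)
  vapply(ids, function(i) mean(df$value[df$id == i]), numeric(1))
}

toy <- data.frame(id = rep(1:3, each = 5), value = rnorm(15))
system.time(sapply(1:3, function(i) scalar_fn(toy, i)))  # ~0.3s: slow_summary() ran 3 times
system.time(vector_fn(toy, 1:3))                         # ~0.1s: slow_summary() ran once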

Now I will show you how I would approach this problem. The steps are:

  1. Find the relevant subset for each group of idd, dmda and CategoryChosse at once. We can use one or two joins to find the corresponding subsets. Since we also need to compute the median for each Week group, we also want to find, for each dmda, the corresponding dates that are in the same Week group.

  2. Pivot the data from wide to long, once and for all. Use row IDs to preserve the row relations. Call the column containing those "DRMXX" names day and the column containing the values value.

  3. Find whether trailing zeros exist for each row ID. Use rev(cumsum(rev(x)) != 0) instead of a long and inefficient pipeline (see the short standalone example after this list).

  4. Compute the median-adjusted values by each group of "Id", "Category", ..., "day" and "Week". Doing things by group in the long data format is natural and efficient.

  5. Aggregate the Week groups. This follows directly from your code, while we also filter out, for each group, the days smaller than the difference between each dmda and the corresponding date1.

  6. Run lm for each group of Id, Category and dmda identified.

  7. Use data.table for greater efficiency.

  8. (Optional) Use a different median function rewritten in C++, since the one in base R (stats::median) is a bit slow (stats::median is a generic method dispatching on various input types, but in this case we only need it to take the median of numerics). The median function is adapted from here.
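
As a quick standalone illustration of the trailing-zero test in step 3 (x here is my own toy vector, not from the data above):

x <- c(0, 3, 0, 5, 0, 0, 0)
rev(cumsum(rev(x)) != 0)
# [1]  TRUE  TRUE  TRUE  TRUE FALSE FALSE FALSE   <- only the trailing zeros are flagged FALSE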

The code below demonstrates these steps:

Rcpp::sourceCpp(code = '
#include <Rcpp.h>

// [[Rcpp::export]]
double mediancpp(Rcpp::NumericVector& x, const bool na_rm) {
  std::size_t m = x.size();
  if (m < 1) Rcpp::stop("zero length vector not allowed.");
  if (!na_rm) {
    for (Rcpp::NumericVector::iterator i = x.begin(); i != x.end(); ++i)
      if (Rcpp::NumericVector::is_na(*i)) return *i;
  } else {
    for (Rcpp::NumericVector::iterator i = x.begin(); i != x.begin() + m; )
      Rcpp::NumericVector::is_na(*i) ? std::iter_swap(i, x.begin() + --m) : (void)++i;
  }
  if (m < 1) return x[0];

  std::size_t n = m / 2;
  std::nth_element(x.begin(), x.begin() + n, x.begin() + m);

  return m % 2 ? x[n] : (x[n] + *std::max_element(x.begin(), x.begin() + n)) / 2.;
}
')
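
A quick sanity check of the helper (my own snippet, not part of the original answer): it should agree with stats::median on numeric input.

x <- c(2, 9, NA, 4, 7)
mediancpp(x + 0, na_rm = TRUE)  # x + 0 passes a temporary copy, since mediancpp reorders its argument in place
median(x, na.rm = TRUE)         # both return 5.5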

dt_return_intercept <- function(dt1, idd, dmda, category) {
  # type checks
  stopifnot(
    data.table::is.data.table(dt1),
    length(idd) == length(dmda),
    length(idd) == length(category)
  )
  dmda <- switch(
    class(dt1$date2),
    character = as.character(dmda), Date = as.Date(dmda, "%Y-%m-%d"),
    stop("non-conformable types between `dmda` and `dt1$date2`")
  )
  idd <- as(idd, class(dt1$Id))

  # find subsets
  DT <- data.table::setDT(list(Id = idd, date2 = dmda, Category = category, order = seq_along(idd)))
  DT <- dt1[
    dt1[DT, .(Id, Category, date2, Week, order), on = .NATURAL],
    on = .(Id, Category, Week), allow.cartesian = TRUE
  ]
  DT[, c("rowid", "date1", "date2", "i.date2") := c(
    list(seq_len(.N)), lapply(.SD, as.Date, "%Y-%m-%d")
  ), .SDcols = c("date1", "date2", "i.date2")]

  # pivot + type conversion
  DT <- data.table::melt(DT, measure = patterns("DRM(\\d+)"), variable = "day")
  DT[, `:=`(day = as.integer(sub("^\\D+", "", day)), value = as.numeric(value))]

  # computations
  DT[, keep := rev(cumsum(rev(value)) != 0), by = "rowid"]
  DT[, value := value + mediancpp(DR1 - value, TRUE),
     by = c("Id", "Category", "i.date2", "date1", "day", "Week")]
  DT <- DT[date2 == i.date2 & keep & day > i.date2 - date1,
           .(value = sum(value), order = order[[1L]]),
           by = c("Id", "Category", "i.date2", "date1", "day")]
  DT[, .(out = coef(lm(value ~ I(day^2), .SD))[[1L]], order = order[[1L]]), # coef(...)[[1L]] gives you the intercept, not the coefficient of day^2. Are you sure this is what you want?
     by = c("Id", "Category", "i.date2")][order(order)]$out
}
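
For a single query, a call mirroring return_coef(df1, "2", "2021-12-10", "ABC") from the question would look like this (dt1 is just a data.table copy of df1, as in the benchmark below):

dt1 <- data.table::setDT(data.table::copy(df1))
dt_return_intercept(dt1, "2", "2021-12-10", "ABC")
# [1] 209.262  (the value varies with the random draws in df1)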

Benchmark

params <- (params <- unique(df1[df1$date1 <= df1$date2, c(1L, 3L, 4L)]))[sample.int(nrow(params), 20L), ]
dt1 <- data.table::setDT(data.table::copy(df1)) # nothing but a data.table version of `df1`
microbenchmark::microbenchmark(
  mapply(function(x, y, z) return_coef(df1, x, y, z),
         params$Id, params$date2, params$Category),
  dt_return_intercept(dt1, params$Id, params$date2, params$Category),
  dt_return_intercept_base(dt1, params$Id, params$date2, params$Category), # same, but with stats::median instead of mediancpp
  times = 10L, check = "equal"
)

The results are shown below. check = "equal" doesn't raise an error, which means all three functions return the same results. The function using mediancpp is about 136x faster than yours, and the one using stats::median is about 73x faster than yours. To avoid copies, mediancpp takes its first argument by reference, so it needs to be used with caution. This behaviour fits well in this case, since DR1 - value creates a temporary object that affects none of our variables (see the small demo after the output).

Unit: milliseconds
expr min lq mean median uq max neval
mapply(function(x, y, z) return_coef(df1, x, y, z), params$Id, params$date2, params$Category) 11645.1729 11832.4373 11902.36716 11902.95195 11979.4154 12145.1154 10
dt_return_intercept(dt1, params$Id, params$date2, params$Category) 68.3173 72.4008 87.14596 75.24725 88.6007 167.2546 10
dt_return_intercept_base(dt1, params$Id, params$date2, params$Category) 153.9713 157.0826 163.18133 162.12175 167.2681 176.6866 10
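
To see the by-reference behaviour mentioned above in isolation (my own demo): mediancpp rearranges its argument in place via std::nth_element, so the input vector is left partially reordered.

x <- c(9, 1, 5, 3, 7)
mediancpp(x, na_rm = FALSE)  # returns 5
x                            # no longer in its original order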

On reducing the processing time to calculate coefficients, there is a similar question on Stack Overflow: https://stackoverflow.com/questions/70698707/
