
r - Efficient method for counting open cases at the time of each case's submission in a large dataset


In a large dataset (~1M cases), each case has a "created" and a "censored" dateTime. I want to count the number of other cases that were open at the time each case was created. A case is open between its "created" and "censored" dateTimes.
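To make the counting rule concrete, here is a small hand-made example (my own illustration, not part of the benchmark script below; the toy values and the `toy` name are arbitrary):

library(data.table)

# Three hand-made cases: case 1 is still open when case 2 is created;
# case 1 is already censored, but case 2 is still open, when case 3 is created.
toy <- data.table(id = 1:3,
                  created  = as.POSIXct(c("2000-01-01", "2000-01-05", "2000-01-20"), tz = "UTC"),
                  censored = as.POSIXct(c("2000-01-10", "2000-02-01", "2000-03-01"), tz = "UTC"));

# For each case, count the other cases created before it and still uncensored at that moment
toy[, open_cases_at_creation := sapply(created,
                                       function(ct) sum(toy$created < ct & toy$censored > ct))];
toy$open_cases_at_creation;  # expected: 0 1 1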

Some solutions work on small datasets (<100,000 cases), but computation time grows rapidly; my estimate is that it increases roughly as 3n^2. By n = 100,000 cases, computation time exceeds 20 minutes on my server with 6 cores at 4 GHz and 64 GB of RAM. Even with multi-core libraries, the time would at best be reduced by a factor of 8 or 10. That is not enough to handle ~1M cases.

I'm looking for a more efficient way to do this calculation. Below I provide a function that lets you easily create large numbers of "created" and "censored" dateTime pairs using the dplyr and data.table libraries, along with the solutions attempted so far. For simplicity, timings are reported to the user. You can simply change the "CASE_COUNT" variable at the top, re-run to see the timings, and easily compare the timings of other solutions you may want to suggest.

I will update the original post with additional solutions to appropriately credit their authors. Thanks in advance for your help!

# Load libraries used in this example
library(dplyr);
library(data.table);
# Not on CRAN. See: http://bioconductor.org/packages/release/bioc/html/IRanges.html
library(IRanges);

# Set seed for reproducibility
set.seed(123)

# Set number of cases & date range variables
CASE_COUNT <<- 1000;
RANGE_START <- as.POSIXct("2000-01-01 00:00:00",
                          format="%Y-%m-%d %H:%M:%S",
                          tz="UTC", origin="1970-01-01");
RANGE_END <- as.POSIXct("2012-01-01 00:00:00",
                        format="%Y-%m-%d %H:%M:%S",
                        tz="UTC", origin="1970-01-01");

# Select which solutions you want to run in this test
RUN_SOLUTION_1 <- TRUE; # dplyr::summarize() + comparisons
RUN_SOLUTION_2 <- TRUE; # data.table::foverlaps()
RUN_SOLUTION_3 <- TRUE; # data.table aggregation + comparisons
RUN_SOLUTION_4 <- TRUE; # IRanges::IRanges + countOverlaps()
RUN_SOLUTION_5 <- TRUE; # data.table::frank()

# Function to generate random creation & censor dateTime pairs
# The censor time always has to be after the creation time
# Credit to @DirkEddelbuettel for this smart function
# (https://stackoverflow.com/users/143305/dirk-eddelbuettel)

generate_cases_table <- function(n = CASE_COUNT, start_val=RANGE_START, end_val=RANGE_END) {
    # Measure duration between start_val & end_val
    duration <- as.numeric(difftime(end_val, start_val, units="secs"));

    # Select random values in duration to create start_offset
    start_offset <- runif(n, 0, duration);

    # Calculate the creation time list
    created_list <- start_offset + start_val;

    # Calculate acceptable time range for censored values
    # since they must always be after their respective creation value
    censored_range <- as.numeric(difftime(end_val, created_list, units="secs"));

    # Select random values in duration to create end_offset
    creation_to_censored_times <- runif(n, 0, censored_range);

    censored_list <- created_list + creation_to_censored_times;

    # Create and return a data.table with creation & censor values
    # calculated from start or end with random offsets
    return_table <- data.table(id = 1:n,
                               created = created_list,
                               censored = censored_list);

    return(return_table);
}

# Create the data table with the desired number of cases specified by CASE_COUNT above
cases_table <- generate_cases_table();

solution_1_function <- function (cases_table) {
    # SOLUTION 1: Using dplyr::summarize:

    # Group by id to set parameters for summarize() function
    cases_table_grouped <- group_by(cases_table, id);

    # Count the instances where other cases were created before
    # and censored after each case using vectorized sum() within summarize()
    cases_table_summary <- summarize(cases_table_grouped,
                                     open_cases_at_creation = sum((cases_table$created < created &
                                                                    cases_table$censored > created)));

    solution_1_table <<- as.data.table(cases_table_summary, key="id");
} # End solution_1_function

solution_2_function <- function (cases_table) {
    # SOLUTION 2: Using data.table::foverlaps:

    # Adapted from solution provided by @Davidarenburg
    # (https://stackoverflow.com/users/3001626/david-arenburg)

    # The foverlaps() solution tends to crash R with large case counts
    # I suspect it has to do with memory assignment of the very large objects
    # It maxes RAM on my system (64GB) before crashing, possibly attempting
    # to write beyond its assigned memory limits.
    # I'll submit a reproducible bug to the data.table team since
    # foverlaps() is pretty new and known to be occasionally unstable

    if (CASE_COUNT > 50000) {
        stop("The foverlaps() solution tends to crash R with large case counts. Not running.");
    }

    setDT(cases_table)[, created_dupe := created];
    setkey(cases_table, created, censored);

    foverlaps_table <- foverlaps(cases_table[, c("id","created","created_dupe"), with=FALSE],
                                 cases_table[, c("id","created","censored"), with=FALSE],
                                 by.x=c("created","created_dupe"))[order(i.id), .N-1, by=i.id];

    foverlaps_table <- dplyr::rename(foverlaps_table, id=i.id, open_cases_at_creation=V1);

    solution_2_table <<- as.data.table(foverlaps_table, key="id");
} # End solution_2_function

solution_3_function <- function (cases_table) {
    # SOLUTION 3: Using data.table aggregation instead of dplyr::summarize

    # Idea suggested by @jangorecki
    # (https://stackoverflow.com/users/2490497/jangorecki)

    # Count the instances where other cases were created before
    # and censored after each case using vectorized sum() with data.table aggregation
    cases_table_aggregated <- cases_table[order(id), sum((cases_table$created < created &
                                                           cases_table$censored > created)), by=id];

    solution_3_table <<- as.data.table(dplyr::rename(cases_table_aggregated,
                                                     open_cases_at_creation=V1), key="id");
} # End solution_3_function

solution_4_function <- function (cases_table) {
    # SOLUTION 4: Using IRanges package

    # Adapted from solution suggested by @alexis_laz
    # (https://stackoverflow.com/users/2414948/alexis-laz)

    # The IRanges package generates ranges efficiently, intended for genome sequencing
    # but working perfectly well on this data, since POSIXct values are numeric-representable
    solution_4_table <<- data.table(id = cases_table$id,
                                    open_cases_at_creation = countOverlaps(IRanges(cases_table$created,
                                                                                   cases_table$created),
                                                                           IRanges(cases_table$created,
                                                                                   cases_table$censored)) - 1,
                                    key="id");
} # End solution_4_function

solution_5_function <- function (cases_table) {
    # SOLUTION 5: Using data.table::frank()

    # Adapted from solution suggested by @danas.zuokas
    # (https://stackoverflow.com/users/1249481/danas-zuokas)

    n <- CASE_COUNT;

    # For every case compute the number of other cases
    # with `created` less than `created` of other cases
    r1 <- data.table::frank(c(cases_table[, created], cases_table[, created]),
                            ties.method = 'first')[1:n];

    # For every case compute the number of other cases
    # with `censored` less than `created`
    r2 <- data.table::frank(c(cases_table[, created], cases_table[, censored]),
                            ties.method = 'first')[1:n];

    solution_5_table <<- data.table(id = cases_table$id,
                                    open_cases_at_creation = r1 - r2,
                                    key="id");
} # End solution_5_function

# Execute user specified functions;
if (RUN_SOLUTION_1)
    solution_1_timing <- system.time(solution_1_function(cases_table));
if (RUN_SOLUTION_2) {
    solution_2_timing <- try(system.time(solution_2_function(cases_table)));
    # created_dupe only exists if solution_2_function ran far enough to create it
    if ("created_dupe" %in% names(cases_table))
        cases_table <- select(cases_table, -created_dupe);
}
if (RUN_SOLUTION_3)
    solution_3_timing <- system.time(solution_3_function(cases_table));
if (RUN_SOLUTION_4)
    solution_4_timing <- system.time(solution_4_function(cases_table));
if (RUN_SOLUTION_5)
    solution_5_timing <- system.time(solution_5_function(cases_table));

# Check generated tables for comparison
if (RUN_SOLUTION_1 && RUN_SOLUTION_2 && class(solution_2_timing) != "try-error") {
    same_check1_2 <- all(solution_1_table$open_cases_at_creation == solution_2_table$open_cases_at_creation);
} else {same_check1_2 <- TRUE;}
if (RUN_SOLUTION_1 && RUN_SOLUTION_3) {
    same_check1_3 <- all(solution_1_table$open_cases_at_creation == solution_3_table$open_cases_at_creation);
} else {same_check1_3 <- TRUE;}
if (RUN_SOLUTION_1 && RUN_SOLUTION_4) {
    same_check1_4 <- all(solution_1_table$open_cases_at_creation == solution_4_table$open_cases_at_creation);
} else {same_check1_4 <- TRUE;}
if (RUN_SOLUTION_1 && RUN_SOLUTION_5) {
    same_check1_5 <- all(solution_1_table$open_cases_at_creation == solution_5_table$open_cases_at_creation);
} else {same_check1_5 <- TRUE;}
if (RUN_SOLUTION_2 && RUN_SOLUTION_3 && class(solution_2_timing) != "try-error") {
    same_check2_3 <- all(solution_2_table$open_cases_at_creation == solution_3_table$open_cases_at_creation);
} else {same_check2_3 <- TRUE;}
if (RUN_SOLUTION_2 && RUN_SOLUTION_4 && class(solution_2_timing) != "try-error") {
    same_check2_4 <- all(solution_2_table$open_cases_at_creation == solution_4_table$open_cases_at_creation);
} else {same_check2_4 <- TRUE;}
if (RUN_SOLUTION_2 && RUN_SOLUTION_5 && class(solution_2_timing) != "try-error") {
    same_check2_5 <- all(solution_2_table$open_cases_at_creation == solution_5_table$open_cases_at_creation);
} else {same_check2_5 <- TRUE;}
if (RUN_SOLUTION_3 && RUN_SOLUTION_4) {
    same_check3_4 <- all(solution_3_table$open_cases_at_creation == solution_4_table$open_cases_at_creation);
} else {same_check3_4 <- TRUE;}
if (RUN_SOLUTION_3 && RUN_SOLUTION_5) {
    same_check3_5 <- all(solution_3_table$open_cases_at_creation == solution_5_table$open_cases_at_creation);
} else {same_check3_5 <- TRUE;}
if (RUN_SOLUTION_4 && RUN_SOLUTION_5) {
    same_check4_5 <- all(solution_4_table$open_cases_at_creation == solution_5_table$open_cases_at_creation);
} else {same_check4_5 <- TRUE;}

same_check <- all(same_check1_2, same_check1_3, same_check1_4, same_check1_5,
                  same_check2_3, same_check2_4, same_check2_5, same_check3_4,
                  same_check3_5, same_check4_5);

# Report summary of results to user
cat("This execution was for", CASE_COUNT, "cases.\n",
    "It is", same_check, "that all solutions match.\n");
if (RUN_SOLUTION_1)
    cat("The dplyr::summarize() solution took", solution_1_timing[3], "seconds.\n");
if (RUN_SOLUTION_2 && class(solution_2_timing) != "try-error")
    cat("The data.table::foverlaps() solution took", solution_2_timing[3], "seconds.\n");
if (RUN_SOLUTION_3)
    cat("The data.table aggregation solution took", solution_3_timing[3], "seconds.\n");
if (RUN_SOLUTION_4)
    cat("The IRanges solution took", solution_4_timing[3], "seconds.\n");
if (RUN_SOLUTION_5)
    cat("The data.table::frank() solution took", solution_5_timing[3], "seconds.\n\n");

The data.table::foverlaps() solution is faster with fewer cases (fewer than roughly 5,000; it also depends on the randomness, not just n, since it is optimized with a binary search). The dplyr::summarize() solution is faster with more cases (more than roughly 5,000). Much beyond 100,000 cases, neither solution is viable because both are too slow.

EDIT: Added a third solution, based on the idea suggested by @jangorecki, that uses data.table aggregation instead of dplyr::summarize() and is otherwise similar to the dplyr solution. It is the fastest solution up to roughly 50,000 cases. Above 50,000 cases, the dplyr::summarize() solution is slightly faster, but not by much. Sadly, for 1 million cases it is still impractical.

EDIT2: Added a fourth solution, adapted from the solution suggested by @alexis_laz, which uses the IRanges package and its countOverlaps function. It is significantly faster than the other three solutions. At 50,000 cases it was almost 400% faster than solutions 1 and 3.

EDIT3: Modified the case-generation function to properly exercise the "censored" condition. Thanks to @jangorecki for catching the limitation of the previous version.

EDIT4: Rewrote this to let the user select which solutions to execute, and to use system.time() with garbage collection before each execution for more accurate timing (per @jangorecki's astute observation). Also added condition checks for crash-prone cases.

EDIT5: Added a fifth solution, adapted from the solution suggested by @danas.zuokas using rank(). My experimentation suggests it is always at least an order of magnitude slower than the other solutions. At 10,000 cases it takes 44 seconds, versus 3.5 seconds for dplyr::summarize and 0.36 seconds for the IRanges solution.

FINAL EDIT: I modified solution 5 slightly as suggested by @danas.zuokas, and to match @Khashaa's observation about types. I set the type to integer in the dateTime generation function, which dramatically speeds up rank because it operates on integers or doubles instead of dateTime objects (it improves the speed of the other functions too, but not as drastically). With some testing, setting ties.method='first' yields results consistent with the intent. data.table::frank is faster than both base::rank and IRanges::rank. bit64::rank is the fastest, but it seems to handle ties differently from data.table::frank and I couldn't get it to handle them as needed. Once bit64 is loaded, it masks a large number of types and functions, changing the results of data.table::frank along the way. The specific reasons why are beyond the scope of this question.
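For anyone who wants to try the type tweak described in the FINAL EDIT above, a minimal sketch of the idea (my own illustration, not the author's exact change; the created_num / censored_num names are arbitrary) is to rank the numeric representation of the dateTimes, since POSIXct is stored as seconds since the epoch and as.numeric() therefore loses nothing:

# Illustrative sketch: rank on numeric seconds-since-epoch instead of POSIXct objects
created_num  <- as.numeric(cases_table$created);
censored_num <- as.numeric(cases_table$censored);

n  <- nrow(cases_table);
r1 <- data.table::frank(c(created_num, created_num),  ties.method = "first")[1:n];
r2 <- data.table::frank(c(created_num, censored_num), ties.method = "first")[1:n];
open_cases_at_creation <- r1 - r2;

As the POST END NOTE below points out, data.table::frank handles POSIXct directly, so this conversion mainly matters when using base::rank or IRanges::rank instead.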

POST END NOTE: It turns out that data.table::frank handles POSIXct dateTimes efficiently, while neither base::rank nor IRanges::rank appears to. Therefore, even the as.numeric (or as.integer) type setting isn't needed with data.table::frank, there is no loss of precision from the conversion, and there are fewer ties.method discrepancies.
Thank you to everyone who contributed! I learned a lot! Much appreciated! :)
Credit will be included in my source code.

END NOTE: This question is a refined and clarified version of More efficient method for counting open cases as of creation time of each case, with example code that is easier to use and easier to read. I separated it out here so as not to overwhelm the original post with too many edits, and to simplify the creation of the large number of dateTime pairs in the example code. That way, you don't have to work as hard to answer. Thank you again!

Best Answer

The answer has been updated based on comments from the question's author.

I would suggest a solution based on ranks. The table is created either as in a follow up to this question, or using the dateTime pair generation function from this question. Both should work.

n <- cases_table[, .N]

# For every case compute the number of other cases
# with `created` less than `created` of other cases
r1 <- data.table::frank(c(cases_table[, created], cases_table[, created]),
                        ties.method = 'first')[1:n]

# For every case compute the number of other cases
# with `censored` less than `created`
r2 <- data.table::frank(c(cases_table[, created], cases_table[, censored]),
                        ties.method = 'first')[1:n]

Taking the difference r1 - r2 gives the result (the -1 is not needed thanks to ties.method='first', which eliminates the rank contributed by a case's own created value). In terms of efficiency, this only requires finding the ranks of vectors whose length is the number of rows in cases_table. data.table::frank handles POSIXct dateTime objects as fast as numeric objects (unlike base::rank), so no type conversion is required.
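As a quick sanity check (my own addition, not part of the original answer), the rank-difference result can be compared against a direct brute-force count on a small generated table; brute_counts is just an illustrative name:

# Brute-force count: for each case, directly count the other cases
# created before it and censored after it
crt <- as.numeric(cases_table[, created]);
cns <- as.numeric(cases_table[, censored]);
brute_counts <- sapply(crt, function(ct) sum(crt < ct & cns > ct));

# Should be TRUE (barring exact ties in the randomly generated dateTimes)
all(brute_counts == r1 - r2);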

关于r - 在大数据集中每个案例提交时计算未结案例的有效方法,我们在Stack Overflow上找到一个类似的问题: https://stackoverflow.com/questions/34245295/
