r - Efficiently dealing with values duplicated within by groups using data.table


What is the preferred way to get a single value out of a column (variable) whose values are duplicated within each group (i.e., the same value in every row of the group)? Should I use variable[1], or should I include that variable in the by statement and use .BY$variable? Assume I want the return value to include variable as a column.

From the tests below, it seems fairly clear that putting the additional variable in the by statement slows things down, even net of the cost of keying by that new variable (or of using a trick to tell data.table that no additional keying is needed). Why does the additional, already-keyed by variable slow things down?

I suppose I had hoped that including already-keyed by variables would be a convenient syntactic trick for including those variables in the returned data.table without explicitly naming them in the j statement, but that appears inadvisable, since there is some kind of overhead associated with the additional by variable even when it is already keyed. So my question is: what causes this overhead?

Some example data:

library(data.table)
n <- 1e8
y <- data.table(sample(1:5,n,replace=TRUE),rnorm(n),rnorm(n))
y[,sumV2:=sum(V2),keyby=V1]

Timings show that the approach using variable[1] (in this case, sumV2[1]) is faster.

x <- copy(y)
system.time(x[, list(out=sum(V3*V2)/sumV2[1],sumV2[1]),keyby=V1])
system.time(x[, list(out=sum(V3*V2)/.BY$sumV2),keyby=list(V1,sumV2)])

I suppose this isn't surprising, since data.table has no way of knowing that the groups defined by setkey(V1) and by setkey(V1,sumV2) are in fact identical.

What does surprise me is that even when the data.table is keyed on setkey(V1,sumV2) (and we completely ignore the time needed to set the new key), using sumV2[1] is still faster. Why is that?

x <- copy(y)
setkey(x,V1,sumV2)
system.time(x[, list(out=sum(V3*V2)/sumV2[1],sumV2[1]),by=V1])
system.time(x[, list(out=sum(V3*V2)/.BY$sumV2),by=list(V1,sumV2)])

Additionally, the time needed to do setkey(x,V1,sumV2) is non-negligible. Is there any way to trick data.table into skipping the actual re-keying of x, by telling it that the key has not substantively changed?

x <- copy(y)
system.time(setkey(x,V1,sumV2))

Answering my own question: it seems we can skip the sort when setting the key by simply assigning the "sorted" attribute. Is this allowed? Will it break things?

x <- copy(y)
system.time({
setattr(x, "sorted", c("V1","sumV2"))
x[, list(out=sum(V3*V2)/.BY$sumV2),by=list(V1,sumV2)]
})
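
One caution worth adding here: the "sorted" attribute is a promise that data.table trusts without re-checking, so grouped results would be silently wrong if the rows were not truly in that order. A minimal base-R sanity check (just a sketch, and slow at this size):

# order() is stable, so on rows already sorted by (V1, sumV2) it returns 1:n
ord <- order(x$V1, x$sumV2)
stopifnot(identical(ord, seq_len(nrow(x))))
key(x) # after the trick, this reports c("V1", "sumV2")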

I don't know whether this is bad practice or likely to break things. But the setattr trick is considerably faster than explicit keying:

x <- copy(y)
system.time({
setkey(x,V1,sumV2)
x[, list(out=sum(V3*V2)/.BY$sumV2),by=list(V1,sumV2)]
})

But even the setattr trick combined with using sumV2 in the by statement is still not as fast as leaving sumV2 out of the by statement entirely:

x <- copy(y)
system.time(x[, list(out=sum(V3*V2)/sumV2[1],sumV2[1]),keyby=V1])

It seems to me that setting the key via the attribute and using sumV2 as a length-1 by variable within each group ought to be faster than keying on V1 alone and using sumV2[1]. If sumV2 is not specified as a by variable, then the whole vector of repeated values in sumV2 has to be generated for each group before being subset down to sumV2[1]. Compare that with when sumV2 is a by variable, where sumV2 is only a length-1 vector in each group. Obviously my reasoning here is incorrect. Can anyone explain why? Why is sumV2[1] the fastest option, even compared to making sumV2 a by variable after the setattr trick?
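
To make that reasoning concrete: inside j, .BY carries the length-1 grouping values, whereas an ordinary grouped column arrives as the full per-group vector that then has to be subset with [1]. A small sketch on the data above:

# .N is the group size; sumV2 arrives as a vector of length .N in each group,
# while .BY$V1 is always length 1
y[, list(group_len = .N, col_len = length(sumV2), by_len = length(.BY$V1)), by = V1]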

As an aside, I was surprised to learn that using attr<- is no slower than setattr (both are instantaneous, implying no copy at all). This runs counter to my understanding that base R foo<- replacement functions make copies of the data.

x <- copy(y)
system.time(setattr(x, "sorted", c("V1","sumV2")))
x <- copy(y)
system.time(attr(x,"sorted") <- c("V1","sumV2"))
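
Base R's tracemem() offers a direct way to check for copies: it prints a message whenever the traced object is duplicated. A minimal sketch (behavior may vary by R version; with reference counting in recent R, neither assignment should trigger a duplication here, because x is not shared):

x <- copy(y)
tracemem(x)
setattr(x, "sorted", c("V1","sumV2")) # silent: modifies the attribute by reference
attr(x, "sorted") <- c("V1","sumV2") # also silent here, consistent with the timing above
untracemem(x)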

Relevant sessionInfo() for this question:

data.table version 1.12.2
R version 3.5.3

Best Answer

Well, I don't have a great technical answer, but I think I've figured this out conceptually with the help of options(datatable.verbose=TRUE).

Create the data

library(data.table)
n <- 1e8

y_unkeyed_5groups <- data.table(sample(1:5,n,replace=TRUE),rnorm(n),rnorm(n))
y_unkeyed_5groups[,sumV2:=sum(V2),keyby=V1]
y_unkeyed_10000groups <- data.table(sample(1:10000,n,replace=TRUE),rnorm(n),rnorm(n))
y_unkeyed_10000groups[,sumV2:=sum(V2),keyby=V1]

The slow way

x <- copy(y)
system.time({
setattr(x, "sorted", c("V1","sumV2"))
x[, list(out=sum(V3*V2)/.BY$sumV2),by=list(V1,sumV2)]
})
# Detected that j uses these columns: V3,V2
# Finding groups using uniqlist on key ... 1.050s elapsed (1.050s cpu)
# Finding group sizes from the positions (can be avoided to save RAM) ... 0.000s elapsed (0.000s cpu)
# lapply optimization is on, j unchanged as 'list(sum(V3 * V2)/.BY$sumV2)'
# GForce is on, left j unchanged
# Old mean optimization is on, left j unchanged.
# Making each group and running j (GForce FALSE) ...
# memcpy contiguous groups took 0.305s for 6 groups
# eval(j) took 0.254s for 6 calls
# 0.560s elapsed (0.510s cpu)
# user system elapsed
# 1.81 0.09 1.72

The fast way

x <- copy(y)
system.time(x[, list(out=sum(V3*V2)/sumV2[1],sumV2[1]),keyby=V1])
# Detected that j uses these columns: V3,V2,sumV2
# Finding groups using uniqlist on key ... 0.060s elapsed (0.070s cpu)
# Finding group sizes from the positions (can be avoided to save RAM) ... 0.000s elapsed (0.000s cpu)
# lapply optimization is on, j unchanged as 'list(sum(V3 * V2)/sumV2[1], sumV2[1])'
# GForce is on, left j unchanged
# Old mean optimization is on, left j unchanged.
# Making each group and running j (GForce FALSE) ...
# memcpy contiguous groups took 0.328s for 6 groups
# eval(j) took 0.291s for 6 calls
# 0.610s elapsed (0.580s cpu)
# user system elapsed
# 1.08 0.08 0.82

The finding-groups part is what causes the difference. I'm guessing that what happens here is that setting a key really just sorts (I should have guessed as much from how the attribute is named!) and does nothing to define where groups begin and end. So even though data.table knows that sumV2 is sorted, it doesn't know that the values are all identical within a group, and it still has to find where the groups in sumV2 begin and end.
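
This is visible in the structure itself: the key is stored as a plain character attribute naming the sort columns, with no record of group positions. A small sketch:

z <- data.table(V1 = c(1L,1L,2L,2L), sumV2 = c(5,5,7,7))
setkey(z, V1, sumV2)
attributes(z)$sorted # c("V1","sumV2"): just column names, no group boundaries
key(z) # the same attribute via the accessor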

My guess is that it would be technically possible to write a version of data.table where keying not only sorts but also stores the start/end rows of each group in the keyed variables, but that this could potentially take up a lot of memory for data.tables with many groups.
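
With the existing API you can emulate that kind of caching by hand, computing each group's first and last row positions once and reusing them; the bounds table below is my own construction, not something data.table stores internally:

z <- data.table(g = sample(1:5, 1e6, replace=TRUE))
setkey(z, g)
# .I inside j gives the row numbers of each group within the whole table
bounds <- z[, list(start = .I[1], end = .I[.N]), by = g]
bounds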

Knowing this, it might seem that the recommendation would be: don't repeat the same by statement over and over, and instead do everything you need in a single by statement. That's probably good advice overall, but less so when there are few groups. See the following counterexample:

I rewrote this in what I believed to be the absolute fastest possible way using data.table (only one by statement, and making use of GForce):

library(data.table)
n <- 1e8
y_unkeyed_5groups <- data.table(sample(1:5,n, replace=TRUE),rnorm(n),rnorm(n))
y_unkeyed_10000groups <- data.table(sample(1:10000,n, replace=TRUE),rnorm(n),rnorm(n))

x <- copy(y_unkeyed_5groups)
system.time({
x[, product:=V3*V2]
outDT <- x[,list(sumV2=sum(V2),sumProduct=sum(product)),keyby=V1]
outDT[,`:=`(out=sumProduct/sumV2,sumProduct=NULL) ]
setkey(x,V1)
x[outDT,sumV2:=sumV2,all=TRUE] # note: there is no `all` argument; it likely partial-matches allow.cartesian
x[,product:=NULL]
outDT
})

# Detected that j uses these columns: V3,V2
# Assigning to all 100000000 rows
# Direct plonk of unnamed RHS, no copy.
# Detected that j uses these columns: V2,product
# Finding groups using forderv ... 0.350s elapsed (0.810s cpu)
# Finding group sizes from the positions (can be avoided to save RAM) ... 0.000s elapsed (0.000s cpu)
# lapply optimization is on, j unchanged as 'list(sum(V2), sum(product))'
# GForce optimized j to 'list(gsum(V2), gsum(product))'
# Making each group and running j (GForce TRUE) ... 1.610s elapsed (4.550s cpu)
# Detected that j uses these columns: sumProduct,sumV2
# Assigning to all 5 rows
# RHS for item 1 has been duplicated because NAMED is 3, but then is being plonked. length(values)==2; length(cols)==2)
# forder took 0.98 sec
# reorder took 3.35 sec
# Starting bmerge ...done in 0.000s elapsed (0.000s cpu)
# Detected that j uses these columns: sumV2
# Assigning to 100000000 row subset of 100000000 rows
# Detected that j uses these columns: product
# Assigning to all 100000000 rows
# user system elapsed
# 11.00 1.75 5.33


x2 <- copy(y_unkeyed_5groups)
system.time({
x2[,sumV2:=sum(V2),keyby=V1]
outDT2 <- x2[, list(sumV2=sumV2[1],out=sum(V3*V2)/sumV2[1]),keyby=V1]
})
# Detected that j uses these columns: V2
# Finding groups using forderv ... 0.310s elapsed (0.700s cpu)
# Finding group sizes from the positions (can be avoided to save RAM) ... 0.000s elapsed (0.000s cpu)
# lapply optimization is on, j unchanged as 'sum(V2)'
# Old mean optimization is on, left j unchanged.
# Making each group and running j (GForce FALSE) ...
# collecting discontiguous groups took 0.714s for 5 groups
# eval(j) took 0.079s for 5 calls
# 1.210s elapsed (1.160s cpu)
# setkey() after the := with keyby= ... forder took 1.03 sec
# reorder took 3.21 sec
# 1.600s elapsed (3.700s cpu)
# Detected that j uses these columns: sumV2,V3,V2
# Finding groups using uniqlist on key ... 0.070s elapsed (0.070s cpu)
# Finding group sizes from the positions (can be avoided to save RAM) ... 0.000s elapsed (0.000s cpu)
# lapply optimization is on, j unchanged as 'list(sumV2[1], sum(V3 * V2)/sumV2[1])'
# GForce is on, left j unchanged
# Old mean optimization is on, left j unchanged.
# Making each group and running j (GForce FALSE) ...
# memcpy contiguous groups took 0.347s for 5 groups
# eval(j) took 0.265s for 5 calls
# 0.630s elapsed (0.620s cpu)
# user system elapsed
# 6.57 0.98 3.99

all.equal(x,x2)
# TRUE
all.equal(outDT,outDT2)
# TRUE

OK, so it turns out that when there are only 5 groups, the efficiency gained from not repeating by statements and from using GForce doesn't matter much. But for larger numbers of groups this does make a difference (although I haven't written this in a way that separates the benefit of using only one by statement without GForce from the benefit of using GForce with multiple by statements):

x <- copy(y_unkeyed_10000groups)
system.time({
x[, product:=V3*V2]
outDT <- x[,list(sumV2=sum(V2),sumProduct=sum(product)),keyby=V1]
outDT[,`:=`(out=sumProduct/sumV2,sumProduct=NULL) ]
setkey(x,V1)
x[outDT,sumV2:=sumV2,all=TRUE] # same partial-match caveat as above
x[,product:=NULL]
outDT
})
#
# Detected that j uses these columns: V3,V2
# Assigning to all 100000000 rows
# Direct plonk of unnamed RHS, no copy.
# Detected that j uses these columns: V2,product
# Finding groups using forderv ... 0.740s elapsed (1.220s cpu)
# Finding group sizes from the positions (can be avoided to save RAM) ... 0.000s elapsed (0.000s cpu)
# lapply optimization is on, j unchanged as 'list(sum(V2), sum(product))'
# GForce optimized j to 'list(gsum(V2), gsum(product))'
# Making each group and running j (GForce TRUE) ... 0.810s elapsed (2.390s cpu)
# Detected that j uses these columns: sumProduct,sumV2
# Assigning to all 10000 rows
# RHS for item 1 has been duplicated because NAMED is 3, but then is being plonked. length(values)==2; length(cols)==2)
# forder took 1.97 sec
# reorder took 11.95 sec
# Starting bmerge ...done in 0.000s elapsed (0.000s cpu)
# Detected that j uses these columns: sumV2
# Assigning to 100000000 row subset of 100000000 rows
# Detected that j uses these columns: product
# Assigning to all 100000000 rows
# user system elapsed
# 18.37 2.30 7.31

x2 <- copy(y_unkeyed_10000groups)
system.time({
x2[,sumV2:=sum(V2),keyby=V1]
outDT2 <- x2[, list(sumV2=sumV2[1],out=sum(V3*V2)/sumV2[1]),keyby=V1]
})

# Detected that j uses these columns: V2
# Finding groups using forderv ... 0.770s elapsed (1.490s cpu)
# Finding group sizes from the positions (can be avoided to save RAM) ... 0.000s elapsed (0.000s cpu)
# lapply optimization is on, j unchanged as 'sum(V2)'
# Old mean optimization is on, left j unchanged.
# Making each group and running j (GForce FALSE) ...
# collecting discontiguous groups took 1.792s for 10000 groups
# eval(j) took 0.111s for 10000 calls
# 3.960s elapsed (3.890s cpu)
# setkey() after the := with keyby= ... forder took 1.62 sec
# reorder took 13.69 sec
# 4.660s elapsed (14.4s cpu)
# Detected that j uses these columns: sumV2,V3,V2
# Finding groups using uniqlist on key ... 0.070s elapsed (0.070s cpu)
# Finding group sizes from the positions (can be avoided to save RAM) ... 0.000s elapsed (0.000s cpu)
# lapply optimization is on, j unchanged as 'list(sumV2[1], sum(V3 * V2)/sumV2[1])'
# GForce is on, left j unchanged
# Old mean optimization is on, left j unchanged.
# Making each group and running j (GForce FALSE) ...
# memcpy contiguous groups took 0.395s for 10000 groups
# eval(j) took 0.284s for 10000 calls
# 0.690s elapsed (0.650s cpu)
# user system elapsed
# 20.49 1.67 10.19

all.equal(x,x2)
# TRUE
all.equal(outDT,outDT2)
# TRUE

More generally, data.table is blazingly fast, but to extract the fastest, most efficient computation that makes the best use of the underlying C code, you need to pay special attention to data.table's internal workings. I recently learned about GForce optimization in data.table: it seems that specific forms of the j statement (involving simple functions such as mean and sum) are parsed and executed directly in C when there is a by statement.
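
A hedged illustration of when GForce kicks in, using a fresh small table (the exact log text may differ across data.table versions):

options(datatable.verbose = TRUE)
d <- data.table(g = sample(1:5, 1e6, replace=TRUE), v = rnorm(1e6))
d[, list(s = sum(v)), by = g] # log shows: GForce optimized j to 'list(gsum(v))'
d[, list(s = sum(v)/v[1]), by = g] # more complex j: "GForce is on, left j unchanged"
options(datatable.verbose = FALSE)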

A related question on Stack Overflow: https://stackoverflow.com/questions/58142097/
