R - Efficient way to test whether a pair of vectors is disjoint


I want to know whether two vectors have any elements in common. I don't care what those elements are, how many there are, or where they appear in either vector. I just need a simple, efficient function EIC(vec1, vec2) that returns TRUE if some element occurs in both vec1 and vec2, and FALSE if the two vectors have no elements in common. We can also assume that neither vec1 nor vec2 contains NA, although either may contain duplicate values.

I have thought of five ways to do this, but all of them seem inefficient:

EIC.1 <- function(vec1, vec2) length(intersect(vec1, vec2)) > 0
# I want a function that will stop when it finds the first
# common element between the vectors, and return TRUE. The
# intersect function will continue on and check whether there are
# any other common elements.

EIC.2 <- function(vec1, vec2) any(vec1 %in% vec2)

EIC.3 <- function(vec1, vec2) any(!is.na(match(vec1, vec2)))
# the match function goes to the trouble of finding the position
# of all matches; I don't need the position but just want to know
# if any exist

EIC.4 <- function(vec1, vec2) {
  uvec1 <- unique(vec1)
  uvec2 <- unique(vec2)
  length(unique(c(uvec1, uvec2))) < length(uvec1) + length(uvec2)
}

EIC.5 <- function(vec1, vec2) !!anyDuplicated(c(unique(vec1), unique(vec2)))
# per https://stackoverflow.com/questions/5263498/how-to-test-whether-a-vector-contains-repetitive-elements#comment5931428_5263593
# I suspect this is the most efficient of the five, because
# anyDuplicated will stop looking when it comes to the first one,
# but I'm not sure about using !! to coerce to boolean type
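
For reference (this note is not part of the original question): anyDuplicated() returns the integer position of the first duplicated element, or 0L when there are none, so !! simply collapses that integer to a logical; comparing against 0 would be equivalent.

anyDuplicated(c(1, 2, 3))          # 0L, no duplicates
anyDuplicated(c(1, 2, 2, 1))       # 3L, first duplicate found at position 3
!!anyDuplicated(c(1, 2, 2, 1))     # TRUE
anyDuplicated(c(1, 2, 2, 1)) > 0   # TRUE, an equivalent and arguably clearer coercion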

I will be using very long vectors (with no NAs, as noted above) and running this function millions of times, which is why I am looking for an efficient approach. Here is some test data:
v1 <- c(9, 8, 75, 62)
v2 <- c(20, 75, 341, 987, 8)
v3 <- c(154, 62, 62, 143, 154, 95)
v4 <- c(12, 62, 12)

EIC <- EIC.1

EIC(v1, v2)
EIC(v1, v3)
EIC(v1, v4)
EIC(v2, v3)
EIC(v2, v4)
EIC(v3, v4)

The correct results are TRUE, TRUE, TRUE, FALSE, FALSE, TRUE.
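
As a quick sanity check (not in the original question), one can verify that all five candidates agree with these expected results on the test data above:

pairs <- list(list(v1, v2), list(v1, v3), list(v1, v4),
              list(v2, v3), list(v2, v4), list(v3, v4))
expected <- c(TRUE, TRUE, TRUE, FALSE, FALSE, TRUE)
for (f in list(EIC.1, EIC.2, EIC.3, EIC.4, EIC.5)) {
  stopifnot(identical(sapply(pairs, function(p) f(p[[1]], p[[2]])), expected))
}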

Best Answer

I tested the five functions I listed in the question (as @r2evans suggested). I used five different data sets, because I thought performance might differ depending on whether the vector pairs were mostly disjoint or mostly overlapping. (It turned out this made little difference for EIC.1 through EIC.4; EIC.5 did run more slowly when most pairs were disjoint.)

Here is how I generated the data sets:

n=1400L

a1 <- replicate(n, sample(5000000L, 500L, replace = TRUE), simplify = FALSE)
b1 <- replicate(n, sample(5000000L, 2500L, replace = TRUE), simplify = FALSE)
# two lists of vectors, to be compared pairwise, where about 22% of the pairs have elements in common

a2 <- replicate(n, sample(800000L, 500L, replace = TRUE), simplify = FALSE)
b2 <- replicate(n, sample(800000L, 2500L, replace = TRUE), simplify = FALSE)
# two lists of vectors, to be compared pairwise, where about 79% of the pairs have elements in common

a3 <- replicate(n, sample(3250000L, 1500L, replace = TRUE), simplify = FALSE)
b3 <- replicate(n, sample(3250000L, 1500L, replace = TRUE), simplify = FALSE)
# two lists of vectors, equal in length, to be compared pairwise, where about 50% of the pairs have elements in common
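
The overlap percentages mentioned in the comments above can be checked directly; a minimal sketch (not part of the original answer), using EIC.2 from the question:

mean(mapply(EIC.2, a1, b1))   # fraction of pairs with common elements, roughly 0.22
mean(mapply(EIC.2, a2, b2))   # roughly 0.79
mean(mapply(EIC.2, a3, b3))   # roughly 0.50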

Here are my results:
library(microbenchmark)

LL <- c(expression(sapply(1:n, function(k) EIC.1(v1[[k]], v2[[k]]))),
        expression(sapply(1:n, function(k) EIC.2(v1[[k]], v2[[k]]))),
        expression(sapply(1:n, function(k) EIC.3(v1[[k]], v2[[k]]))),
        expression(sapply(1:n, function(k) EIC.4(v1[[k]], v2[[k]]))),
        expression(sapply(1:n, function(k) EIC.5(v1[[k]], v2[[k]]))))

v1 <- a1
v2 <- b1
microbenchmark(list=LL)

Unit: milliseconds
expr min lq mean median uq max neval
sapply(1:n, function(k) EIC.1(v1[[k]], v2[[k]])) 110.59374 110.98621 113.5366 112.52576 114.4162 130.0801 100
sapply(1:n, function(k) EIC.2(v1[[k]], v2[[k]])) 97.18203 97.64194 101.4938 99.20129 101.6032 158.8913 100
sapply(1:n, function(k) EIC.3(v1[[k]], v2[[k]])) 96.98262 98.73502 100.5121 99.06029 100.6465 136.2520 100
sapply(1:n, function(k) EIC.4(v1[[k]], v2[[k]])) 255.85385 256.67103 262.0515 258.23332 265.1787 291.9498 100
sapply(1:n, function(k) EIC.5(v1[[k]], v2[[k]])) 230.49910 231.25642 236.2385 233.05208 237.7731 280.7453 100

v1 <- a2
v2 <- b2
microbenchmark(list=LL)

Unit: milliseconds
expr min lq mean median uq max neval
sapply(1:n, function(k) EIC.1(v1[[k]], v2[[k]])) 112.40455 112.78578 114.8205 114.4925 114.9898 126.2302 100
sapply(1:n, function(k) EIC.2(v1[[k]], v2[[k]])) 98.45717 98.87847 101.7272 100.5070 101.0258 134.8737 100
sapply(1:n, function(k) EIC.3(v1[[k]], v2[[k]])) 98.15024 98.59084 101.1340 100.2553 101.2907 131.4896 100
sapply(1:n, function(k) EIC.4(v1[[k]], v2[[k]])) 258.48673 259.18759 264.2449 260.1710 265.2686 307.0624 100
sapply(1:n, function(k) EIC.5(v1[[k]], v2[[k]])) 200.79988 201.52592 205.8434 203.3817 207.2203 244.2715 100

v1 <- a3
v2 <- b3
microbenchmark(list=LL)

Unit: milliseconds
expr min lq mean median uq max neval
sapply(1:n, function(k) EIC.1(v1[[k]], v2[[k]])) 134.0820 134.5529 135.4400 134.6922 135.6203 142.1575 100
sapply(1:n, function(k) EIC.2(v1[[k]], v2[[k]])) 119.7959 120.1119 122.3887 120.2729 122.2338 158.0306 100
sapply(1:n, function(k) EIC.3(v1[[k]], v2[[k]])) 119.7705 120.2145 122.3458 121.9361 122.4224 150.4227 100
sapply(1:n, function(k) EIC.4(v1[[k]], v2[[k]])) 257.0928 259.0730 263.2403 259.6671 263.7227 318.9604 100
sapply(1:n, function(k) EIC.5(v1[[k]], v2[[k]])) 226.4821 227.0798 230.2878 228.4882 231.3292 258.4599 100

v1 <- b1 # the longer vector is now vec1
v2 <- a1
microbenchmark(list=LL)

Unit: milliseconds
expr min lq mean median uq max neval
sapply(1:n, function(k) EIC.1(v1[[k]], v2[[k]])) 199.2799 201.3817 202.5054 201.6378 202.7534 214.8660 100
sapply(1:n, function(k) EIC.2(v1[[k]], v2[[k]])) 187.5226 187.9299 188.9177 188.1184 189.8541 196.1020 100
sapply(1:n, function(k) EIC.3(v1[[k]], v2[[k]])) 187.8891 188.3417 190.5641 190.1809 190.8307 219.4735 100
sapply(1:n, function(k) EIC.4(v1[[k]], v2[[k]])) 255.1007 255.8905 260.1282 256.8316 262.1560 288.4900 100
sapply(1:n, function(k) EIC.5(v1[[k]], v2[[k]])) 237.7409 238.4515 241.5251 239.9415 243.5631 266.5916 100

v1 <- b2
v2 <- a2
microbenchmark(list=LL)

Unit: milliseconds
expr min lq mean median uq max neval
sapply(1:n, function(k) EIC.1(v1[[k]], v2[[k]])) 198.8747 201.2476 202.1573 201.5215 202.3886 207.7772 100
sapply(1:n, function(k) EIC.2(v1[[k]], v2[[k]])) 185.5260 185.7983 187.8099 185.9842 188.3947 225.7553 100
sapply(1:n, function(k) EIC.3(v1[[k]], v2[[k]])) 185.8022 186.1824 188.8937 187.9226 188.6763 221.2442 100
sapply(1:n, function(k) EIC.4(v1[[k]], v2[[k]])) 257.6607 258.5063 262.3677 259.6778 264.6313 304.4813 100
sapply(1:n, function(k) EIC.5(v1[[k]], v2[[k]])) 230.5553 231.3261 233.9914 232.9138 235.0349 260.4950 100

In every case EIC.2 and EIC.3 were fastest (and very close to each other), with EIC.1 not far behind. Note, however, that both are more efficient when the shorter vector comes first. For example, with vec1 = a1 (vectors of length 500) and vec2 = b1 (length 2500), EIC.2 had a median of about 99 ms. But when I swapped them, so that vec1 = b1 and vec2 = a1, EIC.2 slowed to about 188 ms. So, for efficiency, it is worth checking which vector is longer before calling EIC.2, or rewriting EIC.2 so that it always tests [the shorter vector] %in% [the longer vector], as sketched below.
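
A minimal sketch of that rewrite (the name EIC.6 is mine, not from the original answer):

EIC.6 <- function(vec1, vec2) {
  # Always test [shorter vector] %in% [longer vector], the orientation
  # the benchmarks above suggest is faster.
  if (length(vec1) <= length(vec2)) any(vec1 %in% vec2) else any(vec2 %in% vec1)
}

# EIC.6 then gives the same answer regardless of argument order, e.g.
# EIC.6(a1[[1]], b1[[1]]) and EIC.6(b1[[1]], a1[[1]]) are identical.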

The question "R - Efficient way to test whether a pair of vectors is disjoint" is also discussed on Stack Overflow: https://stackoverflow.com/questions/52941157/
