
r - Parallel foreach shared memory in R


Question:

I have a large matrix c loaded into RAM. My goal is to give several parallel processes read-only access to it. However, when I create the connections using doSNOW, doMPI, big.matrix, etc., the amount of RAM used increases dramatically.

Is there a way to properly create shared memory that all the processes can read from, without creating a local copy of all the data?

Example:

libs <- function(libraries){ # installs missing libraries and then loads them
  for (lib in libraries){
    if (!is.element(lib, .packages(all.available = TRUE))) {
      install.packages(lib)
    }
    library(lib, character.only = TRUE)
  }
}

libra <- list("foreach", "parallel", "doSNOW", "bigmemory")
libs(libra)

# create a matrix of approximately 1 GB
c <- matrix(runif(10000^2), 10000, 10000)
# convert it to a big.matrix
x <- as.big.matrix(c)
# get a descriptor of the matrix
mdesc <- describe(x)
# create the required connections
cl <- makeCluster(detectCores())
registerDoSNOW(cl)
out <- foreach(linID = 1:10, .combine = c) %dopar% {
  # load bigmemory on the worker
  require(bigmemory)
  # attach the matrix via shared memory??
  m <- attach.big.matrix(mdesc)
  # dummy expression to test data acquisition
  c <- m[1, 1]
}
closeAllConnections()

Memory:
[Image: RAM usage during foreach]
In the image above, you can see that memory usage grows substantially until foreach finishes, and only then is it released.
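
One way to quantify this from inside the workers, rather than watching a system monitor, is to have every task return what its own gc() reports (a minimal sketch, not part of the original question; it reuses the cluster and mdesc set up above, and the numbers are platform-dependent):

# sketch: return each worker's memory use in MB instead of the dummy value
mem_per_worker <- foreach(linID = 1:10, .combine = c) %dopar% {
  require(bigmemory)
  m <- attach.big.matrix(mdesc)
  c <- m[1, 1]       # same dummy read as in the loop above
  sum(gc()[, 2])     # total MB currently used on this worker, as reported by gc()
}
mem_per_worker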

Best answer

I think the solution to the problem can be seen in a post by Steve Weston, the author of the foreach package, here. There he states:

The doParallel package will auto-export variables to the workers that are referenced in the foreach loop.



So I think the problem is that in your code the big matrix c is referenced in the assignment c <- m[1,1]. Just try xyz <- m[1,1] instead and see what happens.
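
As an aside, if the name c really has to appear inside the loop body, foreach also accepts a .noexport argument listing variables that must not be auto-exported (a sketch, not part of the original answer; it assumes the doSNOW backend honours .noexport the same way doParallel does):

# sketch: keep the original assignment target but exclude `c` from auto-export
out <- foreach(linID = 1:10, .combine = c, .noexport = "c") %dopar% {
  require(bigmemory)
  m <- attach.big.matrix(mdesc)
  c <- m[1, 1]    # `c` appears here, but .noexport keeps the 1 GB matrix on the master
}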

Here is an example with a file-backed big.matrix:
# create a matrix of approximately 1 GB
n <- 10000
m <- 10000
c <- matrix(runif(n * m), n, m)
# convert it to a file-backed big.matrix
x <- as.big.matrix(x = c, type = "double",
                   separated = FALSE,
                   backingfile = "example.bin",
                   descriptorfile = "example.desc")
# get a descriptor of the matrix
mdesc <- describe(x)
# create the required connections
cl <- makeCluster(detectCores())
registerDoSNOW(cl)

## 1) No referencing
out <- foreach(linID = 1:4, .combine = c) %dopar% {
  t <- attach.big.matrix("example.desc")
  for (i in seq_len(30L)) {
    for (j in seq_len(m)) {
      y <- t[i, j]
    }
  }
  return(0L)
}

[Image: RAM usage for case 1, no referencing]
## 2) Referencing
out <- foreach(linID = 1:4, .combine = c) %dopar% {
  invisible(c) ## c is referenced and thus exported to workers
  t <- attach.big.matrix("example.desc")
  for (i in seq_len(30L)) {
    for (j in seq_len(m)) {
      y <- t[i, j]
    }
  }
  return(0L)
}
closeAllConnections()

[Image: RAM usage for case 2, referencing]
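
For completeness, applied to the loop from the question the fix is simply to give the dummy read a name other than c (a minimal sketch using the shared-memory descriptor mdesc from the question):

out <- foreach(linID = 1:10, .combine = c) %dopar% {
  require(bigmemory)
  m <- attach.big.matrix(mdesc)
  xyz <- m[1, 1]   # no reference to the large matrix `c`, so nothing big is exported
  xyz
}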

On the topic of parallel foreach shared memory in R, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/31575585/
