r - How to optimize scraping with getURL() in R


I am trying to scrape all the bills from two pages on the website of the French lower chamber of parliament. The pages cover 2002-2012, and each lists fewer than 1,000 bills.

To do this, I scraped with getURL through the following loop:

b <- "http://www.assemblee-nationale.fr" # base
l <- c("12","13") # legislature id

lapply(l, FUN = function(x) {
print(data <- paste(b, x, "documents/index-dossier.asp", sep = "/"))

# scrape
data <- getURL(data); data <- readLines(tc <- textConnection(data)); close(tc)
data <- unlist(str_extract_all(data, "dossiers/[[:alnum:]_-]+.asp"))
data <- paste(b, x, data, sep = "/")
data <- getURL(data)
write.table(data,file=n <- paste("raw_an",x,".txt",sep="")); str(n)
})

Is there any way to optimize the getURL() function? I cannot seem to use concurrent downloads by passing the async = TRUE option, which gives me the same error every time:
Error in function (type, msg, asError = TRUE)  : 
Failed to connect to 0.0.0.12: No route to host
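
For reference, the concurrent call I am attempting should boil down to RCurl's getURIAsynchronous() interface, which getURL() is documented to delegate to when it receives several URIs with async = TRUE. A minimal sketch of that form, using the two index URLs from the loop above (this illustrates the intended call, not a fix for the "No route to host" failure):

library(RCurl)

# The two legislature index pages from the loop above
urls <- paste("http://www.assemblee-nationale.fr", c("12", "13"),
              "documents/index-dossier.asp", sep = "/")

# getURL(urls, async = TRUE) hands a vector of URIs to this function,
# which performs the downloads concurrently on one curl multi handle
pages <- getURIAsynchronous(urls)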

Any ideas? Thanks!

Best Answer

Try using mclapply from the multicore package instead of lapply.

"mclapply is a parallelized version of lapply, it returns a list of the same length as X, each element of which is the result of applying FUN to the corresponding element of X." (http://www.rforge.net/doc/packages/multicore/mclapply.html)



If that does not work, you can get better performance with the XML package. Functions such as xmlTreeParse use asynchronous calling.

"Note that xmlTreeParse does allow a hybrid style of processing that allows us to apply handlers to nodes in the tree as they are being converted to R objects. This is a style of event-driven or asynchronous calling." (http://www.inside-r.org/packages/cran/XML/docs/xmlEventParse)

Regarding r - How to optimize scraping with getURL() in R, a similar question can be found on Stack Overflow: https://stackoverflow.com/questions/10068350/
