
r - Web crawling in R with Rcrawler


I am trying to get the speeches listed as links on the page https://www.federalreserve.gov/newsevents/speeches.htm.

For example, the first title on the page is "Spontaneity and Order: Transparency, Accountability, and Fairness in Bank Supervision", and clicking it leads to the corresponding speech.

Can someone show me how to download all of these speeches, along with their titles and dates, using Rcrawler?

Thanks, Jairaj

Best Answer

That's a lot to cover in one question, but it's an interesting one, so I thought I'd take a crack at it anyway. Here's what I came up with.

Tidyverse/rvest version

First, I'll build this scraper with the Tidyverse, because that's what I'm used to for web scraping. So we'll start by loading the required packages.

library(tidyverse)
library(rvest)

One challenging aspect of this problem is that there is no single page with links to all of the speech pages. However, if we scrape the links from the main page, we find a set of links to pages that each contain all of the speeches for a given year. To be clear, I didn't see those links on the main page itself. Instead, I found them by scraping the main page: using html_nodes("a") to look at nodes of type "a", because inspecting the page in Chrome told me that's where the relevant links live; using html_attr("href") to extract the URLs from those results; and then looking at the output in the console to see which ones looked useful. Among those results I saw links of the form "/newsevents/speech/2020-speeches.htm" and "/newsevents/speech/2007speech.htm", and when I ran the same process on those links, I found I got links to the individual speeches.
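That exploratory step, roughly, looks like the sketch below (assuming the tidyverse and rvest are loaded as above); it isn't part of the final pipeline, just the trial-and-error pass described above, and the object name is a throwaway one for illustration:

# pull every href on the main page and eyeball the results in the console
all_hrefs <- read_html("https://www.federalreserve.gov/newsevents/speeches.htm") %>%
  html_nodes("a") %>%
  html_attr("href")

head(all_hrefs, 20)

With those patterns identified, here's the actual extraction: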

# scrape the main page
base_page <- read_html("https://www.federalreserve.gov/newsevents/speeches.htm")

# extract links to those annual archives from the resulting html
year_links <- base_page %>%
  html_nodes("a") %>%
  html_attr("href") %>%
  # the pattern for those annual pages changes, so we can use this approach to get both types
  map(c("/newsevents/speech/[0-9]{4}-speeches.htm", "/newsevents/speech/[0-9]{4}speech.htm"), str_subset, string = .) %>%
  reduce(union)

# here's what that produces
> year_links
[1] "/newsevents/speech/2020-speeches.htm" "/newsevents/speech/2019-speeches.htm" "/newsevents/speech/2018-speeches.htm" "/newsevents/speech/2017-speeches.htm"
[5] "/newsevents/speech/2016-speeches.htm" "/newsevents/speech/2015-speeches.htm" "/newsevents/speech/2014-speeches.htm" "/newsevents/speech/2013-speeches.htm"
[9] "/newsevents/speech/2012-speeches.htm" "/newsevents/speech/2011-speeches.htm" "/newsevents/speech/2010speech.htm" "/newsevents/speech/2009speech.htm"
[13] "/newsevents/speech/2008speech.htm" "/newsevents/speech/2007speech.htm" "/newsevents/speech/2006speech.htm"

OK, now we'll use map to iterate that process over those annual pages, pulling out the links to the pages for the individual speeches.

speech_links <- map(year_links, function(x) {

  # the scraped links are incomplete, so we'll start by adding the missing bit
  full_url <- paste0("https://www.federalreserve.gov", x)

  # now we'll essentially rerun the process we ran on the main page, only now we can
  # focus on a single string pattern, which again I found by trial and error (i.e.,
  # scrape the page, look at the hrefs on it, see which ones look relevant, check
  # one out in my browser to confirm, then use str_subset() to get ones matching that pattern)
  speech_urls <- read_html(full_url) %>%
    html_nodes("a") %>%
    html_attr("href") %>%
    str_subset(., "/newsevents/speech/")

  # add the header now
  return(paste0("https://www.federalreserve.gov", speech_urls))

})

# unlist the results so we have one long vector of links to speeches instead of a list
# of vectors of links
speech_links <- unlist(speech_links)

# here's what the results of that process look like
> head(speech_links)
[1] "https://www.federalreserve.gov/newsevents/speech/quarles20200117a.htm" "https://www.federalreserve.gov/newsevents/speech/bowman20200116a.htm"
[3] "https://www.federalreserve.gov/newsevents/speech/clarida20200109a.htm" "https://www.federalreserve.gov/newsevents/speech/brainard20200108a.htm"
[5] "https://www.federalreserve.gov/newsevents/speech/brainard20191218a.htm" "https://www.federalreserve.gov/newsevents/speech/brainard20191126a.htm"

Now, finally, we'll iterate the scraping process over the pages for the individual speeches to make a tibble with the key elements of each speech: date, title, speaker, location, and full text. I found the node types for each desired element by opening one of the speech pages in Chrome, right-clicking (I'm on a Windows machine), and using "Inspect" to look at the html associated with the various bits of the speech.

speech_list <- map(speech_links, function(x) {

  Z <- read_html(x)

  # scrape the date and convert it to 'date' class while we're at it
  date <- Z %>% html_nodes("p.article__time") %>% html_text() %>% as.Date(., format = "%B %d, %Y")

  title <- Z %>% html_nodes("h3.title") %>% html_text()

  speaker <- Z %>% html_nodes("p.speaker") %>% html_text()

  location <- Z %>% html_nodes("p.location") %>% html_text()

  # this one's a little more involved because the text at that node had two elements,
  # of which we only wanted the second, and I went ahead and cleaned up the speech
  # text a bit here to make the resulting column easy to work with later
  text <- Z %>%
    html_nodes("div.col-xs-12.col-sm-8.col-md-8") %>%
    html_text() %>%
    .[2] %>%
    str_replace_all(., "\n", "") %>%
    str_trim(., side = "both")

  return(tibble(date, title, speaker, location, text))

})

# finally, bind the one-row elements of that list into a single tibble
speech_table <- bind_rows(speech_list)

Here's what that produces, covering 804 Fed speeches from 2006 to the present:

> str(speech_table)
Classes ‘tbl_df’, ‘tbl’ and 'data.frame': 804 obs. of 5 variables:
$ date : Date, format: "2020-01-17" "2020-01-16" "2020-01-09" "2020-01-08" ...
$ title : chr "Spontaneity and Order: Transparency, Accountability, and Fairness in Bank Supervision" "The Outlook for Housing" "U.S. Economic Outlook and Monetary Policy" "Strengthening the Community Reinvestment Act by Staying True to Its Core Purpose" ...
$ speaker : chr "Vice Chair for Supervision Randal K. Quarles" "Governor Michelle W. Bowman" "Vice Chair Richard H. Clarida" "Governor Lael Brainard" ...
$ location: chr "At the American Bar Association Banking Law Committee Meeting 2020, Washington, D.C." "At the 2020 Economic Forecast Breakfast, Home Builders Association of Greater Kansas City, Kansas City, Missouri" "At the C. Peter McColough Series on International Economics, Council on Foreign Relations, New York, New York" "At the Urban Institute, Washington, D.C." ...
$ text : chr "It's a great pleasure to be with you today at the ABA Banking Law Committee's annual meeting. I left the practi"| __truncated__ "Few sectors are as central to the success of our economy and the lives of American families as housing. If we i"| __truncated__ "Thank you for the opportunity to join you bright and early on this January 2020 Thursday morning. As some of yo"| __truncated__ "Good morning. I am pleased to be here at the Urban Institute to discuss how to strengthen the Community Reinves"| __truncated__ ...
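The original question asked about downloading these speeches, so if you want them on disk rather than just in an R object, you can write the table out, or write one text file per speech. A minimal sketch, assuming the tidyverse is still loaded and the column names shown above (date, speaker, text) are unchanged; the file and folder names are just placeholders:

# save the whole table, one row per speech
write_csv(speech_table, "fed_speeches.csv")

# or write each speech to its own .txt file, named by date and speaker
dir.create("speeches", showWarnings = FALSE)
walk2(speech_table$text,
      paste0("speeches/", speech_table$date, "_",
             str_replace_all(speech_table$speaker, "\\s+", "_"), ".txt"),
      write_lines)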

Rcrawler version

Now, you specifically asked how to do this with the Rcrawler package, not rvest, so here is a solution using the former.

We'll start by using Rcrawler's LinkExtractor function with a regular expression to grab the URLs of the pages that hold the links to the speeches for each year. Note that I only knew what to look for in that regex because I had already poked around in the html for the rvest solution.

library(Rcrawler)

year_links <- LinkExtractor("https://www.federalreserve.gov/newsevents/speeches.htm",
                            urlregexfilter = "https://www.federalreserve.gov/newsevents/speech/")
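LinkExtractor() returns a list, and the URLs that matched the filter sit in its InternalLinks element, which is what we iterate over next; a quick sanity check of that element might look like this:

# peek at the year-page URLs picked up by the regex filter
head(year_links$InternalLinks)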

Now we can use lapply to iterate LinkExtractor over the results of that process, scraping in bulk the links to the individual speeches for each year. Again, we'll use a regex to focus the crawl, and we only knew what pattern to use in it because we had already looked at the results of the previous step and checked some of those pages in a browser.

speech_links <- lapply(year_links$InternalLinks, function(i) {

  linkset <- LinkExtractor(i, urlregexfilter = "speech/[a-z]{1,}[0-9]{8}a.htm")

  # might as well limit the results to the vector of interest while we're here
  return(linkset$InternalLinks)

})

# that process returns a list of vectors, so let's collapse that list into one
# long vector of urls for pages with individual speeches
speech_links <- unlist(speech_links)

Finally, we can apply the ContentScraper function to the resulting vector of individual speech links to extract the data. Inspecting the html of one of those pages revealed the CSS patterns associated with the bits of interest, so we'll use CssPatterns to capture those bits and PatternsName to give them nice names. That call returns a list of lists of the data, so we'll finish by using do.call(rbind.data.frame, ...) to convert that list of lists into a single data frame, with stringsAsFactors = FALSE to avoid converting everything to factors.

DATA <- ContentScraper(Url = speech_links,
                       CssPatterns = c(".article__time", ".location", ".speaker", ".title", ".col-xs-12.col-sm-8.col-md-8"),
                       PatternsName = c("date", "location", "speaker", "title", "text"),
                       # we need this next line to get both elements for the .col-xs-12.col-sm-8.col-md-8
                       # bit, which is the text of the speech itself. the first element
                       # is just a repeat of the header info
                       ManyPerPattern = TRUE)

# because the text element is a vector of two strings, we'll want to flatten the
# results into a one-row data frame to make the final concatenation easier. this
# gives us a row with two cols for text, text1 and text2, where text2 is the part
# you really want
DATA2 <- lapply(DATA, function(i) { data.frame(as.list(unlist(i)), stringsAsFactors = FALSE) })

# finally, collapse those one-row data frames into one big data frame, one row per speech
output <- do.call(rbind.data.frame, c(DATA2, stringsAsFactors = FALSE))

Three things to note here: 1) this table has only 779 rows, whereas the one we got with rvest had 804, and I'm not sure why there's a discrepancy; 2) the data in this table is still raw and could use some cleaning (e.g., converting the date strings to class 'Date', tidying up the strings in the text columns), which you could do with sapply; and 3) you'll probably want to drop the redundant text1 column, which you can do in base R with output$text1 <- NULL.
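A minimal cleanup sketch in base R, assuming the column names produced by the PatternsName call above (date, text1, text2, and so on) and the date format seen in the rvest version:

# drop the redundant copy of the header info
output$text1 <- NULL

# convert date strings like "January 17, 2020" to class 'Date'
output$date <- as.Date(output$date, format = "%B %d, %Y")

# strip newlines and leading/trailing whitespace from the speech text
output$text2 <- trimws(gsub("\n", " ", output$text2))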

Regarding "r - Web crawling in R with Rcrawler", a similar question can be found on Stack Overflow: https://stackoverflow.com/questions/59875844/
