
html - Extracting multiple tables from a web page containing hyperlinks with R


This is my first attempt at web scraping. I am trying to extract the list of tables (column name: Oil and Gas tables) from this web page: Oil and Gas Data. Using a state's link, e.g. Alabama Data, the data for a single state can be extracted easily. However, I want a program that can extract the data for all states, keeping them per-year as shown in the HTML. I have loaded the packages RCurl, XML, rlist, and purrr based on similar posts I came across.

How can I use R (Rcurl/XML packages ?!) to scrape this webpage? This solution looks complete; however, the web page in that question may have changed since it was posted (I tried to mimic it, but could not).

R: XPath expression returns links outside of selected element. How can I use XPath to extract the tables I need, given that all of their links contain "stateinitials_table.html", e.g. "al_table.html" for Alabama (view source)? A minimal sketch of that idea is shown below.
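
For illustration, a minimal sketch of that idea using the packages loaded above (the main-page URL is the Oil and Gas Data page; the XPath is an assumption based on the "_table.html" pattern in the links):

theurl <- getURL("https://www.eia.gov/naturalgas/archive/petrosystem/petrosysog.html",
                 .opts = list(ssl.verifypeer = FALSE))
doc <- htmlParse(theurl, asText = TRUE)
# keep only the anchors whose href follows the "stateinitials_table.html" pattern
state_links <- xpathSApply(doc, "//a[contains(@href, '_table.html')]/@href")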

library(RCurl)
library(XML)
library(rlist)

theurl <- getURL("https://www.eia.gov/naturalgas/archive/petrosystem/al_table.html", .opts = list(ssl.verifypeer = FALSE))
tables <- readHTMLTable(theurl)
tables <- list.clean(tables, fun = is.null, recursive = FALSE)
berilium <- tables[seq(3, length(tables), 2)]  # keep every second table, starting with the 3rd

This is the output for "al_table.html": a list of 15 data frames, one for each of the 15 years.
26 rows, 17 columns per data frame; 15 data frames, 1 for each year

So what I need is:

a function (XPath vs. readHTMLTable, preferably XPath) that extracts all the tables from the main web link. I need them "tagged" by state and year as shown on the web page. (I am not concerned with cleaning up useless columns and rows for now.)

Best Answer

This is more of a blog post or tutorial than an SO answer, but I can appreciate the desire to learn, and since I am also writing a book on this topic, this seemed like a good example.

library(rvest)
library(tidyverse)

We'll start with the top-level page:
pg <- read_html("https://www.eia.gov/naturalgas/archive/petrosystem/petrosysog.html")

Now we'll use an XPath that gets us only the table rows holding state data. Comparing the XPath expression with the markup in the HTML should make it clear: find all <tr>s without a colspan attribute, and of those select only the <tr>s that have the right class and a link to a state:
states <- html_nodes(pg, xpath=".//tr[td[not(@colspan) and 
contains(@class, 'links_normal') and a[@name]]]")

data_frame(
state = html_text(html_nodes(states, xpath=".//td[1]")),
link = html_attr(html_nodes(states, xpath=".//td[2]/a"), "href")
) -> state_tab

That lands in a data frame to keep things tidy and handy.
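
A quick sanity check can confirm the result (illustrative; the exact rows depend on the live page, but every link should follow the "_table.html" pattern from the question):

head(state_tab, 1)  # e.g. state = "Alabama", link = "al_table.html"
stopifnot(all(grepl("_table\\.html$", state_tab$link)))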

You'll need to put the next bit below the function that follows it, but I need to explain the iteration before showing that function.

We need to iterate over each link. On each iteration we:
  • pause, because your needs are not more important than the load on EIA's servers
  • find all the "branch" <div>s, since they hold the two pieces of information we need (the state + year, and the data table for that state + year)
  • wrap it all up in a nice data frame

Rather than clutter up the anonymous function, we'll put that functionality in another function (which, again, needs to be defined before this iterator will work):

pb <- progress_estimated(nrow(state_tab))
map_df(state_tab$link, ~{

  pb$tick()$print()

  pg <- read_html(sprintf("https://www.eia.gov/naturalgas/archive/petrosystem/%s", .x))

  Sys.sleep(5) # scrape responsibly

  html_nodes(pg, xpath=".//div[@class='branch']") %>%
    map_df(extract_table)

}) -> og_df

This is where the hard work happens. We need to find all the State + Year labels on the page (each one is in a <table>), then we need to find the tables holding the data. I take the liberty of removing the explanatory blurb at the bottom of each, and I also turn each one into a tibble (but that's just my preference):
extract_table <- function(pg) {

  # the State + Year label sits in a sibling row whose cell has class "SystemTitle"
  t1 <- html_nodes(pg, xpath=".//../tr[td[contains(@class, 'SystemTitle')]][1]")
  # the data table itself has a summary attribute containing "Report"
  t2 <- html_nodes(pg, xpath=".//table[contains(@summary, 'Report')]")

  state_year <- (html_text(t1, trim=TRUE) %>% strsplit(" "))[[1]]

  # drop the explanatory blurb cell at the bottom of the table
  xml_find_first(t2, "td[@colspan]") %>% xml_remove()

  html_table(t2, header=FALSE)[[1]] %>%
    mutate(state=state_year[1], year=state_year[2]) %>%
    tbl_df()

}
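
Before running the full loop, it can be worth spot-checking the helper on a single "branch" <div> from one state page (the Alabama URL is the one from the question; the rest mirrors the iterator below):

test_pg <- read_html("https://www.eia.gov/naturalgas/archive/petrosystem/al_table.html")
branch  <- html_nodes(test_pg, xpath=".//div[@class='branch']")[[1]]
extract_table(branch)  # should return one year's table, tagged with state and year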

Re-pasting the code posted above, just to make sure you understand that it has to come after the function:
pb <- progress_estimated(nrow(state_tab))
map_df(state_tab$link, ~{

  pb$tick()$print()

  pg <- read_html(sprintf("https://www.eia.gov/naturalgas/archive/petrosystem/%s", .x))

  Sys.sleep(5) # scrape responsibly

  html_nodes(pg, xpath=".//div[@class='branch']") %>%
    map_df(extract_table)

}) -> og_df

And it works (you said you'd handle the final cleanup on your own):
glimpse(og_df)
## Observations: 14,028
## Variables: 19
## $ X1 <chr> "", "Prod.RateBracket(BOE/Day)", "0 - 1", "1 - 2", "2 - 4", "4 - 6", "...
## $ X2 <chr> "", "||||", "|", "|", "|", "|", "|", "|", "|", "|", "|", "|", "|", "|"...
## $ X3 <chr> "Oil Wells", "# ofOilWells", "26", "19", "61", "61", "47", "36", "250"...
## $ X4 <chr> "Oil Wells", "% ofOilWells", "5.2", "3.8", "12.1", "12.1", "9.3", "7.1...
## $ X5 <chr> "Oil Wells", "AnnualOilProd.(Mbbl)", "4.1", "7.8", "61.6", "104.9", "1...
## $ X6 <chr> "Oil Wells", "% ofOilProd.", "0.1", "0.2", "1.2", "2.1", "2.2", "2.3",...
## $ X7 <chr> "Oil Wells", "OilRateper Well(bbl/Day)", "0.5", "1.4", "3.0", "4.9", "...
## $ X8 <chr> "Oil Wells", "AnnualGasProd.(MMcf)", "1.5", "3.5", "16.5", "19.9", "9....
## $ X9 <chr> "Oil Wells", "GasRateper Well(Mcf/Day)", "0.2", "0.6", "0.8", "0.9", "...
## $ X10 <chr> "", "||||", "|", "|", "|", "|", "|", "|", "|", "|", "|", "|", "|", "|"...
## $ X11 <chr> "Gas Wells", "# ofGasWells", "365", "331", "988", "948", "867", "674",...
## $ X12 <chr> "Gas Wells", "% ofGasWells", "5.9", "5.4", "16.0", "15.4", "14.1", "10...
## $ X13 <chr> "Gas Wells", "AnnualGasProd.(MMcf)", "257.6", "1,044.3", "6,360.6", "1...
## $ X14 <chr> "Gas Wells", "% ofGasProd.", "0.1", "0.4", "2.6", "4.2", "5.3", "5.4",...
## $ X15 <chr> "Gas Wells", "GasRateper Well(Mcf/Day)", "2.2", "9.2", "18.1", "30.0",...
## $ X16 <chr> "Gas Wells", "AnnualOilProd.(Mbbl)", "0.2", "0.6", "1.6", "2.0", "2.4"...
## $ X17 <chr> "Gas Wells", "OilRateper Well(bbl/Day)", "0.0", "0.0", "0.0", "0.0", "...
## $ state <chr> "Alabama", "Alabama", "Alabama", "Alabama", "Alabama", "Alabama", "Ala...
## $ year <chr> "2009", "2009", "2009", "2009", "2009", "2009", "2009", "2009", "2009"...
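
If you want a starting point for that cleanup later (a sketch only; the column positions are read off the glimpse above), you could drop the "|" separator columns and the repeated per-block header rows:

og_df %>%
  select(-X2, -X10) %>%                                # "|" separator columns
  filter(!X1 %in% c("", "Prod.RateBracket(BOE/Day)"))  # repeated header rows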

The original question, html - Extracting multiple tables from a web page containing hyperlinks with R, can be found on Stack Overflow: https://stackoverflow.com/questions/47836889/
