
r - Trying to loop through HTML tables and create a data frame


I am trying to create a dynamic loop that runs through multiple URLs, scrapes the table data from each one, and concatenates everything into a single data frame. I have tried a few ideas, shown below, but nothing has worked so far. This sort of thing is not really in my wheelhouse, but I am trying to learn how it works. I would be grateful if someone could help me get this working.

Thanks.

Static URL: http://www.nfl.com/draft/2015/tracker?icampaign=draft-sub_nav_bar-drafteventpage-tracker#dt-tabs:dt-by-position/dt-by-position-input:qb

library(rvest)
library(stringr)

#create a master dataframe to store all of the results
complete<-data.frame()

yearsVector <- c("2010", "2011", "2012", "2013", "2014", "2015")
positionVector <- c("qb", "rb", "wr", "te", "ol", "dl", "lb", "cb", "s")
for (i in 1:length(yearsVector))
{
for (j in 1:length(positionVector))
{
# create a url template
URL.base<-"http://www.nfl.com/draft/"
URL.intermediate <- "/tracker?icampaign=draft-sub_nav_bar-drafteventpage-tracker#dt-tabs:dt-by-position/dt-by-position-input:"
#create the dataframe with the dynamic values
URL <- paste0(URL.base, yearsVector, URL.intermediate, positionVector)
#print(URL)

#read the page - store the page to make debugging easier
page<- read_html(URL)

#This needs work since the page is dynamically generated.
DF <- html_nodes(page, xpath = ".//table") %>% html_table(fill=TRUE)
#About 530 names returned; may need to search for and extract the requested info.



# to find the players last names
lastnames<-str_locate_all(page, "lastName")[[1]]
names<- str_sub(page, lastnames[,2]+4, lastnames[,2]+20)
names<-str_extract(names, "[A-Z][a-zA-Z]*")

length(names[-c(1:16)])
#Still need to delete the first 16 names (don't know if this is consistent across all years)

#to find the players positions
positions<-str_locate_all(page, "pos")[[1]]
ppositions<- str_sub(page, positions[,2]+4, positions[,2]+10)
pos<-str_extract(ppositions, "[A-Z]*")

pos<- pos[pos !=""]
#Still need to delete the first 16 entries (don't know if this is consistent across all years)


#store the temp values into the master dataframe
complete<-rbind(complete, DF)
}
}

I edited my OP to incorporate your code, Dave. I think I am almost there, but something is not quite right. I am getting this error.

Error in eval(substitute(expr), envir, enclos) : expecting a single value

I know the URL is correct!

http://postimg.org/image/ccmvmnijr/

I think the problem is with this line:

page <- read_html(URL)

Or maybe this line:

DF <- html_nodes(page, xpath = ".//table") %>% html_table(fill = TRUE)

Can you help me get across the finish line? Thanks!
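A minimal sketch of the likely cause, assuming the loop posted above: paste0() is being fed the whole yearsVector and positionVector instead of the indexed elements, so URL ends up as a character vector with several elements, and read_html() expects a single URL string.

URL <- paste0(URL.base, yearsVector, URL.intermediate, positionVector)
length(URL) #more than one URL, so read_html(URL) stops with an error
URL <- paste0(URL.base, yearsVector[i], URL.intermediate, positionVector[j])
length(URL) #a single URL, which read_html() can handle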

Best Answer

Try this answer! I fixed the creation of the URL and set up a master data frame to store the requested information. The page is dynamically generated, so the standard table-scraping tools from rvest will not work here. All of the player information (about 16 fields per player), the college information, and the draft-pick information is stored in the page source, so it is a matter of searching for it and extracting it.

library(rvest)
library(stringr)
library(dplyr)

#create a master dataframe to store all of the results
complete<-data.frame()

yearsVector <- c("2011", "2012", "2013", "2014", "2015")
#all position information is stored on each page, no need to create separate queries
#positionVector <- c("qb", "rb", "wr", "te", "ol", "dl", "lb", "cb", "s")
positionVector <- c("qb")
for (i in 1:length(yearsVector))
{
for (j in 1:length(positionVector))
{
# create a url template
URL.base<-"http://www.nfl.com/draft/"
URL.intermediate <- "/tracker?icampaign=draft-sub_nav_bar-drafteventpage-tracker#dt-tabs:dt-by-position/dt-by-position-input:"
#create the dataframe with the dynamic values
URL <- paste0(URL.base, yearsVector[i], URL.intermediate, positionVector[j])
print(yearsVector[i])
print(URL)

#read the page - store the page to make debugging easier
page<- read_html(URL)

#This needs work since the page is dynamically generated.
#DF <- html_nodes(page, xpath = ".//table") %>% html_table(fill=TRUE)
#About 539 names returned; search for and extract the requested info instead.
#find records for each player
playersloc<-str_locate_all(page, "\\{\"personId.*?\\}")[[1]]
players<-str_sub(page, playersloc[,1]+1, playersloc[,2]-1)
#fix the cases where the players are named Jr.
players<-gsub(", ", "_", players )

#split and reshape the data in a data frame
play2<-strsplit(gsub("\"", "", players), ',')
data<-sapply(strsplit(unlist(play2), ":"), FUN=function(x){x[2]})
df<-data.frame(matrix(data, ncol=16, byrow=TRUE))
#name the column names
names(df)<-sapply(strsplit(unlist(play2[1]), ":"), FUN=function(x){x[1]})

#sort out the pick information
picks<-str_locate_all(page, "\\{\"id.*?player.*?\\}")[[1]]
picks<-str_sub(page, picks[,1]+1, picks[,2]-1)
#fix the cases where there are commas in the notes section.
picks<-gsub(", ", "_", picks )
picks<-strsplit(gsub("\"", "", picks), ',')
data<-sapply(strsplit(unlist(picks), ":"), FUN=function(x){x[2]})
picksdf<-data.frame(matrix(data, ncol=6, byrow=TRUE))
names(picksdf)<-sapply(strsplit(unlist(picks[1]), ":"), FUN=function(x){x[1]})

#sort out the college information
schools<-str_locate_all(page, "\\{\"id.*?name.*?\\}")[[1]]
schools<-str_sub(page, schools[,1]+1, schools[,2]-1)
schools<-strsplit(gsub("\"", "", schools), ',')
data<-sapply(strsplit(unlist(schools), ":"), FUN=function(x){x[2]})
schoolsdf<-data.frame(matrix(data, ncol=3, byrow=TRUE))
names(schoolsdf)<-sapply(strsplit(unlist(schools[1]), ":"), FUN=function(x){x[1]})

#merge the 3 tables together
df<-left_join(df, schoolsdf, by=c("college" = "id"))
df<-left_join(df, picksdf, by=c("pick" = "id"))

#store the temp values into the master dataframe
complete<-rbind(complete, df)
}
}
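Once the loop finishes, a quick sanity check on the combined data frame might look like the lines below; the column names come straight from the JSON keys embedded in each page, so inspect them before relying on any particular field.

dim(complete) #rows = players scraped across all years
names(complete) #column names come from the embedded JSON keys
head(complete) #peek at the first few records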

Finding the right regular expressions to locate the requested information was quite tricky. It looks like the 2010 data stores the college information in a different format, so I ignored that year. Also, please make sure you are not violating this site's terms of service. Good luck.
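If you want to test whether another year (2010, say) exposes the same embedded records before adding it to yearsVector, a rough check along these lines may help; checkYear is just an illustrative helper built on the same patterns the answer searches for, not part of the original code.

library(rvest)
library(stringr)
checkYear <- function(year) {
  url <- paste0("http://www.nfl.com/draft/", year,
                "/tracker?icampaign=draft-sub_nav_bar-drafteventpage-tracker")
  page <- read_html(url)
  #count how many player and school records the page embeds
  c(players = nrow(str_locate_all(page, "\\{\"personId.*?\\}")[[1]]),
    schools = nrow(str_locate_all(page, "\\{\"id.*?name.*?\\}")[[1]]))
}
checkYear(2015) #baseline counts for a year that is known to parse
checkYear(2010) #very different counts would explain why 2010 was skipped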

About "r - Trying to loop through HTML tables and create a data frame": the original question is on Stack Overflow: https://stackoverflow.com/questions/40264390/
