
python - How to handle rowspan when scraping a wikitable with Python?

Reposted · Author: 行者123 · Updated: 2023-12-05 06:43:00

I am trying to scrape the data stored in the table on the Wikipedia page https://en.wikipedia.org/wiki/Minister_of_Agriculture_(India). However, I cannot capture the complete data stored in cells that use rowspan. Here is what I have written so far:

from bs4 import BeautifulSoup
from urllib.request import urlopen

wiki = urlopen("https://en.wikipedia.org/wiki/Minister_of_Agriculture_(India)")

soup = BeautifulSoup(wiki, "html.parser")

table = soup.find("table", { "class" : "wikitable" })
for row in table.findAll("tr"):
    cells = row.findAll("td")

    if cells:
        name = cells[0].find(text=True)
        pic = cells[1].find("img")
        strt = cells[2].find(text=True)
        end = cells[3].find(text=True)
        pri = cells[6].find(text=True)

        z = name + '\n' + str(pic) + '\n' + strt + '\n' + end + '\n' + pri
        print(z)
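To see why the per-row indexing above loses data: a cell with `rowspan` is emitted only in the first `<tr>` it covers, so in the rows below it the remaining cells shift left and `cells[2]`, `cells[3]`, `cells[6]` point at the wrong columns. A minimal sketch with a made-up two-row table:

```python
from bs4 import BeautifulSoup

# hypothetical table: the first cell spans both rows
html = """
<table>
  <tr><td rowspan="2">A</td><td>B1</td><td>C1</td></tr>
  <tr><td>B2</td><td>C2</td></tr>
</table>
"""
soup = BeautifulSoup(html, "html.parser")
rows = [r.findAll("td") for r in soup.findAll("tr")]

# the second row only has 2 <td> cells, not 3
print([len(r) for r in rows])  # [3, 2]

# so index 0 in the second row is "B2", not the spanned "A"
print(rows[1][0].get_text())   # B2
```

This is exactly the shift that makes fixed indices return data from the wrong column.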

Best answer

Here is how I solved this: I convert the table with rowspan/colspan into a simple table. I wasted many days on this problem without finding a simple, clean solution. In many Stack Overflow answers, developers only scrape the text, but in my case I also needed the URL links. So I wrote this code, and it works for me:

# written with BeautifulSoup on Python 3
# fetches one wikitable from Wikipedia as HTML, preserving the links
from bs4 import BeautifulSoup
import requests
import codecs
import os

url = "https://en.wikipedia.org/wiki/Ministry_of_Agriculture_%26_Farmers_Welfare"

fullTable = '<table class="wikitable">'

rPage = requests.get(url)
soup = BeautifulSoup(rPage.content, "lxml")

table = soup.find("table", {"class": "wikitable"})

rows = table.findAll("tr")
row_lengths = [len(r.findAll(['th', 'td'])) for r in rows]
ncols = max(row_lengths)
nrows = len(rows)

# convert each row into a list of its th/td cells
for i in range(len(rows)):
    rows[i] = rows[i].findAll(['th', 'td'])


# header row: expand colspan by duplicating the cell
for i in range(len(rows[0])):
    col = rows[0][i]
    if col.get('colspan'):
        cSpanLen = int(col.get('colspan'))
        del col['colspan']
        for k in range(1, cSpanLen):
            rows[0].insert(i, col)


# whole table: expand rowspan by inserting the cell into the rows below it
for i in range(len(rows)):
    row = rows[i]
    for j in range(len(row)):
        col = row[j]
        if col.get('style'):
            del col['style']
        if col.get('rowspan'):
            rSpanLen = int(col.get('rowspan'))
            del col['rowspan']
            for k in range(1, rSpanLen):
                rows[i + k].insert(j, col)


# rebuild the table HTML from the expanded rows
for i in range(len(rows)):
    fullTable += '<tr>'
    for col in rows[i]:
        fullTable += str(col)
    fullTable += '</tr>'

fullTable += '</table>'

# rewrite relative wiki links as absolute URLs
fullTable = fullTable.replace('/wiki/', 'https://en.wikipedia.org/wiki/')
fullTable = fullTable.replace('\n', '')
fullTable = fullTable.replace('<br/>', '')

# save the file, named after the last path segment of the url
page = os.path.split(url)[1]
fname = 'output_{}.html'.format(page)
with codecs.open(fname, 'w', 'utf-8') as singleTable:
    singleTable.write(fullTable)



# now we can scrape this table: rowspan/colspan have been expanded into a simple grid
soupTable = BeautifulSoup(fullTable, "lxml")
urlLinks = soupTable.findAll('a')
print(urlLinks)

# and so on .............
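For completeness: when only the cell text is needed (not the link URLs), `pandas.read_html` already expands rowspan and colspan into a rectangular grid, so the normalization above can be skipped. A minimal sketch with a made-up table, assuming pandas and an HTML parser such as lxml are installed:

```python
import io
import pandas as pd

# tiny made-up table: "X" spans two rows
html = """
<table class="wikitable">
  <tr><th>Name</th><th>Party</th></tr>
  <tr><td>Alice</td><td rowspan="2">X</td></tr>
  <tr><td>Bob</td></tr>
</table>
"""

# read_html returns one DataFrame per <table>;
# the rowspan cell is repeated into every row it covers
df = pd.read_html(io.StringIO(html))[0]
print(df)
```

Both rows come back with Party "X" — the same effect as the cell-duplication loops above, but the link URLs inside the cells are lost.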

Regarding "python - How to handle rowspan when scraping a wikitable with Python?", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/35098857/
