
python - BeautifulSoup/Python - converting an HTML table to CSV and getting the href of one column


I'm scraping an HTML table with this code:

import csv
import urllib2
from bs4 import BeautifulSoup

with open('listing.csv', 'wb') as f:
    writer = csv.writer(f)
    for i in range(39):
        url = "file:///C:/projects/HTML/Export.htm".format(i)
        u = urllib2.urlopen(url)
        try:
            html = u.read()
        finally:
            u.close()
        soup = BeautifulSoup(html)
        for tr in soup.find_all('tr')[2:]:
            tds = tr.find_all('td')
            row = [elem.text.encode('utf-8') for elem in tds]
            writer.writerow(row)

Everything works fine, but I'm trying to get the href URL from column 9. Right now it gives me the text value instead of the URL.

Also, my HTML contains two tables. Is there any way to skip the first table and build the CSV file from the second one only?

Any help is very welcome, as I'm new to Python and need this for a project where I'm automating a daily conversion.

Thanks a lot!

Best answer

You should access the href attribute of the a tag inside the ninth td tag (index 8):

import csv
import urllib2
from bs4 import BeautifulSoup

records = []

def my_parse(html):
    soup = BeautifulSoup(html)
    # The page holds two tables; [1] selects the second one.
    table2 = soup.find_all('table')[1]
    for tr in table2.find_all('tr')[2:]:
        tds = tr.find_all('td')
        # Column 9 is tds[8]; take the href of its a tag.
        url = tds[8].a.get('href')
        records.append([elem.text.encode('utf-8') for elem in tds])
        # perhaps you want to update one of the elements of this last
        # record with the found url now?

for index in range(39):
    url = get_url(index)  # where is the formatting in your example happening?
    response = urllib2.urlopen(url)
    try:
        html = response.read()
    finally:
        response.close()
    my_parse(html)

# It's more efficient to write only once
with open('listing.csv', 'wb') as f:
    writer = csv.writer(f)
    writer.writerows(records)

I took the liberty of defining an index-based get_url function, because your example re-reads the same file on every iteration, which I suspect is not what you actually want; I'll leave the implementation to you. I also tightened the cleanup with try/finally so the response is always closed, even when the read fails.
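For what it's worth, here is a minimal sketch of what get_url might look like, assuming the daily exports are numbered Export0.htm, Export1.htm, and so on (that naming pattern is purely an assumption; adjust it to your actual file names):

def get_url(index):
    # Hypothetical naming scheme, one exported file per index;
    # change the pattern to match how your files are really named.
    return "file:///C:/projects/HTML/Export{}.htm".format(index)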

I also showed how to select the second of the two tables on the page.
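One caveat, as an aside rather than part of the answer above: if any row of the second table has no a tag in its ninth cell, tds[8].a is None and calling .get('href') on it raises AttributeError. A defensive variant could look like this:

link = tds[8].a  # may be None when the cell contains only plain text
url = link.get('href') if link is not None else ''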

On the topic of python - BeautifulSoup/Python - converting an HTML table to CSV and getting the href of one column, there is a matching question on Stack Overflow: https://stackoverflow.com/questions/27954764/
