
python - Scraping 2 differently formatted tables from a webpage - Beautiful Soup

Reposted · Author: 行者123 · Updated: 2023-12-01 09:15:41

So my goal is to scrape two tables (in different formats) from a website, FSC Public Search, after iterating over a list of license codes. My problem is that because the two tables I want (the product data and the certificate data) come in two different formats, I have to scrape them separately. For example, the product data is in a normal "tr" format on the webpage, while the certificate data is in a "div" format.
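The two layouts can be sketched against simplified inline markup (a stand-in for the live page, not its exact HTML): label/div sibling pairs for the certificate fields, and an ordinary tr-based table for the products.

```python
from bs4 import BeautifulSoup

# Simplified stand-in for the page's two layouts (not the real markup).
html = """
<div class="certificatecl">
  <label>Status</label><div>Valid</div>
</div>
<table>
  <tr><th>Product Type</th></tr>
  <tr><td>P2 Paper</td></tr>
</table>
"""
soup = BeautifulSoup(html, 'html.parser')

# div-style field: locate the label, then read its sibling <div>
status = soup.find('label', string='Status').find_next_sibling('div').text

# tr-style table: skip the header row, collect the cell texts
rows = [[td.get_text(strip=True) for td in tr.find_all('td')]
        for tr in soup.find('table').find_all('tr')[1:]]
```
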

Based on a previous question I asked, I have nearly solved my problem, and I can retrieve the certificate data (the "div" form) perfectly well across a range of license codes. However, I can't output the product data table the way I want. Instead of showing the product data for 5 license codes, it shows 5 copies of the first license code's data. I tried putting this scrape inside the defined function get_data_by_code, but I still can't get it in the format I want, which is simply a table in a CSV file.

Basically, I'm not sure where in my function/script to include this scrape, so any input would be greatly appreciated. Thanks.

import csv

import pandas as pd
import requests
from bs4 import BeautifulSoup

df3 = pd.DataFrame()
df = pd.read_csv("MS_License_Codes.csv")
codes = df["License Code"]

data = [
    ('code', code),
    ('submit', 'Search'),
]
response = requests.post('https://info.fsc.org/certificate.php', data=data)
soup = BeautifulSoup(response.content, 'lxml')



def get_data_by_code(code):
    data = [
        ('code', code),
        ('submit', 'Search'),
    ]

    response = requests.post('https://info.fsc.org/certificate.php', data=data)
    soup = BeautifulSoup(response.content, 'lxml')

    # scraping the certificate data
    status = soup.find_all("label", string="Status")[0].find_next_sibling('div').text
    first_issue_date = soup.find_all("label", string="First Issue Date")[0].find_next_sibling('div').text
    last_issue_date = soup.find_all("label", string="Last Issue Date")[0].find_next_sibling('div').text
    expiry_date = soup.find_all("label", string="Expiry Date")[0].find_next_sibling('div').text
    standard = soup.find_all("label", string="Standard")[0].find_next_sibling('div').text

    return [code, status, first_issue_date, last_issue_date, expiry_date, standard]

# Just insert here output filename and codes to parse...
OUTPUT_FILE_NAME = 'Certificate_Data.csv'

df3 = pd.DataFrame()

with open(OUTPUT_FILE_NAME, 'w') as f:
    writer = csv.writer(f)
    for code in codes:
        print('Getting code# {}'.format(code))
        writer.writerow(get_data_by_code(code))

# attempting to scrape the product data
table = soup.find_all('table')[0]
df1, = pd.read_html(str(table))
df3 = df3.append(df1)

df3.to_csv('Product_Data.csv', index=False, encoding='utf-8')

EDIT

So with the code below I get 5 copies of the product data for the last license code.. slightly closer, but I still don't understand why this happens.

df3 = pd.DataFrame()
for code in codes:
    print('Getting code# {}'.format(code))
    response = requests.post('https://info.fsc.org/certificate.php', data=data)
    soup = BeautifulSoup(response.content, 'lxml')

    table = soup.find_all('table')[0]
    df1, = pd.read_html(str(table))
    df3 = df3.append(df1)

df3.to_csv('Product_Data.csv', index=False, encoding='utf-8')
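Note that in this loop the variable `code` is never used in the request: `data` is still the payload built once before the loop, so every POST asks for the same license code. A minimal offline sketch (no request actually sent, hypothetical codes) of rebuilding the payload per iteration:

```python
# Hypothetical license codes for illustration; the real list comes from the CSV.
codes = ['FSC-C001777', 'FSC-C124838']

payloads = []
for code in codes:
    # Rebuild the form payload on every iteration so each POST would carry
    # the current license code instead of a stale one.
    data = [('code', code), ('submit', 'Search')]
    payloads.append(data)
    # response = requests.post('https://info.fsc.org/certificate.php', data=data)
```
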

EDIT 2

The sample codes I have been using:

codes = ['FSC-C001777', 'FSC-C124838' ,'FSC-C068163','FSC-C101537','FSC-C005776']

FORMAT EDIT

This is the correct table format, but as you can see, it is the information from the first license code repeated 5 times rather than unique data.

Product Data

This is the format and information I want; everything works fine here: Certificate Data

Best Answer

For the code you presented, this simplified approach should be sufficient. It just extracts the necessary information directly with BeautifulSoup, without needing Pandas to try to extract it:

from bs4 import BeautifulSoup
import requests
import csv

fieldnames_cert = ['Code', 'Status', 'First Issue Date', 'Last Issue Date', 'Expiry Date', 'Standard']
fieldnames_prod = ['Code', 'Product Type', 'Trade Name', 'Species', 'Primary Activity', 'Secondary Activity', 'Main Output Category']

codes = ['FSC-C001777', 'FSC-C124838', 'FSC-C068163', 'FSC-C101537', 'FSC-C005776']

with open('Certificate_Data.csv', 'wb') as f_output_cert, \
     open('Product_Data.csv', 'wb') as f_output_prod:

    csv_output_cert = csv.writer(f_output_cert)
    csv_output_cert.writerow(fieldnames_cert)

    csv_output_prod = csv.writer(f_output_prod)
    csv_output_prod.writerow(fieldnames_prod)

    for code in codes:
        print('Getting code# {}'.format(code))
        response = requests.post('https://info.fsc.org/certificate.php', data={'code': code, 'submit': 'Search'})
        soup = BeautifulSoup(response.content, 'lxml')

        # Extract the certificate data
        div_cert = soup.find('div', class_='certificatecl')
        csv_output_cert.writerow([code] + [div.text for div in div_cert.find_all('div')])

        # Extract the product data
        table = soup.find('h2', id='products').find_next_sibling('table')

        for tr in table.find_all('tr')[1:]:
            row = [td.get_text(strip=True).encode('utf-8') for td in tr.find_all('td')]
            csv_output_prod.writerow([code] + row)
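The answer above is written for Python 2 (binary-mode file handles plus per-cell .encode('utf-8')). A minimal sketch of the equivalent file handling under Python 3, where csv files are opened in text mode with newline='' and an explicit encoding, assuming the same header fields:

```python
import csv

fieldnames_cert = ['Code', 'Status', 'First Issue Date',
                   'Last Issue Date', 'Expiry Date', 'Standard']

# Python 3: text mode with newline='' (as the csv module expects) and an
# explicit encoding; cells are written as str, so no .encode('utf-8') needed.
with open('Certificate_Data.csv', 'w', newline='', encoding='utf-8') as f_output_cert:
    csv_output_cert = csv.writer(f_output_cert)
    csv_output_cert.writerow(fieldnames_cert)
```
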

This produces a Certificate_Data.csv containing:

Code,Status,First Issue Date,Last Issue Date,Expiry Date,Standard
FSC-C001777,Valid,2009-04-01,2018-02-16,2019-04-01,FSC-STD-40-004 V3-0
FSC-C124838,Valid,2015-03-23,2015-03-23,2020-03-22,FSC-STD-40-004 V3-0
FSC-C068163,Valid,2010-03-01,2017-08-23,2022-08-22,FSC-STD-40-003 V2-1;FSC-STD-40-004 V3-0
FSC-C101537,Valid,2010-10-01,2013-11-28,2018-11-27,FSC-STD-40-003 V2-1;FSC-STD-40-004 V3-0
FSC-C005776,Valid,2007-07-17,2017-07-17,2022-07-16,FSC-STD-40-004 V3-0

and a Product_Data.csv containing:

Code,Product Type,Trade Name,Species,Primary Activity,Secondary Activity,Main Output Category
FSC-C001777,W12 Indoor furnitureW12.4 Beds,,,Secondary Processor,Secondary Processor,FSC Mix
FSC-C124838,"W18 Other manufactured wood productsW18.4 Tools, tool bodies and handles",, Abies spp; Betula spp.; Fagus sylvatica L.; Hevea brasiliensis; Paulownia tomentosa (Thunb. ex Murr) Steud; Picea spp.; Populus spp.; Quercus spp; Schima wallichii (DC.) Korth.; Swietenia macrophylla; Tilia spp.; Ulmus spp.,brokers/traders with physical posession,,FSC Mix;FSC 100%;FSC Recycled
FSC-C068163,P2 Paper,,,brokers/traders with physical posession,Distributor/Wholesaler,FSC Mix;FSC 100%;FSC Recycled
FSC-C068163,P3 Paperboard,,,brokers/traders with physical posession,Distributor/Wholesaler,FSC Mix;FSC 100%;FSC Recycled
FSC-C101537,P8 Printed materials,,,Printing and related service,Secondary Processor,FSC Mix;FSC 100%;FSC Recycled
FSC-C101537,P7 Stationery of paper,,,Printing and related service,Secondary Processor,FSC Mix;FSC 100%;FSC Recycled
FSC-C005776,W12 Indoor furnitureW12.10 Cupboards and chests,"Outros produtos, (baú, quadro espelho, etc.)", Eucalyptus spp; Pinus spp.,Secondary Processor,,FSC Mix
FSC-C005776,W12 Indoor furnitureW12.7 Office furniture,"Produtos para escritório (escrivaninha, mesa, gaveteiros, etc.)", Eucalyptus spp; Pinus elliottii,Secondary Processor,,FSC Mix
FSC-C005776,W12 Indoor furnitureW12.12 Parts of furniture,"Partes de movéis, (peças de reposição)", Eucalyptus spp; Pinus taeda,Secondary Processor,,FSC Mix
FSC-C005776,W12 Indoor furnitureW12.4 Beds,Camas, Eucalyptus spp; Pinus taeda,Secondary Processor,,FSC Mix

Regarding "python - Scraping 2 differently formatted tables from a webpage - Beautiful Soup", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/51284741/
