
python - Can't stratify output in a customized manner


I've created a script to parse a few data points from an html file link and write them to a csv file according to this format.

I did locate the fields using the selectors I've already defined within the script, but I can't stratify the output in the right way so that I can write it to a csv file later.

Location of the data points:

Nature of association

`from 1st table`
Purpose
In cash (Previous balance)

`from 2nd table`
Donor Name
Address

`from 3rd table`
Country Name
Amount

This is what I've tried (I suppose the html file link is a valid one):

import requests
from bs4 import BeautifulSoup

file_link = 'https://filebin.redpill-linpro.com/zj2qqc27va5fatm0/index.html'

res = requests.get(file_link)
soup = BeautifulSoup(res.text, "lxml")
nature_of_asso = soup.select_one("td:contains('Nature of association') + td").get_text(strip=True)

for purpose_tr in soup.select("table:has(> tr > td:nth-of-type(1) + td:contains('Purpose')) tr")[3:]:
    try:
        purpose = purpose_tr.select_one('td:nth-of-type(2)').get_text(strip=True)
    except AttributeError:
        purpose = ""
    try:
        in_cash = purpose_tr.select_one('td:nth-of-type(3)').get_text(strip=True)
    except AttributeError:
        in_cash = ""
    print(purpose, in_cash)

for donor_tr in soup.select("table:has(> tr > td:nth-of-type(1) + td:contains('Donor Name')) tr")[2:]:
    try:
        donor_name = donor_tr.select_one('td:nth-of-type(2)').get_text(strip=True)
    except AttributeError:
        donor_name = ""
    try:
        address = donor_tr.select_one('td:nth-of-type(3)').get_text(strip=True)
    except AttributeError:
        address = ""
    print(donor_name, address)

for country_tr in soup.select("table:has(> tr > td:nth-of-type(1) + td:contains('Country Name')) tr")[1:]:
    try:
        country = country_tr.select_one('td:nth-of-type(2)').get_text(strip=True)
    except AttributeError:
        country = ""
    try:
        amount = country_tr.select_one('td:nth-of-type(3)').get_text(strip=True)
    except AttributeError:
        amount = ""
    print(country, amount)

How can I arrange the output as per the image above in order to write it to a csv file?

Best Answer

You can use pandas to handle it all: clean up the tables, then left-join the other two tables onto the main DataFrame on Sl.No, which matches most rows.

import pandas as pd

# read every table on the page into a list of DataFrames
tables = pd.read_html('https://filebin.redpill-linpro.com/zj2qqc27va5fatm0/index.html')

# main table (Purpose / In cash): trim header and footer rows, keep first 3 columns
df = tables[4]
df = df.iloc[2:-1, :3]
df.columns = df.iloc[0, :]               # promote the first remaining row to headers
df.drop(labels=2, axis=0, inplace=True)  # then drop that header row from the data

# donor table (Donor Name / Address)
df_donor = tables[8]
df_donor = df_donor.iloc[:-2, :]
df_donor.columns = df_donor.iloc[0, :]
df_donor = df_donor.iloc[2:, :3]

# country table (Country Name / Amount)
df_country = tables[10]
df_country = df_country.iloc[:-1, :]
df_country.columns = df_country.iloc[0, :]
df_country = df_country.iloc[1:, :]

# normalise the join key, then left-join donor and country data onto the main table
df.rename(columns={'Sl.No.': 'Sl.No'}, inplace=True)
df = pd.merge(df, df_donor, on=df.columns[0], how='left')
df = pd.merge(df, df_country, on=df.columns[0], how='left')
df = df.iloc[:, 1:]                      # drop the join key column
df.insert(loc=0, column='Nature of association', value='')

# pull the single 'Nature of association' value from the association details table
df_association = tables[2]
association = df_association[df_association[0].str.contains('Nature of association')].iloc[:, 1].item()

df.iloc[0, 0] = association
print(df)

If you want to be more certain of targeting the right tables, bring in BeautifulSoup and use the :-soup-contains pseudo-class to locate them:

import pandas as pd
import requests
from bs4 import BeautifulSoup as bs

r = requests.get('https://filebin.redpill-linpro.com/zj2qqc27va5fatm0/index.html')
soup = bs(r.content, 'lxml')

# locate each table by a distinctive string it contains rather than by position
df = pd.read_html(str(soup.select_one('table:-soup-contains("Sl.No.")')))[0]
df_donor = pd.read_html(str(soup.select_one('table:-soup-contains("Donor Name")')))[0]
df_association = pd.read_html(str(soup.select_one('table:-soup-contains("Association details")')))[0]
df_country = pd.read_html(str(soup.select_one('table:-soup-contains("Country Name")')))[0]

# the clean-up and joins are the same as before
df = df.iloc[2:-1, :3]
df.columns = df.iloc[0, :]
df.drop(labels=2, axis=0, inplace=True)

df_donor = df_donor.iloc[:-2, :]
df_donor.columns = df_donor.iloc[0, :]
df_donor = df_donor.iloc[2:, :3]

df_country = df_country.iloc[:-1, :]
df_country.columns = df_country.iloc[0, :]
df_country = df_country.iloc[1:, :]

df.rename(columns={'Sl.No.': 'Sl.No'}, inplace=True)
df = pd.merge(df, df_donor, on=df.columns[0], how='left')
df = pd.merge(df, df_country, on=df.columns[0], how='left')
df = df.iloc[:, 1:]
df.insert(loc=0, column='Nature of association', value='')

association = df_association[df_association[0].str.contains('Nature of association')].iloc[:, 1].item()

df.iloc[0, 0] = association
print(df)

You can then handle the NaN values per column as required, and write the result to csv with the pandas.DataFrame.to_csv method.
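A minimal sketch of that last step, assuming df is the merged DataFrame from above and that blank cells are the desired treatment for NaN (the filename is just a placeholder):

# assuming df is the merged DataFrame built above
df = df.fillna('')                    # blank out NaN so the csv doesn't contain 'nan'
df.to_csv('output.csv', index=False)  # 'output.csv' is a placeholder filename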


You could of course do most of this with BeautifulSoup alone, but you would need to retrieve Sl.No to enable row matching of the output when joining (given that the current css selectors return differing numbers of results). A rough sketch of that idea follows.
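This sketch uses a hypothetical rows_keyed_by_slno helper and assumes each table's first cell holds the Sl.No value:

import requests
from bs4 import BeautifulSoup

res = requests.get('https://filebin.redpill-linpro.com/zj2qqc27va5fatm0/index.html')
soup = BeautifulSoup(res.text, 'lxml')

def rows_keyed_by_slno(table_selector, skip):
    """Hypothetical helper: map Sl.No -> remaining cell texts for one table."""
    table = soup.select_one(table_selector)
    keyed = {}
    for tr in table.select('tr')[skip:]:
        cells = [td.get_text(strip=True) for td in tr.select('td')]
        if cells and cells[0]:        # first cell assumed to be the Sl.No
            keyed[cells[0]] = cells[1:]
    return keyed

purposes = rows_keyed_by_slno('table:-soup-contains("Purpose")', 3)
donors = rows_keyed_by_slno('table:-soup-contains("Donor Name")', 2)

# emulate a left join on Sl.No: rows missing from the donor table get blanks
for slno, row in purposes.items():
    print(slno, row, donors.get(slno, []))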


It might be worth investigating whether dropping columns/rows is more or less efficient than subsetting.
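If you do want to investigate, a quick, illustrative comparison on synthetic data (not the page's real tables) could look like this:

import timeit
import numpy as np
import pandas as pd

df = pd.DataFrame(np.random.rand(1000, 10))

# dropping a column vs selecting the complement with iloc
t_drop = timeit.timeit(lambda: df.drop(columns=[0]), number=1000)
t_iloc = timeit.timeit(lambda: df.iloc[:, 1:], number=1000)
print(f'drop: {t_drop:.4f}s  iloc: {t_iloc:.4f}s')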

Regarding python - Can't stratify output in a customized manner, a similar question can be found on Stack Overflow: https://stackoverflow.com/questions/67856475/
