
How do I remove whitespace from a CSV in Python using split with a delimiter?


I am writing a piece of code that converts an HTML table into a CSV file. I cannot figure out how to remove the whitespace between the pieces of information that get printed to the terminal after splitting the string. My best result so far still leaves large gaps between the printed fields, which makes the output hard to navigate. Any information would be appreciated.

import csv
from bs4 import BeautifulSoup
from termcolor import cprint

html = open("recallist.html").read()
soup = BeautifulSoup(html)
table = soup.find_all('div', {'id': 'PrintArea'})
output_rows = []
recals = 'recallist.csv'
cprint('READING TABLES', 'green')
for table_row in table:
    columns = table_row.findAll('td')
    output_row = []
    for column in columns:
        output_row.append(column.text)
    output_rows.append(output_row)
with open('recallist.csv', 'w', newline='') as csvfile:
    writer = csv.writer(csvfile)
    writer.writerows(output_rows)
with open(recals, 'r') as f:
    contents = f.read()
for item in contents.split("Date,Customer,Phone,Cell Phone,Removal,Notes"):
    for refine in item.split('",,'):
        print(refine)

A sample of the CSV is listed below:

Location,,,Date,Customer,Phone,Cell Phone,Removal,Notes,�,�,�,,04/29/19 | 03:00 PM,[9999] FIRST LAST,999-999-9999***,999-999-9999,,"
",,"
","

$127.92
",,04/29/19 | 03:30 PM,[123456] FIRST LAST,999-999-9999***,999-999-9999,04/13/2020,"
",,"
","

$0.02
",,04/29/19 | 04:00 PM,[123456] FIRST LAST,999-999-9999***,,09/10/2019,"
",,"
","

($212.10)
",,04/29/19 | 04:15 PM,[123456] FIRST LAST,999-999-9999***,,01/09/2020,"
",,"
","

$16.23
",,04/29/19 | 04:30 PM,[123456] FIRST LAST,999-999-9999***,,05/30/2019,"
",,"
","

$0.24
",,04/29/19 | 05:00 PM,[123456] FIRST LAST,999-999-9999***,,07/26/2019,"
",,"
","

($0.30)
",,04/29/19 | 07:00 PM,[123456] FIRST LAST,999-999-9999***,999-999-9999,11/15/2019,"
",,"
","

$0.06
",,04/29/19 | 07:30 PM,[123456] FIRST LAST,999-999-9999***,,12/12/2019,"
",,"
","

The format I am trying to achieve:

04/29/19 | 03:00 PM,[9999] FIRST LAST,999-999-9999***,999-999-9999,$127.92
04/29/19 | 03:30 PM,[99999] FIRST LAST,999-999-9999***,999-999-9999,$0.02
ETC.

A sample of the HTML (if needed):

<tbody><tr class="alt">
<td colspan="5" align="left" style="background-color:668cd9;">Location</td>
<td colspan="5" align="left" style="background-color:668cd9;"></td>
</tr>
<tr align="left" class="GrayBLOCK">
<td></td>
<td>Date</td>
<td>Customer</td>
<td>Phone</td>
<td>Cell Phone</td>
<td>Removal</td>
<td>Notes</td>
<td> </td>
<td> </td>
<td> </td>
</tr>

<tr class="alt">
<td></td>
<td>04/29/19 | 03:00 PM</td>
<td><a href="../code/c_newClient.cfm?theID=99999" target="_blank">[9999]</a> FIRST LAST</td>
<td>999-999-9999***</td>
<td>999-999-9999</td>
<td></td>
<td>

</td>
<td></td>
<td>
</td>
<td align="right" class="RedMED">

$127.92
</td>
</tr>

<tr>
<td></td>
<td>04/29/19 | 03:30 PM</td>
<td><a href="../code/c_newClient.cfm?theID=99999" target="_blank">[999999]</a> FIRST LAST</td>
<td>999-999-9999***</td>
<td>999-999-9999</td>
<td>04/13/2020</td>
<td>

</td>
<td></td>
<td>
</td>
<td align="right" class="RedMED">

$0.02
</td>
</tr>

<tr class="alt">
<td></td>
<td>04/29/19 | 04:00 PM</td>
<td><a href="../code/c_newClient.cfm?theID=99999" target="_blank">[999999]</a> FIRST LAST</td>
<td>999-999-9999***</td>
<td></td>
<td>09/10/2019</td>
<td>

</td>
<td></td>
<td>
</td>
<td align="right" class="RedMED">

($212.10)
</td>
</tr>

<tr>
<td></td>
<td>04/29/19 | 04:15 PM</td>
<td><a href="../code/c_newClient.cfm?theID=99999" target="_blank">[999999]</a> FIRST LAST</td>
<td>999-999-9999***</td>
<td></td>
<td>01/09/2020</td>
<td>

</td>
<td></td>
<td>
</td>
<td align="right" class="RedMED">

$16.23
</td>
</tr>

<tr class="alt">
<td></td>
<td>04/29/19 | 04:30 PM</td>
<td><a href="../code/c_newClient.cfm?theID=99999" target="_blank">[999999]</a> FIRST LAST</td>
<td>999-999-9999***</td>
<td></td>
<td>05/30/2019</td>
<td>

</td>
<td></td>
<td>
</td>
<td align="right" class="RedMED">

$0.24
</td>
</tr>

<tr>
<td></td>
<td>04/29/19 | 05:00 PM</td>
<td><a href="../code/c_newClient.cfm?theID=99999" target="_blank">[999999]</a> FIRST LAST</td>
<td>999-999-9999***</td>
<td></td>
<td>07/26/2019</td>
<td>

</td>
<td></td>
<td>
</td>
<td align="right" class="RedMED">

($0.30)
</td>
</tr>

<tr class="alt">
<td></td>
<td>04/29/19 | 07:00 PM</td>
<td><a href="../code/c_newClient.cfm?theID=99999" target="_blank">[999999]</a> FIRST LAST</td>
<td>999-999-9999***</td>
<td>999-999-9999</td>
<td>11/15/2019</td>
<td>

</td>
<td></td>
<td>
</td>
<td align="right" class="RedMED">

$0.06
</td>
</tr>

Best Answer

Update: I found a problem in my original post; here is a better version. The empty <td> tags create some extra columns. Version 1 keeps those columns; Version 2 removes them, but it is very specific to your given format: if the format changes, the slicing has to be adjusted.

Version 1

import csv
from bs4 import BeautifulSoup

with open("recallist.html") as f:
    soup = BeautifulSoup(f.read(), features="html.parser")

rows = soup.find_all('tr')
with open('recallist.csv', 'w', newline='') as csvfile:
    writer = csv.writer(csvfile)
    for row in rows:
        columns = row.find_all('td')
        writer.writerow([column.get_text(strip=True) for column in columns])
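
The gaps described in the question come from the newline padding inside each <td> (visible in the amount cells of the HTML sample); get_text(strip=True) trims that whitespace at extraction time, which is what makes Version 1 work. A minimal illustration with a hypothetical cell:

```python
from bs4 import BeautifulSoup

# A cell shaped like the amount cells in the sample HTML: the value is
# surrounded by newlines and indentation.
cell = BeautifulSoup("<td>\n\n            $127.92\n</td>", "html.parser").td

# .text keeps all of the surrounding whitespace ...
print(repr(cell.text))
# ... while get_text(strip=True) trims it down to the bare value.
print(repr(cell.get_text(strip=True)))  # '$127.92'
```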

Version 2

import csv
from bs4 import BeautifulSoup

with open("recallist.html") as f:
    soup = BeautifulSoup(f.read(), features="html.parser")

rows = soup.find_all('tr')
with open('recallist.csv', 'w', newline='') as csvfile:
    writer = csv.writer(csvfile)
    # alt: 'for row in rows[2:]:' to slice off the two header rows
    for row in rows:
        columns = row.find_all('td')
        del columns[0]
        del columns[-4:-1]
        writer.writerow([column.get_text(strip=True) for column in columns])
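
As an alternative to Version 2's positional del calls, the blank columns can also be dropped when post-processing the CSV with the csv module. This is only a sketch: the DROP indices are an assumption based on the exact 10-column layout in the CSV sample, and an in-memory file stands in for recallist.csv.

```python
import csv
import io

# Indices of the padding columns produced by the blank <td> cells, assuming
# the layout from the question: a leading blank cell, plus the Notes cell
# and the two blank cells that come before the amount.
DROP = {0, 6, 7, 8}

# One data row shaped like the rows in the CSV sample.
raw = io.StringIO(
    ",04/29/19 | 03:00 PM,[9999] FIRST LAST,999-999-9999***,"
    "999-999-9999,,,,,$127.92\n"
)
out = io.StringIO()
writer = csv.writer(out, lineterminator="\n")
for row in csv.reader(raw):
    writer.writerow([value for i, value in enumerate(row) if i not in DROP])

print(out.getvalue())
# 04/29/19 | 03:00 PM,[9999] FIRST LAST,999-999-9999***,999-999-9999,,$127.92
```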

If your real HTML actually has multiple tables with varying columns, this will need to be adjusted. Hope this helps!
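
When there are multiple tables, one way to restrict the extraction is to search only inside the container the question already targets (the div with id PrintArea) rather than the whole document. A hypothetical two-table page shows the difference:

```python
from bs4 import BeautifulSoup

# Hypothetical page with two tables; only the one inside PrintArea is wanted.
html = """
<div id="PrintArea"><table><tr><td>wanted</td></tr></table></div>
<div id="Other"><table><tr><td>unwanted</td></tr></table></div>
"""
soup = BeautifulSoup(html, features="html.parser")

# Scoping find_all('tr') to the container skips rows from the other table.
area = soup.find('div', id='PrintArea')
rows = area.find_all('tr')
cells = [td.get_text(strip=True) for tr in rows for td in tr.find_all('td')]
print(cells)  # ['wanted']
```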

Regarding removing whitespace from a CSV in Python using split with a delimiter, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/56049827/
