
python - How to clean a large malformed CSV file with Python

Reposted · Author: 太空宇宙 · Updated: 2023-11-04 05:54:40

I am trying to clean up a malformed CSV file using Python 2.7.5. The CSV file is fairly large (over 1 GB). The first row of the file correctly lists the column headings, but after that each field is on its own line (unless it is blank), and some fields span multiple lines. The multi-line fields are not wrapped in quotes, but they need to be quoted in the output. The number of columns is static and known. The pattern shown in the sample input repeats for the entire length of the file.

Input file (sample):

Hostname,Username,IP Addresses,Timestamp,Test1,Test2,Test3
my_hostname
,my_username
,10.0.0.1
192.168.1.1
,2015-02-11 13:41:54 -0600
,,true
,false
my_2nd_hostname
,my_2nd_username
,10.0.0.2
192.168.1.2
,2015-02-11 14:04:41 -0600
,true
,,false

Desired output:

Hostname,Username,IP Addresses,Timestamp,Test1,Test2,Test3
my_hostname,my_username,"10.0.0.1 192.168.1.1",2015-02-11 13:41:54 -0600,,true,false
my_2nd_hostname,my_2nd_username,"10.0.0.2 192.168.1.2",2015-02-11 14:04:41 -0600,true,,false

I have gone down several paths, each of which solved one part of the problem only to turn out unable to handle another aspect of the malformed data. I would appreciate it if anyone could help me identify an efficient way to clean up this file.

Thanks

EDIT

I have code snippets from several different approaches; here is the current iteration. It is not pretty, just a pile of hacks trying to attack this problem.

import csv

inputfile = open('input.csv', 'r')
outputfile_1 = open('output.csv', 'w')

counter = 1
for line in inputfile:
    # Skip header row
    if counter == 1:
        outputfile_1.write(line)
        counter = counter + 1
    else:
        line = line.replace('\r', '').replace('\n', '')
        outputfile_1.write(line)

inputfile.close()
outputfile_1.close()

with open('output.csv', 'r') as f:
    text = f.read()

comma_count = text.count(',')  # comma_count/6 = total number of rows

# Need to insert a newline after the field contents after every 6th comma.
# Unfortunately the last field of a row and the first field of the next row
# are now rammed up together because of the newline replaces above...
# Then process as normal CSV.

# One path I started to go down... but this isn't even functional
groups = text.split(',')

counter2 = 1
while (counter2 <= comma_count / 6):
    line = ','.join(groups[:(6 * counter2)]), ','.join(groups[(6 * counter2):])
    print line
    counter2 = counter2 + 1

EDIT 2

Thanks to @DSM and @Ryan Vincent for getting me on the right track. Using their ideas I wrote the following code, which seems to correct my malformed CSV. I am sure there is still plenty of room for improvement, though, and I would happily accept suggestions.

import csv
import re

outputfile_1 = open('output.csv', 'wb')
wr = csv.writer(outputfile_1, quoting=csv.QUOTE_ALL)

with open('input.csv', 'r') as f:
    text = f.read()
comma_indices = [m.start() for m in re.finditer(',', text)]  # Find all the commas - the fields are between them

cursor = 0
field_counter = 1
row_count = 0
csv_row = []

for index in comma_indices:
    newrowflag = False

    if "\r" in text[cursor:index]:
        # This chunk has two fields, the last of one row and first of the next
        next_field = text[cursor:index].split('\r')
        next_field_trimmed = next_field[0].replace('\n', ' ').strip()
        csv_row.extend([next_field_trimmed])  # Add the last field of this row

        # Reset the cursor to be in the middle of the chunk (after the last field and before the next)
        # And set a flag that we need to start the next csv_row before we move on to the next comma index
        cursor = cursor + text[cursor:index].index('\r') + 1
        newrowflag = True
    else:
        next_field_trimmed = text[cursor:index].replace('\n', ' ').strip()
        csv_row.extend([next_field_trimmed])

    # Advance the cursor to the character after the comma to start the next field
    cursor = index + 1

    # If we've done 7 fields then we've finished the row
    if field_counter % 7 == 0:
        row_count = row_count + 1
        wr.writerow(csv_row)

        # Reset
        csv_row = []

        # If the last chunk had 2 fields in it...
        if newrowflag:
            next_field_trimmed = next_field[1].replace('\n', ' ').strip()
            csv_row.extend([next_field_trimmed])
            field_counter = field_counter + 1

    field_counter = field_counter + 1

# Write the last row
wr.writerow(csv_row)

outputfile_1.close()

# Process output.csv as normal CSV file...

Best Answer

These are comments on how I would approach this problem.

For each row:

I can easily identify the start and end of certain groups:

  • Hostname - there is only one
  • Usernames - read these until you meet something that does not look like a username (comma delimited)
  • IP addresses - read these until you meet a timestamp, identified with a pattern match; note that these are separated by spaces rather than commas, and the end of the group is marked by the trailing comma
  • Timestamp - easy to identify with a pattern match
  • test1, test2, test3 - certain to be present as comma-delimited fields

Note: I would use the "pattern" matches to make sure I had the right thing in the right place. It lets errors be spotted sooner.
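As a rough sketch of that pattern matching, the recognizers might look like the following. The exact regular expressions are assumptions inferred from the sample data, not part of the answer:

```python
import re

# Assumed patterns, inferred from the sample rows in the question
TIMESTAMP_RE = re.compile(r'^\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2} [+-]\d{4}$')
IPV4_RE = re.compile(r'^\d{1,3}(?:\.\d{1,3}){3}$')
TEST_RE = re.compile(r'^(?:true|false)?$')  # the test fields may be blank

def classify(field):
    """Guess which column group a bare field value belongs to."""
    if TIMESTAMP_RE.match(field):
        return 'timestamp'
    if IPV4_RE.match(field):
        return 'ip'
    if TEST_RE.match(field):
        return 'test'
    return 'name'  # hostname or username
```

Position in the row still matters, of course: a username that happened to read "true" would need the surrounding context to disambiguate, which is exactly why the answer checks patterns group by group rather than field by field in isolation.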

Regarding python - How to clean a large malformed CSV file with Python, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/28467606/
