
python - Reading a large CSV file from the nth line in Python (not from the beginning)


I have 3 huge CSV files containing climate data, each about 5GB. The first cell in every row is the weather station's number (from 0 to about 100,000), and each station has between 1 and 800 rows in each file, not necessarily the same count in every file. For example, station 11 has 600, 500 and 200 rows in file1, file2 and file3 respectively. I want to read all the rows for each station, do some operations on them, write the results to another file, then move on to the next station, and so on. The files are too large to load into memory at once, so I tried some solutions for reading them with a minimal memory footprint, such as this post and this post, which include this method:

with open(...) as f:
    for line in f:
        <do something with line>

The problem with this approach is that it reads the file from the beginning every time, whereas I want to read the files like this:

for station in range(100798):
    with open(file1) as f1, open(file2) as f2, open(file3) as f3:
        for line in f1:
            st = line.split(",")[0]
            if st == station:
                <store this line for some analysis>
            else:
                break  # break the for loop and go to read the next file
        for line in f2:
            ...
            <similar code to f1>
            ...
        for line in f3:
            ...
            <similar code to f1>
            ...
        <do the analysis for this station, then go to the next station>

The problem is that every time I move on to the next station, the for loop starts again from the beginning of the file, whereas I want it to resume from line n, where the "break" happened, i.e. to continue reading the file from there.

How can I do that?

Thanks in advance
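
The behaviour needed here is, in fact, how Python file objects already work: a file object is its own iterator, so as long as the file stays open, breaking out of "for line in f" and then looping over the same handle again continues from where the previous loop stopped, not from the top of the file. The only catch is that the row which triggered the break has already been consumed, which is exactly what the push-back buffer in the accepted answer below takes care of. Here is a minimal sketch of that behaviour, assuming a hypothetical comma-separated file stations.csv whose rows are grouped by station ID:

# Minimal sketch; 'stations.csv' is a made-up file whose rows look like
#   0,temp,humidity,...
#   0,temp,humidity,...
#   1,temp,humidity,...
with open('stations.csv') as f:
    station0 = []
    for line in f:
        if line.split(',')[0] != '0':
            leftover = line   # first row of the next station -- already consumed
            break             # f keeps its position; it is not rewound
        station0.append(line)

    # Looping over the same handle again resumes after `leftover`,
    # not at the top of the file.
    station1 = [leftover]
    for line in f:
        if line.split(',')[0] != '1':
            break
        station1.append(line)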

A note on the solutions below: as I mentioned when I posted my answer below, I implemented @DerFaizio's answer but found its processing speed to be very slow.

After trying the generator-based answer submitted by @PM_2Ring, I found it very, very fast, presumably because it relies on generators.

The difference between the two solutions can be seen in the number of stations processed per minute: about 2,500 st/min for the generator-based solution versus about 45 st/min for the Pandas-based one, which makes the generator-based solution more than 55x faster.

I will keep both implementations below for reference. Many thanks to all contributors, especially @PM_2Ring.

Best Answer

The code below iterates over the files line by line, fetching the lines for each station from each file in turn and appending them to a list for further processing.

The heart of this code is the file_buff generator, which yields the lines of a file but lets us push a line back to be read later. When we read a line that belongs to the next station, we can send it back to file_buff so that we can re-read it when we come to process that station's lines.

To test this code, I created some simple fake station data using create_data.

from random import seed, randrange

seed(123)

station_hi = 5
def create_data():
    ''' Fill 3 files with fake station data '''
    fbase = 'datafile_'
    for fnum in range(1, 4):
        with open(fbase + str(fnum), 'w') as f:
            for snum in range(station_hi):
                for i in range(randrange(1, 4)):
                    s = '{1} data{0}{1}{2}'.format(fnum, snum, i)
                    print(s)
                    f.write(s + '\n')
        print()

create_data()

# A file buffer that you can push lines back to
def file_buff(fh):
    prev = None
    while True:
        while prev:
            yield prev
            prev = yield prev
        try:
            line = next(fh)
        except StopIteration:
            # File exhausted: finish the generator cleanly
            # (on Python 3.7+, a StopIteration escaping a generator is an error)
            return
        prev = yield line

# An infinite counter that yields numbers converted to strings
def str_count(start=0):
    n = start
    while True:
        yield str(n)
        n += 1

# Extract station data from all 3 files
with open('datafile_1') as f1, open('datafile_2') as f2, open('datafile_3') as f3:
    fb1, fb2, fb3 = file_buff(f1), file_buff(f2), file_buff(f3)

    for snum_str in str_count():
        station_lines = []
        for fb in (fb1, fb2, fb3):
            for line in fb:
                # Extract station number string & station data
                sid, sdata = line.split()
                if sid != snum_str:
                    # This line contains data for the next station,
                    # so push it back to the buffer
                    rc = fb.send(line)
                    # and go to the next file
                    break
                # Otherwise, append this data
                station_lines.append(sdata)

        # Process all the data lines for this station
        if not station_lines:
            # There's no more data to process
            break
        print('Station', snum_str)
        print(station_lines)

Output

0 data100
1 data110
1 data111
2 data120
3 data130
3 data131
4 data140
4 data141

0 data200
1 data210
2 data220
2 data221
3 data230
3 data231
3 data232
4 data240
4 data241
4 data242

0 data300
0 data301
1 data310
1 data311
2 data320
3 data330
4 data340

Station 0
['data100', 'data200', 'data300', 'data301']
Station 1
['data110', 'data111', 'data210', 'data310', 'data311']
Station 2
['data120', 'data220', 'data221', 'data320']
Station 3
['data130', 'data131', 'data230', 'data231', 'data232', 'data330']
Station 4
['data140', 'data141', 'data240', 'data241', 'data242', 'data340']

This code can cope if station data for a particular station is missing from one or two of the files, but not if it is missing from all three, since it breaks out of the main processing loop when the station_lines list is empty; that shouldn't be a problem for your data, though.


For details on generators and the generator.send method, please see 6.2.9. Yield expressions in the docs.
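
As a self-contained illustration of the push-back mechanism, the snippet below drives file_buff by hand on made-up two-line data held in an io.StringIO object instead of a real station file:

import io

# file_buff as defined in the answer above
def file_buff(fh):
    prev = None
    while True:
        while prev:
            yield prev
            prev = yield prev
        try:
            line = next(fh)
        except StopIteration:
            return          # file exhausted
        prev = yield line

fb = file_buff(io.StringIO('0 data_a\n1 data_b\n'))

first = next(fb)    # '0 data_a\n'  -- read the first line
fb.send(first)      # push it back onto the buffer
again = next(fb)    # '0 data_a\n'  -- the pushed-back line comes out again
second = next(fb)   # '1 data_b\n'  -- reading then continues where it left off
print(repr(again), repr(second))

Calling send hands the line back to the generator, which stores it in prev and re-yields it on the next iteration, so nothing is lost when we peek ahead at a row that belongs to the next station.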

This code was developed using Python 3, but it will also run on Python 2.6+ (you just need to include from __future__ import print_function at the top of the script).


If station IDs may be missing from all 3 files, that is easy to handle as well: just use a plain range loop instead of the infinite str_count generator.

from random import seed, randrange

seed(123)

station_hi = 7
def create_data():
    ''' Fill 3 files with fake station data '''
    fbase = 'datafile_'
    for fnum in range(1, 4):
        with open(fbase + str(fnum), 'w') as f:
            for snum in range(station_hi):
                for i in range(randrange(0, 2)):
                    s = '{1} data{0}{1}{2}'.format(fnum, snum, i)
                    print(s)
                    f.write(s + '\n')
        print()

create_data()

# A file buffer that you can push lines back to
def file_buff(fh):
    prev = None
    while True:
        while prev:
            yield prev
            prev = yield prev
        try:
            line = next(fh)
        except StopIteration:
            # File exhausted: finish the generator cleanly
            # (on Python 3.7+, a StopIteration escaping a generator is an error)
            return
        prev = yield line

station_start = 0
station_stop = station_hi

# Extract station data from all 3 files
with open('datafile_1') as f1, open('datafile_2') as f2, open('datafile_3') as f3:
    fb1, fb2, fb3 = file_buff(f1), file_buff(f2), file_buff(f3)

    for i in range(station_start, station_stop):
        snum_str = str(i)
        station_lines = []
        for fb in (fb1, fb2, fb3):
            for line in fb:
                # Extract station number string & station data
                sid, sdata = line.split()
                if sid != snum_str:
                    # This line contains data for the next station,
                    # so push it back to the buffer
                    rc = fb.send(line)
                    # and go to the next file
                    break
                # Otherwise, append this data
                station_lines.append(sdata)

        if not station_lines:
            continue
        print('Station', snum_str)
        print(station_lines)

Output

1 data110
3 data130
4 data140

0 data200
1 data210
2 data220
6 data260

0 data300
4 data340
6 data360

Station 0
['data200', 'data300']
Station 1
['data110', 'data210']
Station 2
['data220']
Station 3
['data130']
Station 4
['data140', 'data340']
Station 6
['data260', 'data360']

Regarding python - Reading a large CSV file from the nth line in Python (not from the beginning), we found a similar question on Stack Overflow: https://stackoverflow.com/questions/42063281/
