
python - How to write the contents of a df to a csv file using multiprocessing in Python


I have a function that writes the contents of a df to a csv file.

def writeToCSV(outDf, defFile, toFile, retainFlag=True, delim='\t', quotechar='"'):
    # read the output schema: one header per line, first tab-separated field
    headers = []
    fid = open(defFile, 'r')
    for line in fid:
        headers.append(line.replace('\r', '').split('\n')[0].split('\t')[0])
    # keep only the columns of outDf that appear in the schema
    df = pd.DataFrame([], columns=headers)
    for header in outDf.columns.values:
        if header in headers:
            df[header] = outDf[header]

    df.to_csv(toFile, sep=delim, quotechar=quotechar, index=False, encoding='utf-8')
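
For reference, writeToCSV expects defFile to list the output schema, one header per line in the first tab-separated field. A minimal usage sketch with hypothetical names (schema.def and out.tsv are illustrations, not names from the question):

import pandas as pd

# schema.def is assumed to contain lines like "id\t..." and "name\t...",
# so only the id and name columns survive the filtering step above.
outDf = pd.DataFrame({'id': [1, 2], 'name': ['a', 'b'], 'extra': [0, 0]})
writeToCSV(outDf, 'schema.def', 'out.tsv')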

How can I parallelize this process? Currently I am using the following code:

def writeToSchemaParallel(outDf, defFile, toFile, retainFlag=True, delim='\t', quotechar='"'):
    logInfo('Start writingtoSchema in parallel...', 'track')
    # read the output schema and filter the columns, as in writeToCSV
    headers = []
    fid = open(defFile, 'r')
    for line in fid:
        headers.append(line.replace('\r', '').split('\n')[0].split('\t')[0])
    df = pd.DataFrame([], columns=headers)
    for header in outDf.columns.values:
        if header in headers:
            df[header] = outDf[header]
    out_Names = Queue()
    cores = min([int(multiprocessing.cpu_count() / 2), int(len(outDf) / 200000) + 1])
    # cores = 4
    logInfo(str(cores) + ' cores are used...', 'track')
    # split the data for parallel computation
    outDf = splitDf(df, cores)
    # process the chunks in parallel
    logInfo('splitDf called and df divided...', 'track')
    Filenames = []
    procs = []
    fname = toFile.split("_Opera_output")[0]
    for i in range(0, cores):
        filename = fname + "_" + str(i) + ".tsv"
        proc = Process(target=writeToSchema, args=(outDf[i], defFile, filename, retainFlag, delim, quotechar, i))
        procs.append(proc)
        proc.start()
        print 'processing ' + str(i)
        Filenames.append(filename)
    # combine all returned chunks
    # outDf = out_Names.get()
    # for i in range(1, cores):
    #     outDf = outDf.append(out_q.get(), ignore_index=True)
    for proc in procs:
        proc.join()
    logInfo('Now we merge files...', 'track')
    print Filenames
    # concatenate the per-process chunk files into the final output file
    with open(toFile, 'w') as outfile:
        for fname in Filenames:
            with open(fname) as infile:
                for line in infile:
                    outfile.write(line)

But it does not work and gives the following error:

2017-12-17 16:02:55,078 - track - ERROR: Traceback (most recent call last):
2017-12-17 16:02:55,078 - track - ERROR: File "C:/Users/sudhir.tiwari/Documents/AMEX2/Workspace/Backup/Trunk/code/runMapping.py", line 257, in <module>
2017-12-17 16:02:55,089 - track - ERROR: writeToSchemaParallel(outDf, defFile, toFile, retainFlag, delim='\t', quotechar='"')
2017-12-17 16:02:55,153 - track - ERROR: File "C:\Users\sudhir.tiwari\Documents\AMEX2\Workspace\Backup\Trunk\code\utils.py", line 510, in writeToSchemaParallel
2017-12-17 16:02:55,163 - track - ERROR: with open(fname) as infile:
2017-12-17 16:02:55,198 - track - ERROR: IOError: [Errno 2] No such file or directory: 'C:/Users/sudhir.tiwari/Documents/AMEX2/Workspace/Input/work/Schindler_20171130/Schindler_20171130_0.tsv'

Also, nothing is being written to the chunk files: when I search the output location, the files are not found. I am using multiprocessing to write the dataframe to multiple files and then merge all the files. splitDf divides the dataframe into n parts.
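
splitDf is not shown in the question; for reference, a minimal sketch of what it presumably does, assuming it simply returns a list of n roughly equal chunks (np.array_split is one common way to write it):

import numpy as np

def splitDf(df, n):
    # np.array_split copes with lengths that are not an exact multiple of n
    return np.array_split(df, n)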

Best Answer

Using multiprocessing will take more time than the default way (saving directly). By using synchronization between processes, a multiprocessing Lock lets write processes append to the same file without interleaving. Below is a POC example.

import pandas as pd
import numpy as np
from multiprocessing import Lock, Process
from time import time

def writefile(df, l):
    l.acquire()
    df.to_csv('dataframe-multiprocessing.csv', index=False, mode='a', header=False)
    l.release()


if __name__ == '__main__':
    a = np.random.randint(1, 1000, 10000000)
    b = np.random.randint(1, 1000, 10000000)
    c = np.random.randint(1, 1000, 10000000)

    df = pd.DataFrame(data={'a': a, 'b': b, 'c': c})

    print('Iterative way:')
    print()
    new = time()
    df.to_csv('dataframe-conventional.csv', index=False, mode='a', header=False)
    print(time() - new, 'seconds')

    print()
    print('Multiprocessing way:')
    print()
    new = time()
    l = Lock()
    p = Process(target=writefile, args=(df, l))
    p.start()
    p.join()
    print(time() - new, 'seconds')
    print()

    df1 = pd.read_csv('dataframe-conventional.csv')
    df2 = pd.read_csv('dataframe-multiprocessing.csv')
    print('If both file same or not:')
    print(df1.equals(df2))

Result:

C:\Users\Ariff\Documents\GitHub\testing-code>python pandas_multip.py
Iterative way:

18.323541402816772 seconds

Multiprocessing way:

20.14128303527832 seconds

If both file same or not:
True
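
The POC above times a single writer process; a minimal sketch of how the same Lock pattern could extend to several processes, each appending its own chunk (write_chunk and the output file name are illustrative, not from the answer):

import numpy as np
import pandas as pd
from multiprocessing import Lock, Process

def write_chunk(chunk, lock, path):
    # serialize appends so rows from different chunks are not interleaved
    lock.acquire()
    try:
        chunk.to_csv(path, index=False, mode='a', header=False)
    finally:
        lock.release()

if __name__ == '__main__':
    df = pd.DataFrame(np.random.randint(1, 1000, (1000000, 3)), columns=list('abc'))
    lock = Lock()
    procs = []
    for chunk in np.array_split(df, 4):
        p = Process(target=write_chunk, args=(chunk, lock, 'dataframe-chunks.csv'))
        p.start()
        procs.append(p)
    for p in procs:
        p.join()

Note that with mode='a' the chunk order in the output depends on process scheduling, so this only suits cases where row order does not matter (or where, as in the question, each process writes its own file and the files are merged in order afterwards).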

Regarding "python - How to write the contents of a df to a csv file using multiprocessing in Python", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/47859537/
