
python - How can I manipulate large files faster in Python?


I have to loop through 30 GB files (there are 30 of them), and processing 500 MB takes about 15 minutes. Given that I am looping through them line by line, how can I optimize performance?

Python

import json
import os

def file_subreddit_comments(rfname, wfname):
    # Scan one comment dump line by line and keep only the comments whose
    # link_id points at one of the submissions of interest.
    with open(rfname, 'r', encoding="utf8") as rf:
        with open(wfname, 'w', encoding="utf-8") as wf:
            for i, l in enumerate(rf):
                d = json.loads(l)
                link_id = d["link_id"]
                for lsi in list_submission_id:
                    constructed_link_id = "t3_" + lsi
                    if link_id == constructed_link_id:
                        wf.write(l)

defaultFilePath = r'D:\Users\Jonathan\Desktop\Reddit Data\Run Comments\\'
directory = os.fsencode(defaultFilePath)

# Load the ~1000 submission ids to match against.
list_submission_id = []
submission_id_file = r'D:\Users\Jonathan\Desktop\Reddit Data\Manipulated Data-09-03-19-Final\UniqueIDSubmissionsList-09-03-2019.txt'
with open(submission_id_file, "r", encoding="utf8") as sif:
    for i, l in enumerate(sif):
        list_submission_id.append(l.rstrip())

# Filter every comment file in the directory and write the extracted
# comments to a sibling *_ext_com.txt file.
for file in os.listdir(directory):
    filename = os.fsdecode(file)
    comment_path_read = defaultFilePath + filename
    comment_path_save = defaultFilePath + filename + "_ext_com.txt"
    file_subreddit_comments(comment_path_read, comment_path_save)
    print(filename)

submission_id_file holds a list of roughly 1000 keywords, and for every comment line the value of constructed_link_id has to be checked against each keyword in that list.
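For reference, that check is just a membership test; the following is a minimal, illustrative sketch of the same filter with the ~1000 prefixed ids precomputed as a set (this is not part of the original question, only an illustration of the lookup it describes):

Python

# Illustrative sketch only: build the "t3_"-prefixed ids once so each
# comment line costs a single set lookup instead of a scan over the list.
constructed_ids = {"t3_" + lsi for lsi in list_submission_id}

def file_subreddit_comments(rfname, wfname):
    with open(rfname, 'r', encoding="utf8") as rf:
        with open(wfname, 'w', encoding="utf-8") as wf:
            for l in rf:
                if json.loads(l)["link_id"] in constructed_ids:
                    wf.write(l)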

Best answer

Multithreading and multiprocessing, as Thom suggested above, may well be the solution. In any case, it cut down the time the task takes for me: 12 cores = 12 files processed at the same time.
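As a rough illustration of that idea with the standard library: a minimal sketch, assuming the serial for-loop at the bottom of the question's script is replaced by a process pool, and that file_subreddit_comments, defaultFilePath, directory and list_submission_id are defined at module level as in the question. The pool size of 12 matches the 12 cores mentioned; this is not the answerer's exact code.

Python

# Hypothetical sketch: run file_subreddit_comments on several files at once
# using a process pool from the standard multiprocessing module.
from multiprocessing import Pool
import os

if __name__ == "__main__":
    # Build (input path, output path) pairs for every comment file.
    tasks = []
    for file in os.listdir(directory):
        filename = os.fsdecode(file)
        tasks.append((defaultFilePath + filename,
                      defaultFilePath + filename + "_ext_com.txt"))

    # 12 worker processes -> up to 12 files being filtered at the same time.
    with Pool(processes=12) as pool:
        pool.starmap(file_subreddit_comments, tasks)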

Regarding "python - How can I manipulate large files faster in Python?", a similar question can be found on Stack Overflow: https://stackoverflow.com/questions/55088792/
