
node.js - Out of memory when trying to create a large CSV file


Using Meteor, on the server I am trying to generate a large CSV file by looping over a Meteor collection and writing one row per document. At some point the server hits an out-of-memory error - my guess is that, depending on the collection size, I run out of memory before the loop finishes. How can I fix this (somehow flush the memory)? The code is below:

    var job = Jobs.findOne();
    var fs = Npm.require('fs');
    var file = '/tmp/csv-' + job._id + '.csv';
    var headers = ["Email", "Processed?", "Integration", "Passed?", "Reason", "Date"];
    var first_line = headers.join() + '\n';
    var wstream = fs.createWriteStream(file);
    var emails = rawEmails.find();

    // header row first, then one row per document in the collection
    wstream.write(first_line);
    emails.forEach(function (rawemail) {
        var line_item = [];
        line_item.push(rawemail.email);
        if (rawemail.processed === true || rawemail.processed === false)
            line_item.push(rawemail.processed);
        if (rawemail.integration)
            line_item.push(rawemail.integration);
        if (rawemail.passed === true || rawemail.passed === false)
            line_item.push(rawemail.passed);
        if (rawemail.reason)
            line_item.push(rawemail.reason);
        if (rawemail.updated_at)
            line_item.push(rawemail.updated_at);
        var to_write = line_item.join() + '\n';
        wstream.write(to_write);
    });
    wstream.end();

Best answer

    var emails = rawEmails.find();

is not great. You need to limit and paginate the query, writing the records to the file in batches:

    var batchSize = 100;
    var skip = 0;
    var emails = rawEmails.find({}, {limit: batchSize, skip: skip}).fetch();

    while (emails.length > 0) {
        // write this batch of rows to the file, then fetch the next page
        skip += batchSize;
        emails = rawEmails.find({}, {limit: batchSize, skip: skip}).fetch();
    }

Note that if there are a lot of records, the Node process's writeStream will also consume a lot of memory, and you can run into the out-of-memory error again. Consider writing to several files, zipping them, and sending the archive back to the client (if the client wants to download it).
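If you do keep a single stream, one common way (not part of the original answer) to stop its internal buffer from growing without bound is to check the boolean returned by write() and wait for the 'drain' event before writing more; Node's built-in zlib module can then compress the finished file. A minimal sketch, with hypothetical helpers writeLine and gzipFile:

    var fs = Npm.require('fs');
    var zlib = Npm.require('zlib');

    // Hypothetical helper: write one CSV line and resolve only once the
    // stream can accept more data, honouring backpressure via 'drain'.
    function writeLine(stream, line) {
        return new Promise(function (resolve) {
            if (stream.write(line)) {
                resolve();                      // internal buffer not full yet
            } else {
                stream.once('drain', resolve);  // buffer full, wait for flush
            }
        });
    }

    // Hypothetical helper: gzip the finished CSV so the client downloads a
    // much smaller file, using only Node's built-in zlib module.
    function gzipFile(src, dest) {
        return new Promise(function (resolve, reject) {
            fs.createReadStream(src)
                .pipe(zlib.createGzip())
                .pipe(fs.createWriteStream(dest))
                .on('finish', resolve)
                .on('error', reject);
        });
    }

writeLine resolves immediately while the buffer has room, so rows cost only one extra Promise each; gzipFile streams the CSV through a gzip transform, so the whole file never has to sit in memory at once.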

Regarding "node.js - Out of memory when trying to create a large CSV file", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/43508384/
