
perl - Why does Perl's Archive::Tar run out of memory?

Reposted. Author: 行者123. Updated: 2023-12-04 17:42:00

I am using the Perl code below to list the files in a tar archive. The archive is always around 15 MB in size.

use strict;
use warnings;
use Archive::Tar;

my $file = shift;
my $tar  = Archive::Tar->new($file) or die Archive::Tar->error;  # reads the whole archive into memory
my @lists = $tar->list_files;
die $tar->error unless @lists;

Running this code gives me an "Out of memory" error. My Linux system has about 512 MB of RAM, and I do not want to add more memory to it. Can anyone suggest how to modify this code for better performance, or suggest other code to list the files in a tar archive?

Best Answer

From the Archive::Tar FAQ:

Isn't Archive::Tar slow? Yes it is. It's pure perl, so it's a lot slower than your /bin/tar. However, it's very portable. If speed is an issue, consider using /bin/tar instead.

Isn't Archive::Tar heavier on memory than /bin/tar? Yes it is, see previous answer. Since Compress::Zlib and therefore IO::Zlib don't support seek on their filehandles, there is little choice but to read the archive into memory. This is ok if you want to do in-memory manipulation of the archive.

If you just want to extract, use the extract_archive class method instead. It will optimize and write to disk immediately.

Another option is to use the iter class method to iterate over the files in the tarball without reading them all in memory at once.



So based on the above, this should be the solution (untested):
use Archive::Tar;
use feature 'say';

# iter() returns a closure that reads one entry at a time,
# so the whole archive is never held in memory at once.
my $next = Archive::Tar->iter( $file );

while ( my $f = $next->() ) {
    say $f->name;
}
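As the FAQ suggests, another low-memory option is to shell out to /bin/tar itself, which streams the archive rather than loading it. A minimal sketch (assuming a POSIX tar on PATH; the temporary demo archive is created only for illustration):

```shell
# Build a tiny demo archive, then list its members with /bin/tar.
# "tar -tf" streams the archive, so memory use stays flat
# no matter how large the tarball is.
tmpdir=$(mktemp -d)
echo hello > "$tmpdir/a.txt"
echo world > "$tmpdir/b.txt"
tar -cf "$tmpdir/demo.tar" -C "$tmpdir" a.txt b.txt
tar -tf "$tmpdir/demo.tar"
```

From Perl, the same listing could be read line by line via a piped open of `tar -tf`, keeping only one filename in memory at a time.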

/I3az/

Regarding "perl - Why does Perl's Archive::Tar run out of memory?", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/1706565/
