
python - Dask memory error when running df.to_csv()


I am trying to index and save a large csv that cannot be loaded into memory. The code that loads the csv, performs a computation, and indexes by the new value works fine. A simplified version is:

import os
import dask.dataframe as dd
from dask.distributed import Client, LocalCluster

cluster = LocalCluster(n_workers=6, threads_per_worker=1)
client = Client(cluster, memory_limit='1GB')

df = dd.read_csv(filepath, header=None, sep=' ', blocksize=25e7)
df['new_col'] = df.map_partitions(lambda x: some_function(x))
df = df.set_index(df.new_col, sorted=False)

However, when I use a large file (i.e. > 15 GB), I run into a memory error when saving the dataframe to csv:

df.to_csv(os.path.join(save_dir, filename+'_*.csv'), index=False, chunksize=1000000)

I tried setting chunksize=1000000 to see if that would help, but it had no effect.

The full traceback is:

Traceback (most recent call last):
  File "/home/david/data/pointframes/examples/dask_z-order.py", line 44, in <module>
    calc_zorder(fp, save_dir)
  File "/home/david/data/pointframes/examples/dask_z-order.py", line 31, in calc_zorder
    df.to_csv(os.path.join(save_dir, filename+'_*.csv'), index=False, chunksize=1000000)
  File "/usr/local/lib/python2.7/dist-packages/dask/dataframe/core.py", line 1159, in to_csv
    return to_csv(self, filename, **kwargs)
  File "/usr/local/lib/python2.7/dist-packages/dask/dataframe/io/csv.py", line 654, in to_csv
    delayed(values).compute(scheduler=scheduler)
  File "/usr/local/lib/python2.7/dist-packages/dask/base.py", line 156, in compute
    (result,) = compute(self, traverse=False, **kwargs)
  File "/usr/local/lib/python2.7/dist-packages/dask/base.py", line 398, in compute
    results = schedule(dsk, keys, **kwargs)
  File "/usr/local/lib/python2.7/dist-packages/dask/threaded.py", line 76, in get
    pack_exception=pack_exception, **kwargs)
  File "/usr/local/lib/python2.7/dist-packages/dask/local.py", line 459, in get_async
    raise_exception(exc, tb)
  File "/usr/local/lib/python2.7/dist-packages/dask/local.py", line 230, in execute_task
    result = _execute_task(task, data)
  File "/usr/local/lib/python2.7/dist-packages/dask/core.py", line 118, in _execute_task
    args2 = [_execute_task(a, cache) for a in args]
  File "/usr/local/lib/python2.7/dist-packages/dask/core.py", line 119, in _execute_task
    return func(*args2)
  File "/usr/local/lib/python2.7/dist-packages/dask/dataframe/shuffle.py", line 426, in collect
    res = p.get(part)
  File "/usr/local/lib/python2.7/dist-packages/partd/core.py", line 73, in get
    return self.get([keys], **kwargs)[0]
  File "/usr/local/lib/python2.7/dist-packages/partd/core.py", line 79, in get
    return self._get(keys, **kwargs)
  File "/usr/local/lib/python2.7/dist-packages/partd/encode.py", line 30, in _get
    for chunk in raw]
  File "/usr/local/lib/python2.7/dist-packages/partd/pandas.py", line 175, in deserialize
    for (h, b) in zip(headers[2:], bytes[2:])]
  File "/usr/local/lib/python2.7/dist-packages/partd/pandas.py", line 136, in block_from_header_bytes
    copy=True).reshape(shape)
  File "/usr/local/lib/python2.7/dist-packages/partd/numpy.py", line 126, in deserialize
    result = result.copy()
MemoryError

I am running dask v1.1.0 with Python 2.7 on an Ubuntu 18.04 system. My machine has 32 GB of RAM. The code works as expected for small files that fit in memory anyway, but not for larger files. Am I missing something here?

Best Answer

I encourage you to try smaller blocks of data. You should control this in the read_csv part of the computation, not the to_csv part.
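A minimal sketch of that suggestion (reusing the placeholder names filepath, save_dir, filename and some_function from the question; the specific blocksize value below is only an illustrative smaller setting, not a tuned recommendation):

import os
import dask.dataframe as dd
from dask.distributed import Client, LocalCluster

# Scheduler setup as in the question.
cluster = LocalCluster(n_workers=6, threads_per_worker=1)
client = Client(cluster, memory_limit='1GB')

# A smaller blocksize means smaller partitions, so each task (including the
# shuffle triggered by set_index and the final to_csv) holds less data in memory.
df = dd.read_csv(filepath, header=None, sep=' ', blocksize=25e6)  # ~25 MB per partition instead of ~250 MB
df['new_col'] = df.map_partitions(lambda x: some_function(x))
df = df.set_index(df.new_col, sorted=False)
df.to_csv(os.path.join(save_dir, filename + '_*.csv'), index=False)

The trade-off is more (and smaller) partitions, hence more tasks and more output files, but each one stays well below the per-worker memory limit.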

Regarding python - Dask memory error when running df.to_csv(), we found a similar question on Stack Overflow: https://stackoverflow.com/questions/54459056/
