
python-3.x - OverflowError when saving a large Pandas df to hdf

Reposted · Author: 行者123 · Updated: 2023-12-04 04:14:31

I have a large Pandas dataframe (~15GB, 83m rows) that I would like to save as an h5 (or feather) file. One column contains long ID strings of digits, which should have string/object type. But even when I make sure that Pandas parses all columns as object:

df = pd.read_csv('data.csv', dtype=object)
print(df.dtypes) # sanity check
df.to_hdf('df.h5', 'df')

client_id          object
event_id           object
account_id         object
session_id         object
event_timestamp    object
# etc...

I get this error:
  File "foo.py", line 14, in <module>
df.to_hdf('df.h5', 'df')
File "/shared_directory/projects/env/lib/python3.6/site-packages/pandas/core/generic.py", line 1996, in to_hdf
return pytables.to_hdf(path_or_buf, key, self, **kwargs)
File "/shared_directory/projects/env/lib/python3.6/site-packages/pandas/io/pytables.py", line 279, in to_hdf
f(store)
File "/shared_directory/projects/env/lib/python3.6/site-packages/pandas/io/pytables.py", line 273, in <lambda>
f = lambda store: store.put(key, value, **kwargs)
File "/shared_directory/projects/env/lib/python3.6/site-packages/pandas/io/pytables.py", line 890, in put
self._write_to_group(key, value, append=append, **kwargs)
File "/shared_directory/projects/env/lib/python3.6/site-packages/pandas/io/pytables.py", line 1367, in _write_to_group
s.write(obj=value, append=append, complib=complib, **kwargs)
File "/shared_directory/projects/env/lib/python3.6/site-packages/pandas/io/pytables.py", line 2963, in write
self.write_array('block%d_values' % i, blk.values, items=blk_items)
File "/shared_directory/projects/env/lib/python3.6/site-packages/pandas/io/pytables.py", line 2730, in write_array
vlarr.append(value)
File "/shared_directory/projects/env/lib/python3.6/site-packages/tables/vlarray.py", line 547, in append
self._append(nparr, nobjects)
File "tables/hdf5extension.pyx", line 2032, in tables.hdf5extension.VLArray._append
OverflowError: value too large to convert to int

Clearly it is trying to convert the column to int anyway, and failing.

I ran into a similar problem when running df.to_feather():
df.to_feather('df.feather')
File "/shared_directory/projects/env/lib/python3.6/site-packages/pandas/core/frame.py", line 1892, in to_feather
to_feather(self, fname)
File "/shared_directory/projects/env/lib/python3.6/site-packages/pandas/io/feather_format.py", line 83, in to_feather
feather.write_dataframe(df, path)
File "/shared_directory/projects/env/lib/python3.6/site-packages/pyarrow/feather.py", line 182, in write_feather
writer.write(df)
File "/shared_directory/projects/env/lib/python3.6/site-packages/pyarrow/feather.py", line 93, in write
table = Table.from_pandas(df, preserve_index=False)
File "pyarrow/table.pxi", line 1174, in pyarrow.lib.Table.from_pandas
File "/shared_directory/projects/env/lib/python3.6/site-packages/pyarrow/pandas_compat.py", line 501, in dataframe_to_arrays
convert_fields))
File "/usr/lib/python3.6/concurrent/futures/_base.py", line 586, in result_iterator
yield fs.pop().result()
File "/usr/lib/python3.6/concurrent/futures/_base.py", line 425, in result
return self.__get_result()
File "/usr/lib/python3.6/concurrent/futures/_base.py", line 384, in __get_result
raise self._exception
File "/usr/lib/python3.6/concurrent/futures/thread.py", line 56, in run
result = self.fn(*self.args, **self.kwargs)
File "/shared_directory/projects/env/lib/python3.6/site-packages/pyarrow/pandas_compat.py", line 487, in convert_column
raise e
File "/shared_directory/projects/env/lib/python3.6/site-packages/pyarrow/pandas_compat.py", line 481, in convert_column
result = pa.array(col, type=type_, from_pandas=True, safe=safe)
File "pyarrow/array.pxi", line 191, in pyarrow.lib.array
File "pyarrow/array.pxi", line 78, in pyarrow.lib._ndarray_to_array
File "pyarrow/error.pxi", line 85, in pyarrow.lib.check_status
pyarrow.lib.ArrowInvalid: ('Could not convert 1542852887489 with type str: tried to convert to double', 'Conversion failed for column session_id with type object')

So:
  • Is something that looks numeric being forcibly converted to a number on storage?
  • Could the presence of NaNs affect what is happening here?
  • Is there an alternative storage solution? Which would be best?

Best answer

    Having done some reading on this topic, the issue seems to be the handling of string-type columns. My string columns contain a mix of all-digit strings and strings with characters. Pandas can flexibly keep strings as object, with no declared type, but when serializing to hdf5 or feather the contents of the column are converted to a single type (e.g. str or double) and cannot be mixed. Both libraries fail when confronted with a sufficiently large mixed-type column.

    Coercing my mixed column to strings allowed me to save it in feather, but in HDF5 the file ballooned and the process ended when I ran out of disk space.

    Here is an answer in a comparable case, where a commenter notes (2 years ago) that "This problem is very standard, but solutions are few".

    Some background:

    String types in Pandas are called object, but this obscures the fact that they may be either pure strings or mixed dtypes (numpy has built-in string types, but Pandas never uses them for text). So the first thing to do in a case like this is to coerce all string columns to string type (with df[col].astype(str)). But even so, in a sufficiently large file (16GB, with long strings), this still failed. Why?

    The reason I was hitting this error is that my data contained high-entropy strings (many distinct values). (For low-entropy data, switching to the categorical dtype might be worthwhile.) In my case, I realized I only needed these strings to identify rows - so I could replace them with unique integers!

    df[col] = df[col].map(dict(zip(df[col].unique(), range(df[col].nunique()))))
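A more idiomatic equivalent of the dict/zip mapping above (a sketch with toy data) is pd.factorize, which assigns a dense integer code per unique value in a single pass:

```python
import pandas as pd

df = pd.DataFrame({"session_id": ["a1", "b2", "a1", "c3"]})

# pd.factorize returns (codes, uniques); the codes are 0-based integers
# assigned in order of first appearance, matching the dict/zip mapping.
codes, uniques = pd.factorize(df["session_id"])
df["session_id"] = codes

print(df["session_id"].tolist())  # [0, 1, 0, 2]
```

Keep the uniques array if you need to map the integers back to the original IDs later.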

    Other solutions:

    For text data, there are other recommended solutions besides hdf5/feather, including:
  • json
  • msgpack (note that read_msgpack is deprecated as of Pandas 0.25)
  • pickle (which has known security issues, so be careful - but it should be fine for internal storage/transfer of dataframes)
  • parquet, part of the Apache Arrow ecosystem.

    Here is an answer from Matthew Rocklin (one of the dask developers) comparing msgpack and pickle. He wrote a broader comparison on his blog.

    Regarding "python-3.x - OverflowError when saving a large Pandas df to hdf", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/57078803/
