python - Splitting columns and writing to separate output files


I have a dataset with 8 columns and about 5 million rows. The file is over 400 MB. I am trying to split the columns into separate files. The file extension is .dat and the columns are separated by a single space.

Input:

00022d3f5b17 00022d9064bc 1073260801 1073260803 819251 440006 819251 440006
00022d9064bc 00022dba8f51 1073260801 1073260803 819251 440006 819251 440006
00022d9064bc 00022de1c6c1 1073260801 1073260803 819251 440006 819251 440006
00022d9064bc 003065f30f37 1073260801 1073260803 819251 440006 819251 440006
00022d9064bc 00904b48a3b6 1073260801 1073260803 819251 440006 819251 440006
00022d9064bc 00904b83a0ea 1073260803 1073260810 819213 439954 819213 439954
00904b4557d3 00904b85d3cf 1073260803 1073261920 817526 439458 817526 439458
00022de73863 00904b14b494 1073260804 1073265410 817558 439525 817558 439525

Code:

import pandas as pd 

df = pd.read_csv('sorted.dat', sep=' ', header=None, names=['id_1', 'id_2', 'time_1', 'time_2', 'gps_1', 'gps_2', 'gps_3', 'gps_4'])

#print df

df.to_csv('output_1.csv', columns = ['id_1', 'time_1', 'time_2', 'gps_1', 'gps_2'])

df.to_csv('output_2.csv', columns = ['id_2', 'time_1', 'time_2', 'gps_3', 'gps_4'])

The output should be one file containing col[1], col[3], col[4], col[5], col[6] and another file containing col[2], col[3], col[4], col[7], col[8].

I get this error:

Traceback (most recent call last):
File "split_col_pandas.py", line 3, in <module>
df = pd.read_csv('dartmouthsorted.dat', sep=' ', header=None, names=['id_1', 'id_2', 'time_1', 'time_2', 'gps_1', 'gps_2', 'gps_3', 'gps_4'])
File "/usr/local/lib/python2.7/dist-packages/pandas/io/parsers.py", line 562, in parser_f
return _read(filepath_or_buffer, kwds)
File "/usr/local/lib/python2.7/dist-packages/pandas/io/parsers.py", line 325, in _read
return parser.read()
File "/usr/local/lib/python2.7/dist-packages/pandas/io/parsers.py", line 823, in read
df = DataFrame(col_dict, columns=columns, index=index)
File "/usr/local/lib/python2.7/dist-packages/pandas/core/frame.py", line 224, in __init__
mgr = self._init_dict(data, index, columns, dtype=dtype)
File "/usr/local/lib/python2.7/dist-packages/pandas/core/frame.py", line 360, in _init_dict
return _arrays_to_mgr(arrays, data_names, index, columns, dtype=dtype)
File "/usr/local/lib/python2.7/dist-packages/pandas/core/frame.py", line 5241, in _arrays_to_mgr
return create_block_manager_from_arrays(arrays, arr_names, axes)
File "/usr/local/lib/python2.7/dist-packages/pandas/core/internals.py", line 3999, in create_block_manager_from_arrays
blocks = form_blocks(arrays, names, axes)
File "/usr/local/lib/python2.7/dist-packages/pandas/core/internals.py", line 4076, in form_blocks
int_blocks = _multi_blockify(int_items)
File "/usr/local/lib/python2.7/dist-packages/pandas/core/internals.py", line 4145, in _multi_blockify
values, placement = _stack_arrays(list(tup_block), dtype)
File "/usr/local/lib/python2.7/dist-packages/pandas/core/internals.py", line 4188, in _stack_arrays
stacked = np.empty(shape, dtype=dtype)
MemoryError

Best Answer

Try this:

columns = ['id_1', 'time_1', 'time_2', 'gps_1', 'gps_2']
df[columns].to_csv('output_1.csv')

columns = ['id_2', 'time_1', 'time_2', 'gps_3', 'gps_4']
df[columns].to_csv('output_2.csv')
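
A small variation on the same idea (my addition, not part of the original answer): passing index=False stops to_csv from writing the row index as an extra leading column, so each output file contains only the five selected columns.

columns = ['id_1', 'time_1', 'time_2', 'gps_1', 'gps_2']
df[columns].to_csv('output_1.csv', index=False)  # no index column in the output

columns = ['id_2', 'time_1', 'time_2', 'gps_3', 'gps_4']
df[columns].to_csv('output_2.csv', index=False)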

Also, take a look at this post about memory errors in Python: Memory errors and list limits?
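
Since the MemoryError above comes from loading all ~5 million rows at once, reading the file in chunks is one way to stay within memory. This is a minimal sketch, not part of the original answer, assuming the same sorted.dat layout; the chunk size of 500000 rows is an arbitrary choice.

import pandas as pd

names = ['id_1', 'id_2', 'time_1', 'time_2', 'gps_1', 'gps_2', 'gps_3', 'gps_4']
cols_1 = ['id_1', 'time_1', 'time_2', 'gps_1', 'gps_2']
cols_2 = ['id_2', 'time_1', 'time_2', 'gps_3', 'gps_4']

# chunksize makes read_csv return an iterator of smaller DataFrames
reader = pd.read_csv('sorted.dat', sep=' ', header=None, names=names, chunksize=500000)

for i, chunk in enumerate(reader):
    # overwrite on the first chunk, append afterwards; write the header only once
    mode = 'w' if i == 0 else 'a'
    chunk[cols_1].to_csv('output_1.csv', index=False, mode=mode, header=(i == 0))
    chunk[cols_2].to_csv('output_2.csv', index=False, mode=mode, header=(i == 0))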

Update

The original poster also asked how, after saving the two new CSV files, to recombine output_1.csv and output_2.csv so that id_1 and id_2 end up in the same column, gps_1 and gps_3 become a single column, and gps_2 and gps_4 become a single column.

There are many ways to do this, but here is one (choosing readability over efficiency):

columns = ['id_merged', 'time_1', 'time_2', 'gps_1or3', 'gps_2or4']
df1 = pd.read_csv('output_1.csv', names=columns, skiprows=1)
df2 = pd.read_csv('output_2.csv', names=columns, skiprows=1)

df = pd.concat([df1, df2]) # your final dataframe

One potential problem with this is that you will end up with null values in places, so they need to be handled appropriately or you will run into errors, and there is also the danger that the new id_merged column will have duplicate keys, but that is a question for another post...
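
For example, one way to inspect those issues on the combined frame might look like the sketch below (the column names are the ones assumed in the snippets above, not anything prescribed by pandas):

print(df.isnull().sum())                    # null count per column
df = df.dropna()                            # or df.fillna(...) if dropping rows loses data

dupes = df['id_merged'].duplicated().sum()  # how many repeated keys
print('duplicate id_merged keys: %d' % dupes)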

For more on the update, see the documentation on joins, concatenations, and merges: http://pandas.pydata.org/pandas-docs/stable/merging.html

Regarding python - Splitting columns and writing to separate output files, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/37287926/
