
pandas - directory globbing with partitioned dask read_parquet directories


I have a directory of partitioned weather-station readings written with pandas/pyarrow.

c.to_parquet(path=f"data/{filename}.parquet", engine='pyarrow', compression='snappy', partition_cols=['STATION', 'ELEMENT'])

When I try to read some of the files back using a glob and a predicate-pushdown clause, like so:

ddf= dd.read_parquet("data/*.parquet", engine='pyarrow', gather_statistics=True, filters=[('STATION', '==', 'CA008202251'), ('ELEMENT', '==', 'TAVG')], columns=['TIME','ELEMENT','VALUE', 'STATION'])

I get an IndexError:

IndexError                                Traceback (most recent call last)
<timed exec> in <module>

/usr/local/lib/python3.9/site-packages/dask/dataframe/io/parquet/core.py in read_parquet(path, columns, filters, categories, index, storage_options, engine, gather_statistics, split_row_groups, read_from_paths, chunksize, aggregate_files, **kwargs)
314 gather_statistics = True
315
--> 316 read_metadata_result = engine.read_metadata(
317 fs,
318 paths,

/usr/local/lib/python3.9/site-packages/dask/dataframe/io/parquet/arrow.py in read_metadata(cls, fs, paths, categories, index, gather_statistics, filters, split_row_groups, read_from_paths, chunksize, aggregate_files, **kwargs)
540 split_row_groups,
541 gather_statistics,
--> 542 ) = cls._gather_metadata(
543 paths,
544 fs,

/usr/local/lib/python3.9/site-packages/dask/dataframe/io/parquet/arrow.py in _gather_metadata(cls, paths, fs, split_row_groups, gather_statistics, filters, index, dataset_kwargs)
1786
1787 # Step 1: Create a ParquetDataset object
-> 1788 dataset, base, fns = _get_dataset_object(paths, fs, filters, dataset_kwargs)
1789 if fns == [None]:
1790 # This is a single file. No danger in gathering statistics

/usr/local/lib/python3.9/site-packages/dask/dataframe/io/parquet/arrow.py in _get_dataset_object(paths, fs, filters, dataset_kwargs)
1740 if proxy_metadata:
1741 dataset.metadata = proxy_metadata
-> 1742 elif fs.isdir(paths[0]):
1743 # This is a directory. We can let pyarrow do its thing.
1744 # Note: In the future, it may be best to avoid listing the

IndexError: list index out of range

I can load the parquet directories individually:

ddf= dd.read_parquet("data/2000.parquet", engine='pyarrow', gather_statistics=True, filters=[('STATION', '==', 'CA008202251'), ('ELEMENT', '==', 'TAVG')], columns=['TIME','ELEMENT','VALUE', 'STATION'])

Can dask/parquet/pyarrow reads be globbed?

Best Answer

When partition_cols is used in .to_parquet, the partitioned dataframe is saved as separate files, so data/2000.parquet in your case is most likely a folder, not a file.

import pandas as pd
from os.path import isdir

# test dataframe
df = pd.DataFrame(range(3), columns=['a'])
df['b'] = df['a']
df['c'] = df['a']

# save without partitioning
df.to_parquet('test.parquet')
print(isdir('test.parquet')) # False

# save with partitioning
df.to_parquet('test_partitioned.parquet', partition_cols=['a', 'b'])
print(isdir('test_partitioned.parquet')) # True

As a workaround, constructing an explicit list of parquet files with os.walk or glob is probably a good solution. Note that with multiple partition columns there are multiple levels of nested folders containing the parquet files, so a simple glob is not enough; you need a recursive search (see the sketch below).
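
A minimal sketch of that workaround, assuming the folder layout produced by the write call above (the recursive pattern and the year folders are assumptions based on the question):

import dask.dataframe as dd
from glob import glob

# Recursively collect the leaf parquet files under each year's folder.
# The pattern assumes the hive-style layout produced by
# partition_cols=['STATION', 'ELEMENT'], e.g.
# data/2000.parquet/STATION=CA008202251/ELEMENT=TAVG/<part>.parquet
files = sorted(glob("data/*.parquet/**/*.parquet", recursive=True))

# Caveat: with hive-style partitioning, the STATION/ELEMENT values live
# in the directory names rather than in the leaf files; whether
# read_parquet recovers them from these paths depends on the
# dask/pyarrow version, so the filters/columns below are an assumption.
ddf = dd.read_parquet(
    files,
    engine='pyarrow',
    filters=[('STATION', '==', 'CA008202251'), ('ELEMENT', '==', 'TAVG')],
    columns=['TIME', 'ELEMENT', 'VALUE', 'STATION'],
)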

Alternatively, one can construct a dask.dataframe for each year and then concatenate them with dd.concat (see the second sketch below).
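
A minimal sketch of that alternative, assuming yearly folders named data/2000.parquet, data/2001.parquet, and so on (the year range is hypothetical):

import dask.dataframe as dd

# Read each year's partitioned folder individually (which works, as
# shown above) and concatenate the resulting dask dataframes.
years = range(2000, 2021)  # hypothetical range of years
ddfs = [
    dd.read_parquet(
        f"data/{year}.parquet",
        engine='pyarrow',
        filters=[('STATION', '==', 'CA008202251'), ('ELEMENT', '==', 'TAVG')],
        columns=['TIME', 'ELEMENT', 'VALUE', 'STATION'],
    )
    for year in years
]
ddf = dd.concat(ddfs)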

Regarding "pandas - directory globbing with partitioned dask read_parquet directories", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/68765564/
