
python-3.x - How to avoid the header when exporting a BigQuery table to Google Storage


I developed the following code, which exports a BigQuery table to a Google Storage bucket. I want to merge the exported files into a single file without a header, so that the next process can consume the output file without any problems.

    from google.cloud import bigquery, storage

    def export_bq_table_to_gcs(self, table_name):
        client = bigquery.Client(project=project_name)

        print("Exporting table {}".format(table_name))
        dataset_ref = client.dataset(dataset_name, project=project_name)
        dataset = bigquery.Dataset(dataset_ref)
        table_ref = dataset.table(table_name)
        size_bytes = client.get_table(table_ref).num_bytes

        # For tables bigger than 1 GB, let Google auto-split the export into
        # multiple shard files; otherwise force the export into a single file.
        if size_bytes > 10 ** 9:
            destination_uris = [
                'gs://{}/{}{}*.csv'.format(bucket_name, f'{table_name}_temp', uid)]
        else:
            destination_uris = [
                'gs://{}/{}{}.csv'.format(bucket_name, f'{table_name}_temp', uid)]

        extract_job = client.extract_table(table_ref, destination_uris)  # API request
        result = extract_job.result()  # Waits for the job to complete.

        if result.state != 'DONE' or result.errors:
            raise Exception('Failed extract job {} for table {}'.format(
                result.job_id, table_name))
        else:
            print('BQ table(s) export completed successfully')
            storage_client = storage.Client(project=gs_project_name)
            bucket = storage_client.get_bucket(gs_bucket_name)
            blob_list = bucket.list_blobs(prefix=f'{table_name}_temp')
            print('Merging shard files into single file')
            # compose() expects a list of blobs, not a lazy iterator.
            bucket.blob(f'{table_name}.csv').compose(list(blob_list))
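
(Note: Cloud Storage's compose() accepts at most 32 source objects per call, so if an export produces more than 32 shards the merge above has to run in batches. Below is a minimal sketch of such batching, reusing the bucket and table_name variables from the code above; compose_in_batches is a hypothetical helper, not part of the original code.)

    def compose_in_batches(bucket, prefix, destination_name, batch_size=32):
        """Merge every blob under `prefix` into one blob, 32 sources at a time."""
        sources = list(bucket.list_blobs(prefix=prefix))
        if not sources:
            raise ValueError('no blobs found under prefix {!r}'.format(prefix))
        destination = bucket.blob(destination_name)
        # The first batch seeds the destination object; every later batch is
        # appended by composing the destination itself with up to 31 more shards.
        destination.compose(sources[:batch_size])
        for i in range(batch_size, len(sources), batch_size - 1):
            destination.compose([destination] + sources[i:i + batch_size - 1])
        return destination

    # Usage, matching the merge step in the question:
    # compose_in_batches(bucket, f'{table_name}_temp', f'{table_name}.csv')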

Can you help me find a way to skip the headers?

Thanks,

Raghunath.

Best Answer

We can avoid the header by setting the print_header parameter to False in the job config. Sample code:

    job_config = bigquery.job.ExtractJobConfig(print_header=False)
    extract_job = client.extract_table(table_ref, destination_uris,
                                       job_config=job_config)
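
Since each exported shard now lacks a header row, composing the shards produces a single headerless CSV for the downstream process. For completeness, ExtractJobConfig exposes a few other common export options alongside print_header; the values below are illustrative, not taken from the original question:

    from google.cloud import bigquery

    # All of these attributes exist on bigquery.job.ExtractJobConfig;
    # the values shown are examples only.
    job_config = bigquery.job.ExtractJobConfig(
        print_header=False,         # omit the CSV header row
        destination_format='CSV',   # or 'NEWLINE_DELIMITED_JSON', 'AVRO'
        field_delimiter=',',        # e.g. '\t' for tab-separated output
        compression='NONE',         # or 'GZIP'
    )
    extract_job = client.extract_table(table_ref, destination_uris,
                                       job_config=job_config)
    extract_job.result()  # wait for the extract job to finish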

Thanks

Regarding python-3.x - How to avoid the header when exporting a BigQuery table to Google Storage, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/56161185/
