
python - How do I create a date-partitioned table in Google BigQuery when using a load job?

Reposted · Author: 行者123 · Updated: 2023-12-05 08:50:51

Can someone explain how to create a date-partitioned table in Google BigQuery when loading data with a load job and a JobConfig?

https://cloud.google.com/bigquery/docs/creating-column-partitions#creating_a_partitioned_table_when_loading_data

I can't make sense of the documentation; it would help a lot if someone could explain it with an example.

Edit: Thanks to @irvifa, I came up with the object below, but I still can't create a time-partitioned table. This is the code I'm trying to use:

import pandas
from google.cloud import bigquery


def load_df(self, df):
    project_id = "ProjectID"
    dataset_id = "Dataset"
    table_id = "TableName"
    table_ref = project_id + "." + dataset_id + "." + table_id
    time_partitioning = bigquery.table.TimePartitioning(field="PartitionColumn")
    job_config = bigquery.LoadJobConfig(
        schema="Schema",
        destinationTable=table_ref,
        write_disposition="WRITE_TRUNCATE",
        timePartitioning=time_partitioning,
    )
    Job = Client.load_table_from_dataframe(df, table_ref,
                                           job_config=job_config)
    Job.result()

Best Answer

I don't know whether it helps, but you can use the following example to run a job with partitioning:

from google.cloud import bigquery


def run_query(self, query_job_config):
    time_partitioning = bigquery.table.TimePartitioning(field="partition_date")
    job_config = bigquery.QueryJobConfig()
    job_config.destination = query_job_config['destination_dataset_table']
    job_config.time_partitioning = time_partitioning
    job_config.use_legacy_sql = False
    job_config.allow_large_results = True
    job_config.write_disposition = 'WRITE_APPEND'
    sql = query_job_config['sql']
    query_job = self.client.query(sql, job_config=job_config)
    query_job.result()

Regarding "python - How do I create a date-partitioned table in Google BigQuery when using a load job?", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/61132608/
