python - BigQuery Dataflow error: Cannot read and write in different locations while reading and writing in EU


I have a simple Google Dataflow job. It reads from one BigQuery table and writes to another, like this:

(p
 | beam.io.Read(beam.io.BigQuerySource(
       query='select dia, import from DS1.t_27k where true',
       use_standard_sql=True))
 | beam.io.Write(beam.io.BigQuerySink(
       output_table,
       dataset='DS1',
       project=project,
       schema='dia:DATE, import:FLOAT',
       create_disposition=CREATE_IF_NEEDED,
       write_disposition=WRITE_TRUNCATE))
)

I think the problem is that this pipeline needs a temporary dataset to do its work, and I cannot force the location of that temporary dataset. Since my DS1 dataset is in the EU (EUROPE-WEST1) and the temporary dataset is created in the US (I guess), the job fails:

WARNING:root:Dataset m-h-0000:temp_dataset_e433a0ef19e64100000000000001a does not exist so we will create it as temporary with location=None
WARNING:root:A task failed with exception.
HttpError accessing <https://www.googleapis.com/bigquery/v2/projects/m-h-000000/queries/b8b2f00000000000000002bed336369d?alt=json&maxResults=10000>: response: <{'status': '400', 'content-length': '292', 'x-xss-protection': '1; mode=block', 'x-content-type-options': 'nosniff', 'transfer-encoding': 'chunked', 'expires': 'Sat, 14 Oct 2017 20:29:15 GMT', 'vary': 'Origin, X-Origin', 'server': 'GSE', '-content-encoding': 'gzip', 'cache-control': 'private, max-age=0', 'date': 'Sat, 14 Oct 2017 20:29:15 GMT', 'x-frame-options': 'SAMEORIGIN', 'alt-svc': 'quic=":443"; ma=2592000; v="39,38,37,35"', 'content-type': 'application/json; charset=UTF-8'}>, content <{
"error": {
"errors": [
{
"domain": "global",
"reason": "invalid",
"message": "Cannot read and write in different locations: source: EU, destination: US"
}
],
"code": 400,
"message": "Cannot read and write in different locations: source: EU, destination: US"
}
}

Pipeline options:

options = PipelineOptions()

google_cloud_options = options.view_as(GoogleCloudOptions)
google_cloud_options.project = 'm-h'
google_cloud_options.job_name = 'myjob3'
google_cloud_options.staging_location = r'gs://p_df/staging' #EUROPE-WEST1
google_cloud_options.region=r'europe-west1'
google_cloud_options.temp_location = r'gs://p_df/temp' #EUROPE-WEST1
options.view_as(StandardOptions).runner = 'DirectRunner' #'DataflowRunner'

p = beam.Pipeline(options=options)

What can I do to avoid this error?

Note: the error only appears when I run the job with DirectRunner.

Best Answer

The error Cannot read and write in different locations is fairly self-explanatory; it can happen because:

  • your BigQuery dataset is in the EU while you are running Dataflow in the US
  • your GCS bucket is in the EU while you are running Dataflow in the US (a quick way to check both locations is sketched below)
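
A minimal diagnostic sketch for that check, assuming the google-cloud-bigquery and google-cloud-storage client libraries are installed and using the project, dataset and bucket names from the question (adjust them to your own):

# Diagnostic only, not part of the pipeline: print where each resource lives.
from google.cloud import bigquery, storage

bq_client = bigquery.Client(project='m-h')
print('BigQuery dataset location:', bq_client.get_dataset('DS1').location)  # e.g. 'EU'

gcs_client = storage.Client(project='m-h')
print('GCS bucket location:', gcs_client.get_bucket('p_df').location)  # e.g. 'EUROPE-WEST1'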

As you state in your question, your temporary location in GCS is already in the EU and your BigQuery dataset is also located in the EU, so you have to run the Dataflow job in the EU as well.

To achieve that, you need to specify the zone parameter in PipelineOptions, as follows:

import apache_beam as beam
from apache_beam.options.pipeline_options import (
    PipelineOptions, GoogleCloudOptions, StandardOptions, WorkerOptions)

options = PipelineOptions()

wo = options.view_as(WorkerOptions)  # type: WorkerOptions
wo.zone = "europe-west1-b"           # run the Dataflow workers in the EU

# rest of your options:
google_cloud_options = options.view_as(GoogleCloudOptions)
google_cloud_options.project = 'm-h'
google_cloud_options.job_name = 'myjob3'
google_cloud_options.staging_location = r'gs://p_df/staging'  # EUROPE-WEST1
google_cloud_options.region = r'europe-west1'
google_cloud_options.temp_location = r'gs://p_df/temp'  # EUROPE-WEST1
options.view_as(StandardOptions).runner = 'DataflowRunner'

p = beam.Pipeline(options=options)
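
The same configuration can also be expressed as command-line style flags passed straight to PipelineOptions, which is handy when launching the job from a wrapper script. This is only a sketch using the values from above; note that newer SDK versions rename --zone to --worker_zone, so check the documentation for your Beam version:

# Equivalent options built from flags (values taken from the answer above).
from apache_beam.options.pipeline_options import PipelineOptions

options = PipelineOptions([
    '--runner=DataflowRunner',
    '--project=m-h',
    '--job_name=myjob3',
    '--region=europe-west1',
    '--zone=europe-west1-b',              # keeps the workers in the EU
    '--staging_location=gs://p_df/staging',
    '--temp_location=gs://p_df/temp',
])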

Regarding python - BigQuery Dataflow error: Cannot read and write in different locations while reading and writing in EU, a similar question can be found on Stack Overflow: https://stackoverflow.com/questions/46753904/
