TL;DR: We have a default VPC. I am trying to run a Dataflow job. The initial step (reading the file) manages to process 1 of the 2 steps. Then I get a JOB_MESSAGE_ERROR: SDK harness sdk-0-0 disconnected
error message, with nothing else in the logs. I have already tried setting roles and VPC firewall rules.
I want to run a Dataflow job using the geobeam image (Apache Beam Python 3.9 SDK 2.41.0). I have defined the job as follows:
def run(pipeline_args, known_args):
    import apache_beam as beam
    from apache_beam.io.gcp.internal.clients import storage
    from apache_beam.options.pipeline_options import PipelineOptions
    from geobeam.io import GeoJSONSource, filebasedsource
    from geobeam.fn import format_record, make_valid, filter_invalid

    # Runner flags (--runner, --project, --region, ...) pass through to Beam.
    pipeline_options = PipelineOptions([] + pipeline_args)

    with beam.Pipeline(options=pipeline_options) as p:
        (p
         | beam.io.Read(GeoJSONSource(known_args.gcs_url, encoding='utf-8'))
         # Keep only records whose geometry has more than one coordinate.
         | 'FilterCords' >> beam.Filter(lambda x: len(x[-1]["coordinates"]) > 1)
         | 'MakeValid' >> beam.Map(make_valid)
         | 'FilterInvalid' >> beam.Filter(filter_invalid)
         | 'FormatRecords' >> beam.Map(format_record)
         | beam.io.WriteToText(known_args.gcs_write_url)
        )

if __name__ == '__main__':
    import logging
    import argparse

    logging.getLogger().setLevel(logging.INFO)
    parser = argparse.ArgumentParser()
    parser.add_argument('--gcs_url')
    parser.add_argument('--gcs_write_url')
    # Custom flags land in known_args; everything else becomes pipeline_args.
    known_args, pipeline_args = parser.parse_known_args()
    run(pipeline_args, known_args)
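For illustration, here is a minimal standalone sketch of that flag handling (the argv values are made up, not from the job above): parse_known_args keeps the flags it recognizes and returns the rest untouched, which is how runner flags such as --runner reach PipelineOptions.

import argparse

parser = argparse.ArgumentParser()
parser.add_argument('--gcs_url')

# Unrecognized flags do not raise an error; they come back as a leftover
# list that can be handed to Beam's PipelineOptions unchanged.
known, rest = parser.parse_known_args(
    ['--gcs_url', 'gs://bucket/file.geojson', '--runner', 'DataflowRunner'])
print(known.gcs_url)  # gs://bucket/file.geojson
print(rest)           # ['--runner', 'DataflowRunner']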
I run the job with the following command:
python -m main --runner DataflowRunner --project [[project_id]] \
--temp_location gs://[[temp_bucket_name]]/tmp \
--gcs_url gs://[[inputbucket_name]]/[[filename]].geojson \
--region europe-north1 --sdk_container_image gcr.io/dataflow-geobeam/example \
--gcs_write_url gs://[[outputbucket_name]]/[[filename]]_processed.geojson \
--subnetwork [[full_link_to_subnet]]
We have a custom default VPC set up, and I have added the recommended ranges in the ingress/egress firewall rules for Compute Engine VM resources in GCP. I have also given the default service account used for the Dataflow job the following roles:
I have also granted my user's roles on the service account:
It says the job has stopped, but that is only because the job never makes progress. I get the following log output:
INFO:apache_beam.runners.dataflow.dataflow_runner:Job 2022-10-18_05_33_31-17288646308046950877 is in state JOB_STATE_PENDING
INFO:apache_beam.runners.dataflow.dataflow_runner:2022-10-18T12:33:31.708Z: JOB_MESSAGE_BASIC: Dataflow Runner V2 auto-enabled. Use --experiments=disable_runner_v2 to opt out.
INFO:apache_beam.runners.dataflow.dataflow_runner:2022-10-18T12:33:32.780Z: JOB_MESSAGE_DETAILED: Autoscaling is enabled for job 2022-10-18_05_33_31-17288646308046950877. The number of workers will be between 1 and 1000.
INFO:apache_beam.runners.dataflow.dataflow_runner:2022-10-18T12:33:32.803Z: JOB_MESSAGE_DETAILED: Autoscaling was automatically enabled for job 2022-10-18_05_33_31-17288646308046950877.
INFO:apache_beam.runners.dataflow.dataflow_runner:2022-10-18T12:33:34.374Z: JOB_MESSAGE_BASIC: Worker configuration: n1-standard-1 in europe-north1-b.
INFO:apache_beam.runners.dataflow.dataflow_runner:2022-10-18T12:33:35.092Z: JOB_MESSAGE_DETAILED: Expanding SplittableParDo operations into optimizable parts.
INFO:apache_beam.runners.dataflow.dataflow_runner:2022-10-18T12:33:35.109Z: JOB_MESSAGE_DETAILED: Expanding CollectionToSingleton operations into optimizable parts.
INFO:apache_beam.runners.dataflow.dataflow_runner:2022-10-18T12:33:35.141Z: JOB_MESSAGE_DETAILED: Expanding CoGroupByKey operations into optimizable parts.
INFO:apache_beam.runners.dataflow.dataflow_runner:2022-10-18T12:33:35.160Z: JOB_MESSAGE_DEBUG: Combiner lifting skipped for step WriteToText/Write/WriteImpl/GroupByKey: GroupByKey not followed by a combiner.
INFO:apache_beam.runners.dataflow.dataflow_runner:2022-10-18T12:33:35.184Z: JOB_MESSAGE_DETAILED: Expanding GroupByKey operations into optimizable parts.
INFO:apache_beam.runners.dataflow.dataflow_runner:2022-10-18T12:33:35.200Z: JOB_MESSAGE_DEBUG: Annotating graph with Autotuner information.
INFO:apache_beam.runners.dataflow.dataflow_runner:2022-10-18T12:33:35.226Z: JOB_MESSAGE_DETAILED: Fusing adjacent ParDo, Read, Write, and Flatten operations
INFO:apache_beam.runners.dataflow.dataflow_runner:2022-10-18T12:33:35.243Z: JOB_MESSAGE_DETAILED: Fusing consumer WriteToText/Write/WriteImpl/InitializeWrite into WriteToText/Write/WriteImpl/DoOnce/Map(decode)
INFO:apache_beam.runners.dataflow.dataflow_runner:2022-10-18T12:33:35.262Z: JOB_MESSAGE_DETAILED: Fusing consumer WriteToText/Write/WriteImpl/DoOnce/FlatMap(<lambda at core.py:3481>) into WriteToText/Write/WriteImpl/DoOnce/Impulse
INFO:apache_beam.runners.dataflow.dataflow_runner:2022-10-18T12:33:35.278Z: JOB_MESSAGE_DETAILED: Fusing consumer WriteToText/Write/WriteImpl/DoOnce/Map(decode) into WriteToText/Write/WriteImpl/DoOnce/FlatMap(<lambda at core.py:3481>)
INFO:apache_beam.runners.dataflow.dataflow_runner:2022-10-18T12:33:35.294Z: JOB_MESSAGE_DETAILED: Fusing consumer Read/Map(<lambda at iobase.py:908>) into Read/Impulse
INFO:apache_beam.runners.dataflow.dataflow_runner:2022-10-18T12:33:35.310Z: JOB_MESSAGE_DETAILED: Fusing consumer ref_AppliedPTransform_Read-SDFBoundedSourceReader-ParDo-SDFBoundedSourceDoFn-_6/PairWithRestriction into Read/Map(<lambda at iobase.py:908>)
INFO:apache_beam.runners.dataflow.dataflow_runner:2022-10-18T12:33:35.325Z: JOB_MESSAGE_DETAILED: Fusing consumer ref_AppliedPTransform_Read-SDFBoundedSourceReader-ParDo-SDFBoundedSourceDoFn-_6/SplitWithSizing into ref_AppliedPTransform_Read-SDFBoundedSourceReader-ParDo-SDFBoundedSourceDoFn-_6/PairWithRestriction
INFO:apache_beam.runners.dataflow.dataflow_runner:2022-10-18T12:33:35.340Z: JOB_MESSAGE_DETAILED: Fusing consumer FilterCords into ref_AppliedPTransform_Read-SDFBoundedSourceReader-ParDo-SDFBoundedSourceDoFn-_6/ProcessElementAndRestrictionWithSizing
INFO:apache_beam.runners.dataflow.dataflow_runner:2022-10-18T12:33:35.356Z: JOB_MESSAGE_DETAILED: Fusing consumer MakeValid into FilterCords
INFO:apache_beam.runners.dataflow.dataflow_runner:2022-10-18T12:33:35.372Z: JOB_MESSAGE_DETAILED: Fusing consumer FilterInvalid into MakeValid
INFO:apache_beam.runners.dataflow.dataflow_runner:2022-10-18T12:33:35.387Z: JOB_MESSAGE_DETAILED: Fusing consumer FormatRecords into FilterInvalid
INFO:apache_beam.runners.dataflow.dataflow_runner:2022-10-18T12:33:35.402Z: JOB_MESSAGE_DETAILED: Fusing consumer WriteToText/Write/WriteImpl/WindowInto(WindowIntoFn) into FormatRecords
INFO:apache_beam.runners.dataflow.dataflow_runner:2022-10-18T12:33:35.417Z: JOB_MESSAGE_DETAILED: Fusing consumer WriteToText/Write/WriteImpl/WriteBundles into WriteToText/Write/WriteImpl/WindowInto(WindowIntoFn)
INFO:apache_beam.runners.dataflow.dataflow_runner:2022-10-18T12:33:35.432Z: JOB_MESSAGE_DETAILED: Fusing consumer WriteToText/Write/WriteImpl/Pair into WriteToText/Write/WriteImpl/WriteBundles
INFO:apache_beam.runners.dataflow.dataflow_runner:2022-10-18T12:33:35.447Z: JOB_MESSAGE_DETAILED: Fusing consumer WriteToText/Write/WriteImpl/GroupByKey/Write into WriteToText/Write/WriteImpl/Pair
INFO:apache_beam.runners.dataflow.dataflow_runner:2022-10-18T12:33:35.464Z: JOB_MESSAGE_DETAILED: Fusing consumer WriteToText/Write/WriteImpl/Extract into WriteToText/Write/WriteImpl/GroupByKey/Read
INFO:apache_beam.runners.dataflow.dataflow_runner:2022-10-18T12:33:35.489Z: JOB_MESSAGE_DEBUG: Workflow config is missing a default resource spec.
INFO:apache_beam.runners.dataflow.dataflow_runner:2022-10-18T12:33:35.504Z: JOB_MESSAGE_DEBUG: Adding StepResource setup and teardown to workflow graph.
INFO:apache_beam.runners.dataflow.dataflow_runner:2022-10-18T12:33:35.519Z: JOB_MESSAGE_DEBUG: Adding workflow start and stop steps.
INFO:apache_beam.runners.dataflow.dataflow_runner:2022-10-18T12:33:35.535Z: JOB_MESSAGE_DEBUG: Assigning stage ids.
INFO:apache_beam.runners.dataflow.dataflow_runner:2022-10-18T12:33:35.624Z: JOB_MESSAGE_DEBUG: Executing wait step start19
INFO:apache_beam.runners.dataflow.dataflow_runner:2022-10-18T12:33:35.655Z: JOB_MESSAGE_BASIC: Executing operation Read/Impulse+Read/Map(<lambda at iobase.py:908>)+ref_AppliedPTransform_Read-SDFBoundedSourceReader-ParDo-SDFBoundedSourceDoFn-_6/PairWithRestriction+ref_AppliedPTransform_Read-SDFBoundedSourceReader-ParDo-SDFBoundedSourceDoFn-_6/SplitWithSizing
INFO:apache_beam.runners.dataflow.dataflow_runner:2022-10-18T12:33:35.668Z: JOB_MESSAGE_BASIC: Executing operation WriteToText/Write/WriteImpl/DoOnce/Impulse+WriteToText/Write/WriteImpl/DoOnce/FlatMap(<lambda at core.py:3481>)+WriteToText/Write/WriteImpl/DoOnce/Map(decode)+WriteToText/Write/WriteImpl/InitializeWrite
INFO:apache_beam.runners.dataflow.dataflow_runner:2022-10-18T12:33:35.682Z: JOB_MESSAGE_DEBUG: Starting worker pool setup.
INFO:apache_beam.runners.dataflow.dataflow_runner:2022-10-18T12:33:35.696Z: JOB_MESSAGE_BASIC: Starting 1 workers in europe-north1-b...
INFO:apache_beam.runners.dataflow.dataflow_runner:Job 2022-10-18_05_33_31-17288646308046950877 is in state JOB_STATE_RUNNING
INFO:apache_beam.runners.dataflow.dataflow_runner:2022-10-18T12:34:21.585Z: JOB_MESSAGE_DETAILED: Autoscaling: Raised the number of workers to 1 based on the rate of progress in the currently running stage(s).
INFO:apache_beam.runners.dataflow.dataflow_runner:2022-10-18T12:37:30.456Z: JOB_MESSAGE_DETAILED: Workers have started successfully.
INFO:apache_beam.runners.dataflow.dataflow_runner:2022-10-18T12:42:40.315Z: JOB_MESSAGE_BASIC: Finished operation Read/Impulse+Read/Map(<lambda at iobase.py:908>)+ref_AppliedPTransform_Read-SDFBoundedSourceReader-ParDo-SDFBoundedSourceDoFn-_6/PairWithRestriction+ref_AppliedPTransform_Read-SDFBoundedSourceReader-ParDo-SDFBoundedSourceDoFn-_6/SplitWithSizing
INFO:apache_beam.runners.dataflow.dataflow_runner:2022-10-18T12:42:40.354Z: JOB_MESSAGE_DEBUG: Value "ref_AppliedPTransform_Read-SDFBoundedSourceReader-ParDo-SDFBoundedSourceDoFn-_6-split-with-sizing-out3" materialized.
INFO:apache_beam.runners.dataflow.dataflow_runner:2022-10-18T12:42:42.422Z: JOB_MESSAGE_ERROR: SDK harness sdk-0-0 disconnected.
It then tries to raise the number of workers to 1 again, and immediately gets JOB_MESSAGE_ERROR: SDK harness sdk-0-0 disconnected. again, over and over. As a side note, it also takes about 10 minutes before the pipeline actually starts.
I managed to get it working with the DirectRunner option. I don't know where to look next. Could it be related to the VPC?
I tried running the wordcount example on both the native/default image and the geobeam image; it works with the native/default image but not with the geobeam image. Why is that?
Best Answer
After some trial and error, I found that the Python version of the geobeam base image must match the local Python version on your machine, or it will not run. At the time of this answer, that is Python 3.8.
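For example (a minimal sketch based on this answer, not code from the original post), a guard at the top of the launching script would fail fast when the local interpreter does not match the Python 3.8 the geobeam image was built on:

import sys

# Per the answer above, the geobeam base image was built on Python 3.8,
# and the local interpreter that launches the job must match it.
REQUIRED = (3, 8)
if sys.version_info[:2] != REQUIRED:
    raise RuntimeError(
        'Local Python is %d.%d, but the geobeam image expects %d.%d.'
        % (sys.version_info[0], sys.version_info[1], REQUIRED[0], REQUIRED[1]))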
Regarding "python - Apache Beam Pipeline runs with DirectRunner, but fails with DataflowRunner (SDK harness sdk-0-0 disconnected) during initial read step", see the similar question on Stack Overflow: https://stackoverflow.com/questions/74111801/