
python - GCS file streaming with Dataflow (Apache Beam Python)


I have a GCS bucket that receives files every minute. I created a streaming Dataflow pipeline using the Apache Beam Python SDK, and I created Pub/Sub topics for the input GCS bucket and the output GCS bucket. The pipeline is streaming, but my output is not being stored in the output bucket. Here is my code:

from __future__ import absolute_import

import os
import logging
import argparse
from google.cloud import language
from google.cloud.language import enums
from google.cloud.language import types
from datetime import datetime
import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions
from apache_beam.options.pipeline_options import SetupOptions
from apache_beam.options.pipeline_options import GoogleCloudOptions
from apache_beam.options.pipeline_options import StandardOptions
from apache_beam.io.textio import ReadFromText, WriteToText

#dataflow_options = ['--project=****','--job_name=*****','--temp_location=gs://*****','--setup_file=./setup.py']
#dataflow_options.append('--staging_location=gs://*****')
#dataflow_options.append('--requirements_file ./requirements.txt')
#options=PipelineOptions(dataflow_options)
#gcloud_options=options.view_as(GoogleCloudOptions)


# Dataflow runner
#options.view_as(StandardOptions).runner = 'DataflowRunner'
#options.view_as(SetupOptions).save_main_session = True

def run(argv=None):
    """Build and run the pipeline."""
    parser = argparse.ArgumentParser()
    parser.add_argument(
        '--output_topic', required=True,
        help=('Output PubSub topic of the form '
              '"projects/***********".'))
    group = parser.add_mutually_exclusive_group(required=True)
    group.add_argument(
        '--input_topic',
        help=('Input PubSub topic of the form '
              '"projects/************".'))
    group.add_argument(
        '--input_subscription',
        help=('Input PubSub subscription of the form '
              '"projects/***********".'))
    known_args, pipeline_args = parser.parse_known_args(argv)

    # We use the save_main_session option because one or more DoFn's in this
    # workflow rely on global context (e.g., a module imported at module level).
    pipeline_options = PipelineOptions(pipeline_args)
    pipeline_options.view_as(SetupOptions).save_main_session = True
    pipeline_options.view_as(StandardOptions).streaming = True
    p = beam.Pipeline(options=pipeline_options)

    # Read from PubSub into a PCollection.
    if known_args.input_subscription:
        messages = (p
                    | beam.io.ReadFromPubSub(
                        subscription=known_args.input_subscription)
                    .with_output_types(bytes))
    else:
        messages = (p
                    | beam.io.ReadFromPubSub(topic=known_args.input_topic)
                    .with_output_types(bytes))

    lines = messages | 'decode' >> beam.Map(lambda x: x.decode('utf-8'))

    class Split(beam.DoFn):
        def process(self, element):
            element = element.rstrip("\n").encode('utf-8')
            text = element.split(',')
            result = []
            for i in range(len(text)):
                dat = text[i]
                #print(dat)
                client = language.LanguageServiceClient()
                document = types.Document(content=dat, type=enums.Document.Type.PLAIN_TEXT)
                sent_analysis = client.analyze_sentiment(document=document)
                sentiment = sent_analysis.document_sentiment
                data = [
                    (dat, sentiment.score)
                ]
                result.append(data)
            return result

    class WriteToCSV(beam.DoFn):
        def process(self, element):
            return [
                "{},{}".format(
                    element[0][0],
                    element[0][1]
                )
            ]

    Transform = (lines
                 | 'split' >> beam.ParDo(Split())
                 | beam.io.WriteToPubSub(known_args.output_topic)
                 )
    result = p.run()
    result.wait_until_finish()

if __name__ == '__main__':
    logging.getLogger().setLevel(logging.INFO)
    run()

What am I doing wrong? Could someone please explain?

Best Answer

WriteToPubSub writes data to a Pub/Sub topic, not to a GCS bucket. What you probably want is either WriteToText, or a DoFn that writes the data to the bucket using apache_beam.io.filesystems.
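For illustration, here is a minimal sketch of that DoFn approach. The class name WriteToGCS, the output path, and the one-object-per-element naming scheme are assumptions made for this example, not part of your pipeline (also note that, as far as I recall, WriteToText in the Python SDK of that era did not support streaming pipelines, which is one reason a custom DoFn like this was a common workaround):

import uuid

import apache_beam as beam
from apache_beam.io.filesystems import FileSystems


class WriteToGCS(beam.DoFn):
    # Hypothetical helper: writes each element as its own object under output_path.
    def __init__(self, output_path):
        # e.g. 'gs://your-bucket/results/' -- placeholder, supplied by the caller.
        self.output_path = output_path

    def process(self, element):
        # One object per element keeps the sketch simple; a real streaming
        # pipeline would typically batch elements per window instead.
        path = '{}{}.csv'.format(self.output_path, uuid.uuid4().hex)
        writer = FileSystems.create(path, mime_type='text/plain')
        writer.write(element.encode('utf-8'))
        writer.close()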

As an additional note, it doesn't look like your WriteToCSV transform is used anywhere.
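If the intent was for WriteToCSV to format the (text, score) pairs before writing, it would need to be wired into the pipeline, roughly like this (WriteToGCS is the hypothetical helper sketched above, and the bucket path is a placeholder):

output = (lines
          | 'split' >> beam.ParDo(Split())
          | 'to_csv' >> beam.ParDo(WriteToCSV())
          | 'write' >> beam.ParDo(WriteToGCS('gs://your-bucket/results/')))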

Regarding python - GCS file streaming with Dataflow (Apache Beam Python), we found a similar question on Stack Overflow: https://stackoverflow.com/questions/55045176/
