amazon-web-services - Using SQL in an AWS Glue PySpark script

I would like to use AWS Glue to convert some CSV data to ORC.
The ETL job I created generated the following PySpark script:

import sys
from awsglue.transforms import *
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext
from awsglue.context import GlueContext
from awsglue.job import Job

args = getResolvedOptions(sys.argv, ['JOB_NAME'])

sc = SparkContext()
glueContext = GlueContext(sc)
spark = glueContext.spark_session
job = Job(glueContext)
job.init(args['JOB_NAME'], args)

datasource0 = glueContext.create_dynamic_frame.from_catalog(database = "tests", table_name = "test_glue_csv", transformation_ctx = "datasource0")

applymapping1 = ApplyMapping.apply(frame = datasource0, mappings = [("id", "int", "id", "int"), ("val", "string", "val", "string")], transformation_ctx = "applymapping1")

resolvechoice2 = ResolveChoice.apply(frame = applymapping1, choice = "make_struct", transformation_ctx = "resolvechoice2")

dropnullfields3 = DropNullFields.apply(frame = resolvechoice2, transformation_ctx = "dropnullfields3")

datasink4 = glueContext.write_dynamic_frame.from_options(frame = dropnullfields3, connection_type = "s3", connection_options = {"path": "s3://glue/output"}, format = "orc", transformation_ctx = "datasink4")
job.commit()

It picks up the CSV data (from the location that the Athena table tests.test_glue_csv points to) and writes the output to s3://glue/output/.

How can I insert some SQL operations into this script?

Thanks

Best Answer

You should first create a temporary view/table from the dynamic frame:

dyf.toDF().createOrReplaceTempView("view_dyf")

Here, dyf is your dynamic frame.

Then, run a SQL query against the view using your spark session object:

sqlDF = spark.sql("select * from view_dyf")
sqlDF.show()
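
Putting the two pieces together, here is a minimal sketch of how the SQL step could be wired into the generated job above. The view name "mapped", the WHERE clause, and the frame name filtered_dyf are illustrative assumptions, not part of the original script:

from awsglue.dynamicframe import DynamicFrame

# Register the mapped data as a SQL view (placed after the ApplyMapping step);
# the view name "mapped" is arbitrary
applymapping1.toDF().createOrReplaceTempView("mapped")

# Run any SQL against the view; this filter is only an example
sqlDF = spark.sql("SELECT id, val FROM mapped WHERE id > 0")

# Convert back to a DynamicFrame so the rest of the generated script
# (ResolveChoice, DropNullFields, the ORC sink) can keep operating on it
filtered_dyf = DynamicFrame.fromDF(sqlDF, glueContext, "filtered_dyf")

Downstream, you would pass filtered_dyf instead of applymapping1 to ResolveChoice.apply, leaving the ORC write at the end of the script unchanged.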

Regarding amazon-web-services - Using SQL in an AWS Glue PySpark script, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/45814424/
