
amazon-web-services - Using Athena Terraform scripts

Reposted. Author: 行者123. Updated: 2023-12-03 22:13:42

Amazon Athena reads data from the input Amazon S3 bucket using the IAM credentials of the user who submits the query; the query results are stored in a separate S3 bucket.

Here is the script from the HashiCorp site, https://www.terraform.io/docs/providers/aws/r/athena_database.html:

resource "aws_s3_bucket" "hoge" {
  bucket = "hoge"
}

resource "aws_athena_database" "hoge" {
  name   = "database_name"
  bucket = "${aws_s3_bucket.hoge.bucket}"
}
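To make the input/results separation explicit, a minimal sketch (all bucket and database names here are placeholders, not from the original) might declare the input data bucket alongside the results bucket:

```hcl
# Bucket holding the raw data files that Athena will query.
resource "aws_s3_bucket" "input" {
  bucket = "my-input-data"
}

# Separate bucket where Athena writes query results.
resource "aws_s3_bucket" "results" {
  bucket = "my-athena-results"
}

resource "aws_athena_database" "example" {
  name   = "example_db"
  # The database's bucket argument is for query results only;
  # the input location is defined later, on the table.
  bucket = "${aws_s3_bucket.results.bucket}"
}
```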

Where it says:
bucket - (Required) Name of s3 bucket to save the results of the query execution.

How do I specify the input S3 bucket in the Terraform script?

Best answer

You use the storage_descriptor argument in the aws_glue_catalog_table resource:

https://www.terraform.io/docs/providers/aws/r/glue_catalog_table.html#parquet-table-for-athena
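The linked documentation anchor shows a Parquet table; as a sketch (the bucket path and names are hypothetical), the key differences from a CSV table are the input/output formats and the SerDe:

```hcl
resource "aws_glue_catalog_table" "parquet_example" {
  name          = "parquet_table_name"
  database_name = "${aws_athena_database.your_athena_database.name}"
  table_type    = "EXTERNAL_TABLE"

  parameters = {
    EXTERNAL = "TRUE"
  }

  storage_descriptor {
    location      = "s3://<your-s3-bucket>/parquet/location/"
    input_format  = "org.apache.hadoop.hive.ql.io.parquet.MapredParquetInputFormat"
    output_format = "org.apache.hadoop.hive.ql.io.parquet.MapredParquetOutputFormat"

    ser_de_info {
      name                  = "parquet-serde"
      serialization_library = "org.apache.hadoop.hive.ql.io.parquet.serde.ParquetHiveSerDe"

      parameters = {
        "serialization.format" = "1"
      }
    }

    columns {
      name = "column1"
      type = "string"
    }
  }
}
```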

Here is an example that creates a table for CSV files:

resource "aws_glue_catalog_table" "aws_glue_catalog_table" {
  name          = "your_table_name"
  database_name = "${aws_athena_database.your_athena_database.name}"
  table_type    = "EXTERNAL_TABLE"

  parameters = {
    EXTERNAL = "TRUE"
  }

  storage_descriptor {
    # The input S3 bucket and prefix that hold the data files.
    location      = "s3://<your-s3-bucket>/your/file/location/"
    input_format  = "org.apache.hadoop.mapred.TextInputFormat"
    output_format = "org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat"

    ser_de_info {
      name                  = "my-serde"
      serialization_library = "org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe"

      parameters = {
        "field.delim"            = ","
        "skip.header.line.count" = "1"
      }
    }

    columns {
      name = "column1"
      type = "string"
    }

    columns {
      name = "column2"
      type = "string"
    }
  }
}
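Once the table exists, a query can reference it by the database and table names; a minimal sketch using aws_athena_named_query (the query name and SQL are illustrative, reusing the names from the example above):

```hcl
# Saves a reusable query in Athena; it is not executed by Terraform itself.
resource "aws_athena_named_query" "example" {
  name     = "select_sample_rows"
  database = "${aws_athena_database.your_athena_database.name}"
  query    = "SELECT column1, column2 FROM your_table_name LIMIT 10;"
}
```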

Regarding "amazon-web-services - Using Athena Terraform scripts", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/52505817/
