
Terraform wants to delete a bucket and complains that it is not empty. I don't want Terraform to delete the bucket. How do I tell it not to?


My Terraform run is producing the following apply output:

Terraform will perform the following actions:

# aws_lambda_permission.allow_bucket must be replaced
-/+ resource "aws_lambda_permission" "allow_bucket" {
action = "lambda:InvokeFunction"
function_name = "arn:aws:lambda:us-east-1:<secret>:function:ProggyS3ObjectCreatedHandler"
~ id = "AllowExecutionFromS3Bucket" -> (known after apply)
principal = "s3.amazonaws.com"
~ source_arn = "arn:aws:s3:::bucket-example-us-east-1" -> (known after apply) # forces replacement
statement_id = "AllowExecutionFromS3Bucket"
}

# aws_s3_bucket.bucket must be replaced
-/+ resource "aws_s3_bucket" "bucket" {
+ acceleration_status = (known after apply)
acl = "private"
~ arn = "arn:aws:s3:::bucket-example-us-east-1" -> (known after apply)
~ bucket = "bucket-example-us-east-1" -> "sftp-assembly-us-east-1" # forces replacement
~ bucket_domain_name = "bucket-example-us-east-1.s3.amazonaws.com" -> (known after apply)
~ bucket_regional_domain_name = "bucket-example-us-east-1.s3.amazonaws.com" -> (known after apply)
force_destroy = false
~ hosted_zone_id = "FOOBAR" -> (known after apply)
~ id = "bucket-example-us-east-1" -> (known after apply)
~ region = "us-east-1" -> (known after apply)
~ request_payer = "BucketOwner" -> (known after apply)
~ tags = {
~ "Name" = "bucket-example-us-east-1" -> "sftp-assembly-us-east-1"
}
+ website_domain = (known after apply)
+ website_endpoint = (known after apply)

versioning {
enabled = false
mfa_delete = false
}
}

# aws_s3_bucket_notification.bucket_notification must be replaced
-/+ resource "aws_s3_bucket_notification" "bucket_notification" {
~ bucket = "bucket-example-us-east-1" -> (known after apply) # forces replacement
~ id = "bucket-example-us-east-1" -> (known after apply)

~ lambda_function {
events = [
"s3:ObjectCreated:*",
]
~ id = "tf-s3-lambda-FOOBAR" -> (known after apply)
lambda_function_arn = "arn:aws:lambda:us-east-1:<secret>:function:ProggyS3ObjectCreatedHandler"
}
}

# module.large-file-breaker-lambda-primary.aws_lambda_function.lambda will be updated in-place
~ resource "aws_lambda_function" "lambda" {
arn = "arn:aws:lambda:us-east-1:<secret>:function:ProggyLargeFileBreaker"
function_name = "ProggyLargeFileBreaker"
handler = "break-large-files"
id = "ProggyLargeFileBreaker"
invoke_arn = "arn:aws:apigateway:us-east-1:lambda:path/2015-03-31/functions/arn:aws:lambda:us-east-1:<secret>:function:ProggyLargeFileBreaker/invocations"
last_modified = "2020-03-13T20:17:33.376+0000"
layers = []
memory_size = 3008
publish = false
qualified_arn = "arn:aws:lambda:us-east-1:<secret>:function:ProggyLargeFileBreaker:$LATEST"
reserved_concurrent_executions = -1
role = "arn:aws:iam::<secret>:role/ProggyLargeFileBreaker-role20200310215329691700000001"
runtime = "go1.x"
s3_bucket = "repo-us-east-1"
s3_key = "Proggy-large-file-breaker/1.0.10/break-large-files-1.0.10.zip"
source_code_hash = "TbwleLcqD+xL2zOYk6ZdiBWAAznCIiTS/6nzrWqYZhE="
source_code_size = 7294687
~ tags = {
"Name" = "ProggyLargeFileBreaker"
~ "TerraformRepo" = "https://git.com/wwexdevelopment/aws-terraform-projects/commits/tag/v2.0.11" -> "https://git.com/wwexdevelopment/aws-terraform-projects/commits/tag/v2.0.16"
"Version" = "1.0.10"
"account" = "main"
"environment" = "assembly"
"source" = "terraform"
"type" = "ops"
}
timeout = 360
version = "$LATEST"

environment {
variables = {
"ARCHIVE_BUCKET" = "Proggy-archive-assembly-us-east-1"
"S3_OBJECT_SIZE_LIMIT" = "15000000"
}
}

tracing_config {
mode = "PassThrough"
}
}

# module.notifier-lambda-primary.aws_lambda_function.lambda will be updated in-place
~ resource "aws_lambda_function" "lambda" {
arn = "arn:aws:lambda:us-east-1:<secret>:function:ProggyS3ObjectCreatedHandler"
function_name = "ProggyS3ObjectCreatedHandler"
handler = "s3-Proggy-notifier"
id = "ProggyS3ObjectCreatedHandler"
invoke_arn = "arn:aws:apigateway:us-east-1:lambda:path/2015-03-31/functions/arn:aws:lambda:us-east-1:<secret>:function:ProggyS3ObjectCreatedHandler/invocations"
last_modified = "2020-03-11T20:52:33.256+0000"
layers = []
memory_size = 128
publish = false
qualified_arn = "arn:aws:lambda:us-east-1:<secret>:function:ProggyS3ObjectCreatedHandler:$LATEST"
reserved_concurrent_executions = -1
role = "arn:aws:iam::<secret>:role/ProggyS3ObjectCreatedHandler-role20200310215329780600000001"
runtime = "go1.x"
s3_bucket = "repo-us-east-1"
s3_key = "s3-Proggy-notifier/1.0.55/s3-Proggy-notifier-1.0.55.zip"
source_code_hash = "4N+B1GpaUY/wB4S7tR1eWRnNuHnBExcEzmO+mqhQ5B4="
source_code_size = 6787828
~ tags = {
"Name" = "ProggyS3ObjectCreatedHandler"
~ "TerraformRepo" = "https://git.com/wwexdevelopment/aws-terraform-projects/commits/tag/v2.0.11" -> "https://git.com/wwexdevelopment/aws-terraform-projects/commits/tag/v2.0.16"
"Version" = "1.0.55"
"account" = "main"
"environment" = "assembly"
"source" = "terraform"
"type" = "ops"
}
timeout = 360
version = "$LATEST"

environment {
variables = {
"FILE_BREAKER_LAMBDA_FUNCTION_NAME" = "ProggyLargeFileBreaker"
"Proggy_SQS_QUEUE_NAME" = "Proggy_Proggy_edi-notification.fifo"
"Proggy_SQS_QUEUE_URL" = "https://sqs.us-east-1.amazonaws.com/<secret>/Proggy_Proggy_edi-notification.fifo"
"S3_OBJECT_SIZE_LIMIT" = "15000000"
}
}

tracing_config {
mode = "PassThrough"
}
}

Plan: 3 to add, 2 to change, 3 to destroy.

Do you want to perform these actions?
Terraform will perform the actions described above.
Only 'yes' will be accepted to approve.

Enter a value: yes

module.large-file-breaker-lambda-primary.aws_lambda_function.lambda: Modifying... [id=ProggyLargeFileBreaker]
module.notifier-lambda-primary.aws_lambda_function.lambda: Modifying... [id=ProggyS3ObjectCreatedHandler]
aws_lambda_permission.allow_bucket: Destroying... [id=AllowExecutionFromS3Bucket]
aws_s3_bucket_notification.bucket_notification: Destroying... [id=bucket-example-us-east-1]
module.large-file-breaker-lambda-primary.aws_lambda_function.lambda: Modifications complete after 0s [id=ProggyLargeFileBreaker]
aws_s3_bucket_notification.bucket_notification: Destruction complete after 0s
aws_lambda_permission.allow_bucket: Destruction complete after 0s
aws_s3_bucket.bucket: Destroying... [id=bucket-example-us-east-1]
module.notifier-lambda-primary.aws_lambda_function.lambda: Modifications complete after 0s [id=ProggyS3ObjectCreatedHandler]

Error: error deleting S3 Bucket (bucket-example-us-east-1): BucketNotEmpty: The bucket you tried to delete is not empty. You must delete all versions in the bucket.
status code: 409, request id: 0916517C5F1DF875, host id: +l5yzHjw7EMmdT4xdCmgg0Zx5W7zxpEil/dUWeJmnL8IvfPw2uKgvJ2Ee7utlRI0rkohdY+pjYI=

It wants to delete that bucket because I previously told it to create it (I think). But I don't want Terraform to delete that bucket. The bucket it is trying to delete is being used by another team.

How do I tell Terraform to perform the apply, but not delete that bucket?

Best Answer

Put a prevent_destroy lifecycle rule in the S3 bucket resource to make sure it cannot be deleted by accident:

  lifecycle {
    prevent_destroy = true
  }
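
For context, here is a minimal sketch of what the full bucket resource might look like with that lifecycle block in place. The bucket name, acl, and tag are taken from the plan output above; adjust them to your actual configuration:

resource "aws_s3_bucket" "bucket" {
  # Keep the original name; renaming the bucket is what forces the destroy/create cycle in the plan above.
  bucket = "bucket-example-us-east-1"
  acl    = "private"

  tags = {
    Name = "bucket-example-us-east-1"
  }

  lifecycle {
    # Terraform will refuse to apply any plan that would destroy this resource.
    prevent_destroy = true
  }
}

Note that prevent_destroy does not silently skip the deletion; it makes Terraform error out whenever a plan would destroy the bucket, which gives you a chance to fix the configuration (for example, by reverting the bucket rename that forces replacement).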

Remove all resource definitions other than the S3 bucket from the .tf files. Run terraform plan to see whether Terraform would destroy anything other than the S3 bucket.

If the plan shows what you expect, run terraform apply.
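
A rough sketch of that verification loop (standard Terraform CLI usage; the plan file name is just an example):

# Save the plan to a file so the apply runs exactly what you reviewed.
terraform plan -out=tfplan

# Review the saved plan: aws_s3_bucket.bucket must not appear as a resource to destroy.
terraform show tfplan

# Apply only if the plan looks right.
terraform apply tfplan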

Then re-create the resources you need other than the S3 bucket, but place them in a different location, so that the S3 bucket is left out.

Alternatively, use terraform state rm to remove the resources other than the S3 bucket from the state file, then run terraform import to import them into new .tf files in a different location.
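
A sketch of that alternative, using two of the resource addresses from the plan output above as examples. Run terraform state rm in the old configuration and terraform import in the new one; the import IDs shown follow the AWS provider's documented formats (function_name/statement_id for aws_lambda_permission, the bucket name for aws_s3_bucket_notification), but verify them against the provider docs for your version:

# In the old configuration: stop tracking the resources that should move.
terraform state rm aws_lambda_permission.allow_bucket
terraform state rm aws_s3_bucket_notification.bucket_notification

# In the new configuration (after re-creating matching resource blocks there):
terraform import aws_lambda_permission.allow_bucket ProggyS3ObjectCreatedHandler/AllowExecutionFromS3Bucket
terraform import aws_s3_bucket_notification.bucket_notification bucket-example-us-east-1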

Regarding "Terraform wants to delete a bucket and complains that it is not empty. I don't want Terraform to delete the bucket. How do I tell it not to?", a similar question can be found on Stack Overflow: https://stackoverflow.com/questions/60677996/
