
amazon-web-services - How to recursively upload a folder to AWS S3 using Ansible

Reposted · Author: 行者123 · Updated: 2023-12-03 15:14:06

I am using Ansible to deploy my application.
I want to upload my trunk assets to a newly created bucket. Here, {{hostvars.localhost.public_bucket}} is the bucket name, and {{client}}/{{version_id}}/assets/admin is the path to a folder containing multi-level subfolders and the assets to upload. This is what I did:

- s3:
    aws_access_key: "{{ lookup('env','AWS_ACCESS_KEY_ID') }}"
    aws_secret_key: "{{ lookup('env','AWS_SECRET_ACCESS_KEY') }}"
    bucket: "{{ hostvars.localhost.public_bucket }}"
    object: "{{ client }}/{{ version_id }}/assets/admin"
    src: "{{ trunk }}/public/assets/admin"
    mode: put

This is the error message:

   fatal: [x.y.z.t]: FAILED! => {"changed": false, "failed": true, "invocation": {"module_name": "s3"}, "msg": "MODULE FAILURE", "parsed": false}

   Traceback (most recent call last):
     File "/home/ubuntu/.ansible/tmp/ansible-tmp-1468581761.67-193149771659393/s3", line 2868, in <module>
       main()
     File "/home/ubuntu/.ansible/tmp/ansible-tmp-1468581761.67-193149771659393/s3", line 561, in main
       upload_s3file(module, s3, bucket, obj, src, expiry, metadata, encrypt, headers)
     File "/home/ubuntu/.ansible/tmp/ansible-tmp-1468581761.67-193149771659393/s3", line 307, in upload_s3file
       key.set_contents_from_filename(src, encrypt_key=encrypt, headers=headers)
     File "/usr/local/lib/python2.7/dist-packages/boto/s3/key.py", line 1358, in set_contents_from_filename
       with open(filename, 'rb') as fp:
   IOError: [Errno 21] Is a directory: '/home/abcd/efgh/public/assets/admin'

I went through the documentation but could not find a recursive option for the Ansible s3 module.
Is this a bug, or am I missing something?

Best Answer

Starting with Ansible 2.3, you can use s3_sync:

- name: basic upload
  s3_sync:
    bucket: tedder
    file_root: roles/s3/files/

Note: if you are using a non-default region, you should set region explicitly, otherwise you will get a somewhat obscure error: An error occurred (400) when calling the HeadObject operation: Bad Request
Here is a complete playbook matching what you were trying to do above:
- hosts: localhost
  vars:
    aws_access_key: "{{ lookup('env','AWS_ACCESS_KEY_ID') }}"
    aws_secret_key: "{{ lookup('env','AWS_SECRET_ACCESS_KEY') }}"
    bucket: "{{ hostvars.localhost.public_bucket }}"
  tasks:
    - name: Upload files
      s3_sync:
        aws_access_key: '{{ aws_access_key }}'
        aws_secret_key: '{{ aws_secret_key }}'
        bucket: '{{ bucket }}'
        file_root: "{{ trunk }}/public/assets/admin"
        key_prefix: "{{ client }}/{{ version_id }}/assets/admin"
        permission: public-read
        region: eu-central-1

Notes:
  • You can probably remove region; I only added it to illustrate my point above.
  • I added explicit keys here. You can (and probably should) use environment variables for this instead:

  • From the docs:

    If parameters are not set within the module, the following environment variables can be used in decreasing order of precedence AWS_URL or EC2_URL, AWS_ACCESS_KEY_ID or AWS_ACCESS_KEY or EC2_ACCESS_KEY, AWS_SECRET_ACCESS_KEY or AWS_SECRET_KEY or EC2_SECRET_KEY, AWS_SECURITY_TOKEN or EC2_SECURITY_TOKEN, AWS_REGION or EC2_REGION
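Putting the notes above together, a minimal sketch of the same upload that relies on the environment-variable precedence quoted from the docs (this assumes AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, and AWS_REGION are already exported in the shell running Ansible) could look like:

```yaml
# Sketch only: credentials and region come from environment variables
# (AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY / AWS_REGION), so the
# explicit aws_access_key, aws_secret_key, and region parameters are dropped.
- hosts: localhost
  tasks:
    - name: Upload files
      s3_sync:
        bucket: "{{ hostvars.localhost.public_bucket }}"
        file_root: "{{ trunk }}/public/assets/admin"
        key_prefix: "{{ client }}/{{ version_id }}/assets/admin"
        permission: public-read
```

This keeps secrets out of the playbook itself, which matters if the playbook is committed to version control.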

    Regarding "amazon-web-services - How to recursively upload a folder to AWS S3 using Ansible", a similar question was found on Stack Overflow: https://stackoverflow.com/questions/38397638/
