The S3 module currently has no built-in way to recursively sync a bucket to disk.
In theory, you could collect the keys for download with something like:

```yaml
- name: register keys for synchronization
  s3:
    mode: list
    bucket: hosts
    object: /data/*
  register: s3_bucket_items

- name: sync s3 bucket to disk
  s3:
    mode: get
    bucket: hosts
    object: "{{ item }}"
    dest: /etc/data/conf/
  with_items: "{{ s3_bucket_items.s3_keys }}"
```
Although this solution is often suggested, it does not seem to work with current versions of Ansible/boto, due to a bug with nested S3 "directories" (see this bug report for more information), and because the Ansible S3 module does not create subdirectories for keys. You may also run into memory problems with this approach when syncing very large buckets.
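If you want to experiment with that approach anyway, you would at least need to pre-create the local subdirectories yourself, since the s3 module's get mode will not. A minimal sketch, reusing the registered `s3_bucket_items` from the example above (and still subject to the boto bug just mentioned):

```yaml
# Sketch: create the local directory for each key before downloading it,
# because the s3 module's get mode does not create missing subdirectories.
- name: pre-create local subdirectories for each key
  file:
    path: "/etc/data/conf/{{ item | dirname }}"
    state: directory
  with_items: "{{ s3_bucket_items.s3_keys }}"
```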
I would also add that you almost certainly do not want to hardcode credentials in your playbooks; I suggest using IAM EC2 instance profiles instead, which are much more secure and convenient.
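With an instance profile attached, boto discovers temporary credentials via the EC2 instance metadata at runtime, so the task needs no `aws_access_key`/`aws_secret_key` parameters at all. A minimal sketch (the bucket name `hosts` and the object path are placeholders carried over from the example above):

```yaml
# No credentials appear in the playbook; boto resolves them from the
# EC2 instance profile when this runs on an instance with a role attached.
- name: fetch an object using instance-profile credentials
  s3:
    mode: get
    bucket: hosts
    object: /data/example.conf
    dest: /etc/data/conf/example.conf
```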
The solution that works for me is this:
```yaml
- name: Sync directory from S3 to disk
  command: "s3cmd sync -q --no-preserve s3://hosts/{{ item }}/ /etc/data/conf/"
  with_items:
    - data
```
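Note that this assumes `s3cmd` is already installed on the target host. A sketch of a prior task to handle that (the package name `s3cmd` is what Debian/Ubuntu use; other distributions may differ):

```yaml
# Install s3cmd before the sync task runs.
- name: ensure s3cmd is installed
  apt:
    name: s3cmd
    state: present
```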