gruntwork-io/cloud-nuke

cloud-nuke s3 fails to delete buckets that are not empty

Closed · 1 comment

version=v0.32.0
Command I ran:

cloud-nuke aws \
  --resource-type s3 \
  --region us-west-2 \
  --config my/config_file \
  --older-than 200h \
  --log-level debug

As the title suggests, this command fails to delete buckets that are not empty. It also appears to ignore the --log-level flag, so I can't see why it's failing to delete the objects in those buckets.

At the end of the run I get:
INFO No resources touched in this run.

Even though it's found resources to nuke:
THE FOLLOWING 36 AWS RESOURCES WILL BE NUKED:

Occasionally I also get the output below, which I'm not sure how to interpret: the column header says Deleted Successfully, but every row shows errors.

| Identifier  | Resource Type | Deleted Successfully                        |
| ----------- | ------------- | ------------------------------------------- |
| bucket-6716 | S3 Bucket     | ❌ 61 errors occurred: * BucketNotEmpty: Th |
| bucket-6781 | S3 Bucket     | ❌ 61 errors occurred: * BucketNotEmpty: Th |
| bucket-6765 | S3 Bucket     | ❌ 61 errors occurred: * BucketNotEmpty: Th |
| bucket-6765 | S3 Bucket     | ❌ 61 errors occurred: * BucketNotEmpty: Th |
| bucket-6781 | S3 Bucket     | ❌ 61 errors occurred: * BucketNotEmpty: Th |
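
My guess (and it is only a guess) is that these are versioned buckets and only the current objects get removed, leaving object versions and delete markers behind, which is what BucketNotEmpty usually points at. For comparison, this is roughly what it takes to empty such a bucket by hand with the AWS CLI. BUCKET is a placeholder, the sketch ignores pagination (so it only clears the first 1,000 versions and markers per call), and delete-objects errors out if there is nothing left to delete:

BUCKET=bucket-6716   # placeholder; any of the buckets above

# Remove object versions (first 1,000 only; re-run or paginate for more)
aws s3api delete-objects --bucket "$BUCKET" \
  --delete "$(aws s3api list-object-versions --bucket "$BUCKET" \
    --query '{Objects: Versions[].{Key: Key, VersionId: VersionId}}' \
    --output json)"

# Remove delete markers the same way
aws s3api delete-objects --bucket "$BUCKET" \
  --delete "$(aws s3api list-object-versions --bucket "$BUCKET" \
    --query '{Objects: DeleteMarkers[].{Key: Key, VersionId: VersionId}}' \
    --output json)"

# Only then can the bucket itself be removed
aws s3 rb "s3://$BUCKET"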

Hi @kate-syberspace, I'm trying to reproduce the issue that you are facing. Can you share the exact configuration of the S3 buckets you were trying to delete?

Specifically these:

  • object versions
  • delete markers
  • prefixes/subdirectories
  • lifecycle policies
  • cross-region replication

Do you have one or more of these configured on/associated with the S3 buckets? The commands sketched below should help you check.
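
If it helps, something like these AWS CLI calls should show each of those settings. BUCKET is a placeholder for one of the affected buckets, and the last two calls return an error when nothing is configured, which is also an answer:

BUCKET=bucket-6716   # placeholder

# Versioning status, plus a peek at object versions and delete markers
aws s3api get-bucket-versioning --bucket "$BUCKET"
aws s3api list-object-versions --bucket "$BUCKET" --max-items 5

# Top-level prefixes ("subdirectories")
aws s3api list-objects-v2 --bucket "$BUCKET" --delimiter / --query CommonPrefixes

# Lifecycle policies (errors if none are configured)
aws s3api get-bucket-lifecycle-configuration --bucket "$BUCKET"

# Cross-region replication (errors if none is configured)
aws s3api get-bucket-replication --bucket "$BUCKET"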