Miserlou/Zappa

Temporary S3 Bucket is not deleted


Context

In my zappa_settings.yml, I set slim_handler=false and bucket_name=None. After several months, my account hit the S3 bucket limit (200). Most of those buckets were created by Zappa and are empty.

Expected Behavior

With slim_handler=false and bucket_name=None, no S3 bucket should be created or left behind after a successful deployment.

Actual Behavior

Zappa creates a temporary S3 bucket and leaves it behind, empty, after deployment. One workaround is to specify bucket_name explicitly in zappa_settings.yml, but it would be better if Zappa deleted the temporary bucket automatically.

Possible Fix

Change this code to also delete the bucket with boto3:

self.s3_client.delete_object(Bucket=bucket_name, Key=file_name)
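Note that S3 refuses to delete a bucket that still contains objects, so a fix would need to empty the bucket before removing it. Below is a minimal sketch of what such a cleanup step could look like; `delete_stale_bucket` and `_StubS3Client` are hypothetical names introduced for illustration (the stub stands in for a real boto3 S3 client so the example runs without AWS credentials):

```python
def delete_stale_bucket(s3_client, bucket_name):
    """Empty a bucket, then delete it (S3 rejects deletes of non-empty buckets)."""
    paginator = s3_client.get_paginator("list_objects_v2")
    for page in paginator.paginate(Bucket=bucket_name):
        for obj in page.get("Contents", []):
            s3_client.delete_object(Bucket=bucket_name, Key=obj["Key"])
    s3_client.delete_bucket(Bucket=bucket_name)


class _StubS3Client:
    """Tiny in-memory stand-in for boto3's S3 client, for illustration only."""

    def __init__(self, buckets):
        self.buckets = buckets  # {bucket_name: {key: body}}

    def get_paginator(self, operation_name):
        client = self

        class _Paginator:
            def paginate(self, Bucket):
                # Snapshot keys so deletion during iteration is safe.
                yield {"Contents": [{"Key": k} for k in list(client.buckets[Bucket])]}

        return _Paginator()

    def delete_object(self, Bucket, Key):
        del self.buckets[Bucket][Key]

    def delete_bucket(self, Bucket):
        if self.buckets[Bucket]:
            raise RuntimeError("BucketNotEmpty")  # mirrors real S3 behavior
        del self.buckets[Bucket]


stub = _StubS3Client({"zappa-x1y2z3": {"handler.zip": b"zip bytes"}})
delete_stale_bucket(stub, "zappa-x1y2z3")
print("zappa-x1y2z3" in stub.buckets)  # False: bucket emptied, then removed
```

Against a real account the same helper would be called with `boto3.client("s3")`, using the actual auto-generated bucket name.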

Steps to Reproduce

  1. Set slim_handler=false and bucket_name=None in zappa_settings.yml
  2. Deploy the Lambda function
  3. Check S3 for an empty, auto-generated Zappa bucket
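Step 3 can also be checked programmatically: Zappa's auto-generated buckets have names starting with `zappa-`. The sketch below filters a `list_buckets`-style response for them; the response dict here is a canned example (with real credentials it would come from `boto3.client("s3").list_buckets()`), and the bucket names are made up:

```python
# Canned stand-in for an s3_client.list_buckets() response.
response = {"Buckets": [
    {"Name": "zappa-x7k2m9p1"},   # hypothetical auto-generated Zappa bucket
    {"Name": "my-app-assets"},    # unrelated bucket, should be ignored
]}

# Zappa's temporary buckets are named with a "zappa-" prefix.
zappa_buckets = [b["Name"] for b in response["Buckets"]
                 if b["Name"].startswith("zappa-")]
print(zappa_buckets)  # ['zappa-x7k2m9p1']
```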

Your Environment

  • Zappa version used: 0.46.1
  • Operating System and Python version: linux, python 3.6
  • The output of pip freeze:
  • Link to your project (optional):
  • Your zappa_settings.py:

I'd like to bump this issue for consideration.

I don't think this is actually a bug, since nothing in the docs suggests the bucket is cleaned up. However, the problem of running out of buckets is real. In particular, when using the global deployment option together with bucket_name=None, the default bucket limit is reached after only a few updates.

The original requester's suggestion doesn't quite work, because the zappa.core.Zappa.remove_from_s3 method is sometimes called while the bucket is still needed. I think a better approach is to add a new delete_s3_bucket option that defaults to False, which preserves backwards compatibility: no surprise bucket deletions when users upgrade.
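For illustration, the proposed option might look like this in a settings file (a hypothetical fragment based on the description above; only `slim_handler` and `bucket_name` appear in the original report, and `delete_s3_bucket` is the new opt-in flag):

```yaml
# zappa_settings.yml (hypothetical fragment)
production:
  slim_handler: false
  bucket_name: null        # let Zappa generate a temporary bucket
  delete_s3_bucket: true   # proposed opt-in: remove the bucket after deploy
```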

I've submitted a PR to add the new option: #1811