bitovi/github-actions-deploy-stackstorm

Add toggle for S3 bucket destruction

PhillypHenning opened this issue · 0 comments

Conversation originally had here

I’d be mighty careful with something like
bf21c2f
Something I tend to do, and I imagine others do as well, is this:
I keep multiple Terraform state files in a single S3 bucket and separate them by project name.
Example:

S3:phils-tf-states/project1/state.tfstate
S3:phils-tf-states/project2/state.tfstate

I’d probably throw a toggle in there for the user and default S3 bucket deletion to false.

Specifically, I’m speaking about the contents of the S3 buckets.
If we create the bucket, then generally speaking we can trust that we know its contents: just a state file, since we don’t store anything else there.
My concern is for people who don’t follow our footprint. If they specify an S3 bucket that is already in use and contains tfstate files from other projects (which I tend to do), then deleting the bucket means deleting not only the current project’s state but also all of the content that isn’t handled, specified, or maintained by any of our products.
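
For illustration, here’s a minimal sketch of what that toggle could look like if the bucket is managed in Terraform. The variable names and resource are hypothetical, not the action’s actual inputs:

```hcl
# Hypothetical toggle: defaults to keeping the bucket (and its contents) on teardown.
variable "tf_state_bucket_name" {
  description = "Name of the S3 bucket that holds the Terraform state"
  type        = string
}

variable "destroy_s3_bucket" {
  description = "Allow the state bucket and its contents to be deleted on teardown"
  type        = bool
  default     = false
}

resource "aws_s3_bucket" "tf_state" {
  bucket = var.tf_state_bucket_name

  # With force_destroy = false (the default here), Terraform refuses to destroy
  # a non-empty bucket, so state files from other projects are never removed
  # implicitly; the user has to opt in to deletion explicitly.
  force_destroy = var.destroy_s3_bucket
}
```

The point of defaulting to `false` is that a shared bucket full of other projects’ state files fails loudly on destroy instead of being silently emptied.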