Unable to be used with terraform workspaces
eloo opened this issue · 3 comments
Describe the Bug
Hi,
not sure if it's a bug or an unimplemented feature, but it looks like the module is not working properly together with Terraform workspaces.
The issue seems to be that when a workspace is switched, this module tries to create the S3 bucket and DynamoDB table again. But these two resources already exist, so the apply fails.
Using the workspace name in the bucket and DynamoDB table names causes issues with the backend.tf, because those names would change every time the workspace is switched.
Setting `enabled = false` will cause the first Terraform workspace, which created the S3 bucket, to destroy it again. That doesn't sound good either :D
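For context on why putting the workspace name into the backend configuration doesn't work: Terraform `backend` blocks do not allow interpolation, so the bucket name cannot vary with `terraform.workspace`. A minimal sketch of an S3 backend (the bucket and table names are hypothetical):

```hcl
terraform {
  backend "s3" {
    # Backend blocks cannot reference variables or terraform.workspace,
    # so these names must be static.
    bucket         = "acme-terraform-state" # hypothetical bucket name
    key            = "terraform.tfstate"
    region         = "us-east-1"
    dynamodb_table = "acme-terraform-locks" # hypothetical table name
  }
}
```

Note that the S3 backend already stores each workspace's state under a separate key prefix (`env:/<workspace>/...` by default), so a single bucket serves all workspaces.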
Expected Behavior
The module should check whether the expected bucket already exists and, if so, skip its creation. Maybe something like the `enabled` flag, but named `skip_creation_if_resources_exist` or similar.
Steps to Reproduce
Steps to reproduce the behavior:
- Create two workspaces
- Apply the first workspace and see the S3 bucket and DynamoDB table get created
- Switch to the second workspace
- Apply the second workspace and see errors during S3 bucket and DynamoDB table creation
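Assuming a standard Terraform CLI setup, the steps above look roughly like this (a sketch, not an exact transcript):

```shell
# First workspace: the module provisions the backend resources.
terraform workspace new one
terraform apply    # creates the S3 bucket and DynamoDB table

# Second workspace: the module tries to create the same resources again.
terraform workspace new two
terraform apply    # fails because the bucket and table already exist
```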
Additional Context
Maybe this is simply not possible, and for multi-workspace usage we need to create a separate Terraform project (workspace) that takes care of these resources, but then an example would be nice.
Thanks
Instead of setting `enabled` to `false`, since you're working with workspaces you can have the default workspace house the state for the backend, and after bootstrap work simply with other workspaces. Then, set `enabled` as follows:
```hcl
module "terraform_state_backend" {
  enabled = terraform.workspace == "default"
  source  = "cloudposse/tfstate-backend/aws"
  version = "0.38.1"
  # ...
}
```
This way, the module will set the `count` argument of every `resource` block it uses to `0`, turning them into no-ops. Additionally, if you want to avoid provisioning infrastructure of your own save for the one supporting the S3+DynamoDB backend, you can separate the rest of your configuration into a module and use `count`:
```hcl
module "rest_of_my_config" {
  # Avoid creating anything within if we're in the default workspace.
  count = terraform.workspace == "default" ? 0 : 1
  # ...
}
```
The module creates an S3 backend to be used by Terraform. Usually this is done once per company/organization, so I do not understand why you are trying to create one per workspace.
Whether or not you are using workspaces, it is your responsibility to ensure you are not creating duplicate resources by using different inputs for different instantiations of this module. This module provides several inputs you can use to vary the name of created resources:
- `namespace`
- `tenant`
- `environment`
- `stage`
- `name`
- `attributes`
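As a sketch (the `acme` namespace and the stage values are hypothetical), two instantiations can be kept from colliding by varying these inputs:

```hcl
module "tfstate_backend_prod" {
  source    = "cloudposse/tfstate-backend/aws"
  version   = "0.38.1"
  namespace = "acme"
  stage     = "prod"
  name      = "terraform"
  # ...
}

module "tfstate_backend_dev" {
  source    = "cloudposse/tfstate-backend/aws"
  version   = "0.38.1"
  namespace = "acme"
  stage     = "dev"
  name      = "terraform"
  # ...
}
```

Each instantiation then derives a distinct bucket and table name from its label inputs, so the two never conflict.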
Closing this out as it's a misunderstanding by the module consumer on how to use this module. @eloo if you're still struggling with how to use this module correctly following the above comments, please feel free to ask questions here, ping me, and I can try to help you understand the correct usage in your scenario.