hashicorp/terraform

Add support for lifecycle meta-argument in modules

Opened this issue · 80 comments

Current Terraform Version

Terraform v0.14.3

Use-cases

Terraform currently only allows the lifecycle meta-argument to be used within the declaration of a resource. It would be really useful if users could specify lifecycle blocks on modules that then apply to some or all of the resources within that module.

The main use case I have is being able to use ignore_changes to instruct Terraform to ignore changes to resources, or to particular attributes of resources.

Proposal

For example, let's assume I create a Terraform module to be used in AWS, and as part of that module I create a DynamoDB table. DynamoDB tables (among other resources) can autoscale, with the autoscaling configuration defined by a separate resource. Consequently, a lifecycle block must be used to prevent the resource that creates the DynamoDB table from modifying the read/write capacity.
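Concretely, the module author has to hardcode something like this inside the module today (a sketch only; the variable and resource names are illustrative):

resource "aws_dynamodb_table" "table" {
  name           = var.name
  hash_key       = var.hash_key
  billing_mode   = "PROVISIONED"
  read_capacity  = 5
  write_capacity = 5

  attribute {
    name = var.hash_key
    type = "S"
  }

  lifecycle {
    # Autoscaling (aws_appautoscaling_target/policy) owns the capacity
    # values, so the table resource must not reconcile them.
    ignore_changes = [read_capacity, write_capacity]
  }
}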

In this scenario I currently have to choose between supporting autoscaling and not supporting it, as I cannot define a lifecycle block with the ignore_changes argument from outside the module.
Ideally, I'd like to be able to do something like this:

module "my-module" {
  source = "./my-module/"
  name = "foo-service"

  hash_key = "FooID"
  attributes = [
    {
      name = "FooID"
      type = "S"
    }
  ]
  lifecycle {
    ignore_changes = [
      aws_dynamodb_table.table.read_capacity,
      aws_dynamodb_table.table.write_capacity
    ]
  }
}

Being able to apply lifecycle blocks in the way shown above would enable me to manage the attributes of this resource outside of this module (whether via some automated process, or another resource/module definition), and would let more people use this module, since it would cover a wider range of use cases.

The documentation states that the lifecycle block only supports literal values. I'm unsure whether my proposal would fall foul of that, as it's referring to resources (and possibly attributes) that are created within the module itself 🤔

References

I am also interested in such a feature, though in my case it would be to use a prevent_destroy lifecycle directive.

It would be very useful with the AWS RDS module.

module "db" {
  source = "terraform-aws-modules/rds/aws"
  ...
  snapshot_identifier = "..."
  password = "..."
  
  lifecycle {
    ignore_changes = [
      snapshot_identifier,
      password
    ]
  }
  ...
}

This would be a great feature, especially now that Terraform Modules support for_each

My main use case is prevent_destroy on DDB and S3, both of which hold persistent end-user data that I want to protect against accidental replacement of objects.

Good addition: more and more people are starting to use modules like resources, so being able to use the lifecycle block at the module level would be amazing.

It feels like having lifecycle blocks support dynamic configuration in general would be better than adding support for lifecycle blocks in modules. It would mean modules wouldn't need special support for this; instead, vars and custom logic could be used to set different lifecycle options on resources inside the module (ensuring you can encapsulate the logic, which the approach suggested in this ticket doesn't allow for).
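A hypothetical sketch of that idea (not valid today, since lifecycle arguments accept only literal values; the variable names are illustrative):

variable "bucket_name" {
  type = string
}

variable "prevent_destroy" {
  type    = bool
  default = false
}

resource "aws_s3_bucket" "this" {
  bucket = var.bucket_name

  lifecycle {
    # Hypothetical: today this errors, because lifecycle arguments
    # cannot reference variables.
    prevent_destroy = var.prevent_destroy
  }
}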

Hi @antonbabenko, any possibility of considering the request below?
#28913

jbcom commented

This would also be incredibly helpful for preventing things from ever being part of a destroy

Please add this functionality. Modules are severely limited if you can't use lifecycle metadata when calling them.

This would be a very useful feature, as opposed to editing the module definition to support lifecycle ignores.

I also ran into issues with this using aminueza/terraform-provider-minio

+1, was really shocked to discover this isn't a module-level thing.

+1 super key feature for resources like KMS

+1, absolutely necessary feature, especially to prevent deletion of certain resources, as others have mentioned above.

+1, just ran into this and also shocked it's not here. If I had the time, I'd see about contributing this change. My use case is just like @jaceklabuda's, but for engine_version, since I have auto-update on.

module "rds" {
  source = "terraform-aws-modules/rds/aws"
  ...
  engine_version = "5.7.33"
  
  lifecycle {
    ignore_changes = [
      engine_version
    ]
  }
  ...
}

@OGProgrammer You can set engine_version = "5.7" instead of "5.7.33" in the RDS module you are using. This will prevent it from showing a diff every time the patch version is updated. See the aws_db_instance docs for engine_version.
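For illustration, a minimal sketch of that suggestion (all other module arguments elided):

module "rds" {
  source = "terraform-aws-modules/rds/aws"
  ...

  # Pinning only major.minor lets AWS apply patch releases without
  # Terraform proposing a change on every new patch version.
  engine_version = "5.7"
  ...
}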

Just sharing my experience here, in case it helps :)
If you do not set the complete version (major, minor, and patch number), AWS always offers the latest patch release.
That means if the version is set to 5.7 and, at deployment time, the latest version offered by AWS is 5.7.30, that is what gets installed. The next time you deploy the same package, if AWS is offering 5.7.35 (new patches having been published), Terraform will show a diff, and applying the change usually leads to an outage, unless you have set scheduled maintenance windows (which prevent patch upgrades).
So I also think setting exact versions is better than ignoring them via a lifecycle block, because it makes troubleshooting easier. It is best that patch versions used in the code are updated during regular maintenance periods.

A barebones implementation of prevent_destroy for modules should prevent destruction of the module itself (via a terraform destroy command), not destruction of the resources inside it.

Additional work to allow resource specific lifecycles within the module, or to prevent all resources in the module from being destroyed would be nice as well, but I don't see them as immediately essential.

In case it helps: This would also be helpful for blue/green deployments where there's a 50% chance of the primary listener having its default_action updated with the wrong target group (in the case of having two TGs). Namely in the terraform-aws-modules/alb/aws module. Using the module beats having to manage several different TF resources.

For anyone who encounters this issue and wants to protect module resources, we were able to find a bit of a hacky but workable solution within a wrapper module using:

resource "null_resource" "prevent_destroy" {
  count = var.prevent_destroy ? 1 : 0

  depends_on = [
    module.s3_bucket ## this is the official aws s3 module
  ]

  triggers = {
    bucket_id = module.s3_bucket.s3_bucket_id
  }

  lifecycle {
    prevent_destroy = true
  }
}

So far it seems to be a one-way flag which can't be turned off, but it works well to protect buckets where content recovery would be a lengthy and disruptive task.
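Called from a root module, the wrapper might look something like this (a sketch; the source path is hypothetical and the variable name comes from the snippet above):

module "protected_bucket" {
  source = "./s3-wrapper" # hypothetical wrapper containing the null_resource guard

  prevent_destroy = true
  ...
}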

We could also really do with this feature. We have a reasonably extensive library of Terraform modules wrapping everything from EC2 instances to application stacks. Taking the EC2 module as an example, we use a data source like the example from the docs to supply a "latest" AMI at build time:

data "aws_ami" "example" {
  most_recent = true

  owners = ["self"]
  tags = {
    Name   = "app-server"
    Tested = "true"
  }
}

Most of our infrastructure is immutable, so a later AMI results in recreation of any EC2 instances sourced from the module, but for some infra we'd like to use ignore_changes for the AMI, as you can with a resource. This proposal would make achieving that much easier.
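For comparison, this is what the resource-level equivalent looks like today (a minimal sketch; the resource is illustrative):

resource "aws_instance" "app" {
  ami           = data.aws_ami.example.id
  instance_type = "t3.micro"

  lifecycle {
    # Keep the instance on the AMI it was created with, even after
    # the data source starts returning a newer image.
    ignore_changes = [ami]
  }
}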

I want to parametrize the create_before_destroy value of an EC2 instance in a module. Modules for different use cases will need different create_before_destroy behavior.

I currently stand up Azure infrastructure, namely vnets and subnets, through a module. We have custom route tables with the next hop IP set. With deployed NVAs like FortiGates and F5s, the next hop IP changes depending on which device is currently active. When a pipeline runs, it sees the changes and reverts them. Not every scenario is like this, so I would rather place a lifecycle block in the module call for a specific repo than in the module itself for everyone.

I just ran into this with a custom module that we're using to wrap AWS. We see a lot of noise around secret values, and it'd be nice to tell Terraform not to worry about them.

Has there been any movement or thought about this from the devs? I still can't find a workaround for the primary listener of an ALB having its default_action updated in-place with the wrong target group (in the case of having two TGs), using one of the AWS TF modules. (If anyone knows of a workaround I'd be very interested--I don't think preventing destruction works, though.)

Hmm,
I had to use a workaround so that EC2 instances would not be replaced by a dynamically provided AMI ID, but I don't like it. The use of modules is very widespread and lifecycle is pretty basic. I tried some public modules (terraform-aws-modules by @antonbabenko - kudos for his hard work), but they lack some basics which we use on-premise; in our own modules we use lifecycle inside a resource within the module, and it cannot be provided as an argument to the module.
I would like to see ignore_changes be providable dynamically through the module.

It has been a while since this was opened; is there any workaround or ETA on getting this?
My use case is to be able to ignore the desired_size config of an EKS managed node group's scaling config, so that it does not tear down my nodes to match the desired size whenever there's an update to the infra config. While I can do this through https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/eks_node_group#ignoring-changes-to-desired-size, there is still no way of doing it if I am using the AWS module to create the EKS cluster.

Any news?

It is necessary in order to extend the universal use of modules.

+1 this would make life so much better

+1 this would be very helpful

+1 Very needed and will be extremely useful!

crw commented

Just a reminder to please use the 👍 reaction on the original post to upvote issues - we do sort by most upvoted to understand which issues are the most important. This also reduces "noise" in the notification feed for folks following this issue. Thanks!

@crw is there a threshold needed to get this enhancement going? This issue has been open a while.

crw commented

We have been maintaining a list of the Top 25 most-upvoted/commented issues. It isn't obvious from the changelog but we have been picking off a Top 25 issue (usually more than one) per release. This issue is attached to one of the Top 25 (often a few issues are thematically or systematically adjacent), but I have no update on it beyond that.

Here is a similar issue with 132 👍 upvotes.
#21546

If the two issues' votes were combined, it would rise to the top 10.
https://github.com/hashicorp/terraform/issues?q=is%3Aissue+is%3Aopen+sort%3Areactions-%2B1-desc

+1 for the important feature.

+1 for feature

+1 for the feature

We use Azure Policy to set tags at the resource group level, so we want to tell Terraform to ignore all changes to these tags at the resource level.

For example:

module "my-module" {
  source = "./my-module/"
  name = "foo-service"

  lifecycle {
    ignore_changes = [
      tags["CostCenter"],
      tags["ProjectName"]
    ]
  }
}

+1 for feature

My use case is exactly the same as the EC2 AMI one above. Please implement this feature.

crw commented

Thanks for your enthusiasm on this feature request! To respect the GitHub notifications queue of everyone on this thread, please use the 👍 emoji on the original issue description to indicate your support for this issue. Please avoid using "+1" comments as they spam everyone else on the thread. Thanks for your consideration!

Hi team, what is the priority of this item? I don't see any mention of it being on the backlog or roadmap.

This is a very important feature, as some resources take quite some time to replace, and replacing a database in production is not good.

It's been 2 years.

The most we have @gruckion is what @crw said above: top 25 is on their hit list.
There is no official roadmap from what I know.

Upvote and hope.

+1 this would make using modules easier and more flexible.

crw commented

@gruckion @chefcai This is correct, this specific issue is not on the roadmap for 1.4 at this time, but it is on our radar as a top issue.

+1 for the feature

@sebastianrogers change the policy effect to modify, add a system-assigned managed identity with rights to change the tags back if changed, and walk away, job's a good 'un ;-)

Hi @PauloColegato we do that but

  1. We are doing automatic drift detection, so this means every single Azure Resource below Resource Group level is reported as having drifted from its definition by Terraform.
  2. The SOC notices unplanned changes to Azure Resources and flags them up as suspicious activity.
  3. Some of these tags have owners who are allowed to change them, our governance makes this explicit, so in this case when Terraform is run it changes them back again.

In essence, tags are managed by Azure Policy and not Terraform; we need to be able to simply tell Terraform that it is not concerned with them at all.

Hope this makes clear why your suggestion does not result in 'bish, bosh, job's a good'un' but rather 'don't worry, just use even more gaffer tape' :)

@sebastianrogers i do love the gaffer tape approach!

There is another policy that tags resources; resource groups have a separate policy.

Remove the config where resources inherit from the RG, deploy a resource policy the same as the RG policy, and watch people get bored changing resource tags, as it will change them right back (they can add additional tags no problem)!

Just remember to tag your resources, or each TF run will wipe them, which is a drawback of this.

Lastly, tell your SOC to amend their alerts, and shhhhhhhhhhhhhhh, it's only tags :)

Thank you @nlitchfield, tags can contain information that is sensitive; your examples are very good ones.

"Lastly tell you SOC to shhhhhhhhhhhhhhh its only tags :)"

I notice the use of 'only' in the above, and really must blog about why you should always be very suspicious when either the word 'only' or 'just' is used when discussing technology. They indicate something someone doesn't want to deal with and almost certainly should.

@sebastianrogers there are bigger things to worry about than just tags, in my experience.

I would also take your reading of my comments to indicate you are someone who jumps to conclusions about people, when they almost certainly shouldn't - especially about people who are just trying to help you.

Perhaps have a look in the mirror, buddy.

Have a cracking day and I wish you all the best in your endeavours; hopefully you can sort your tagging woes out, along with your attitude.

+1 pleaseee

This would also be highly helpful for leveraging the precondition/postcondition arguments at module creation, since this is not currently possible AFAIK.

+1, absolutely necessary feature!

pat-s commented

Can this issue be locked?

Can this issue be locked?

Why? The discussion shows this would be a widely appreciated feature. What does closing off the discussion achieve?

The etiquette for expressing your appreciation for an issue is to add a 'thumbs up' emoji to the original post. Commenting with "+1, I would ❤️ this too" is typically considered bad manners.

Hello All!

I see that this is getting a lot of attention, but we don't yet have a clear goal in this issue. A lifecycle block is not applicable to a module, because a module has no lifecycle to which it can be applied. It does not store data, nor does it directly set any ordering of resource evaluation. Resources within a module may be evaluated before or after resources in any other module depending on the interdependencies of those resources. Preconditions, postconditions, replace_triggered_by, etc. also don't apply for the same reasons: there is not a clear definition of "before", "after", or replacement in module evaluation, at least in a sense that would be intuitive from the configuration.

I think the more general idea here is to have a way to override resource configuration from outside of a module. This is difficult for many reasons, some of which have been outlined already. The lifecycle block in particular contains special references which are relative to the containing resource, and themselves are not interpolated. What we would end up with essentially is an ad-hoc templating system, which has the ability to lead to a lot of unintended interactions within the language.

It's not that we're not considering these features at all, but I just want to convey that issues like this are often much more nuanced than they may appear on the surface.

@jbardin I think what could happen here, while still allowing the module architecture to work, is to only allow a specific subset of the lifecycle arguments; the ones that are allowed would be propagated to every individual resource in the module as if each had its own individually declared. For instance, say I want to prevent the destruction of all resources in a module: prevent_destroy would be handy to declare and propagate down. replace_triggered_by could potentially work. Pre- and postconditions I think are out. create_before_destroy is a weird one; it almost has to be combined with a depends_on block at the module level for people to get use out of it, if I am thinking of it correctly, and even then it could have unintended consequences due to its nature.

Thanks @Shocktrooper, the fact that it only works on a subset of features, or in certain combinations, or with a list of exceptions when describing the feature is usually a red flag when evaluating a potential design. For example, replace_triggered_by specifically cannot work like that, because the reference needs to be within scope of the resource to which it is applied, and if that in-scope resource could be referenced from outside the module, applying it to all resources within a module will create cycles.

The prevent_destroy feature is one which is closer to theoretically possible, but since the actual meaning of that feature is more along the lines of "prevent-replace", and a module itself has no changes to prevent, it risks being even more confusing than the existing resource feature which users historically have not found very useful. The only time it would come into play would be when the user directly plans a destroy operation, in which case the same behavior is had from a single null_resource in the root module with prevent_destroy=true.
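For concreteness, that root-module equivalent is just a single guard resource (a minimal sketch):

resource "null_resource" "destroy_guard" {
  lifecycle {
    # Any plan that would destroy the whole configuration
    # (e.g. terraform destroy) fails on this resource.
    prevent_destroy = true
  }
}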

This is why I think the more general problem statement of "alter resource configuration from outside the module" better encompasses what users appear to be trying to do, rather than prescribing solutions which are not a good fit within the current design.

+1 for the feature

We have exactly the same Azure Policy tags scenario now. We've been forced to apply Azure tags at the resource group level using Azure Policy, and the very same policy propagates tags to all resources within the RG. For resources within the root module we simply configure lifecycle ignore_changes to ignore any changes in tags, except for the RG. However, modules make this more complicated: modules do not support a lifecycle block AT the module block (i.e. it's not possible to pass the intention to ignore certain changes down to resources managed by the module). As such, we see two options:

  1. Change the module code to add a lifecycle block to each resource deployed by the module (sketched below)
  2. Accept the fact that Terraform will detect drift on each run of the plan phase (changed tags)

Both options are nasty. Modules should be as universal as possible, so hardcoding a lifecycle block to ignore tags within a module goes against this principle. Having drift reported for each merge request is not ideal either; instead of seeing REAL changes in a summarized report, our engineers and reviewers have to drill down into the terraform plan logs to see what's actually changing.
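Option 1 would look roughly like this inside the module (an illustrative resource; this is exactly the hardcoding we want to avoid):

resource "azurerm_storage_account" "this" {
  ...

  lifecycle {
    # Hardcoded for every consumer of the module, whether or not
    # their tags are governed by Azure Policy.
    ignore_changes = [tags]
  }
}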

A proper, more strategic solution would be highly appreciated, such as the ability to pass lifecycle property(ies) into the module... or some other alternative.

+1, tracking this feature; it would be helpful for ignoring things such as asg_desired_size on our EKS cluster, as it may be scaled via the terminal. It is part of an external module, so I can't add the lifecycle block to the resource directly.

Hello.

Sorry for adding repetitive (maybe) argument here.

Our use case is a module which we need to use for both creation (apply) and deletion (destroy).

In order to align Terraform with the rest of our pipeline, it'd be great to be able to declare the lifecycle block in the root module instead of in the main module.

As in:

{
  "module": {
    "pubsub_topic": {
      "source": "git::ssh://git@git.url.ur:7999/git-project/terraform-gcp-pubsub-topic.git",
      "pubsub_topics": "${var.pubsub_topics}",
      "lifecycle":{
        "prevent_destroy" : true
      }
    }
  },
  "variable": {
    "pubsub_topics": {
      "default": {}
    }
  }
}

Regards

We are looking forward to having this feature. It is critical for the adoption of modules. Currently, lifecycle policies and modules do not work well together - they are almost on a collision course. I cannot have a resource block in a module that automatically sets prevent_destroy when my env is production.

Any news? 👀

+1

crw commented

Thanks for your interest in this issue! This is just a reminder to please avoid "+1" comments, and to use the upvote mechanism (click or add the 👍 emoji to the original post) to indicate your support for this issue. We are aware of this issue (it is one of the highest-upvoted issues, currently 4th highest upvoted) and we do not have any updates to share. Thanks again for the feedback!

kderck commented

This would help with separation of concerns. I have a module that creates a resource; sometimes I want to ignore changes, other times I do not.

At my company we stopped the public release of a module that deploys a fully functional Kubernetes cluster for AWS production, because the cluster's cloud controller generates changes in subnets, groups, tags, etc., and the Terraform module constantly tries to revert these changes. We didn't find a reliable workaround for this.

I use the OpenSearch Service, but the service itself does not have delete protection, so I have had incidents in the past where I unintentionally deleted resources.
We have sent a feature request to AWS, but not all services have delete protection, so it would be very useful if Terraform could provide the protection.

This would help prevent things from being destroyed.

We use GuardDuty. We have this configured at the organizational level. Being able to ignore_changes might solve the problem we have of our security account trying to delete the Detectors (because it's not in our configs for that account).

Never mind, we didn't need ignore_changes after all. We upgraded versions of terraform-aws-service-catalog and there were changes to the resource naming. A terragrunt import and terragrunt state rm fixed the issue.

Is someone actually doing something about this? Or is this a zombie ticket?

Would be great, thanks!

Preconditions are another useful feature of lifecycle blocks that I would like to have for module references.
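A hypothetical sketch of what that could look like on a module block (not valid syntax today; the names are illustrative):

module "network" {
  source = "./network"

  lifecycle {
    # Hypothetical: lifecycle (and therefore precondition) blocks
    # are not accepted on module blocks today.
    precondition {
      condition     = var.cidr_block != ""
      error_message = "cidr_block must be set before calling this module."
    }
  }
}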

Any updates on this? I realize there isn't much call for it, since there are only over eleven hundred thumbs-ups, but it would be kinda nice, y'know?

I would like to see this feature as well. We encapsulate resources in "base-modules" at my current customer, and those modules are re-used in many Terraform projects to deploy solutions to Azure. In some cases, things like the standard tags we apply to those resources are overwritten by policy on Azure, triggering changes in every Terraform run. Though what @chancez proposes would work for us, I feel like being able to just ignore these changes at the module level, when the occasional need arises, would be more natural/cleaner/easier than modifying the interface of our "base-modules" to support passing lifecycle arguments to the underlying resources.