hashicorp/terraform

Display "value of 'count' cannot be computed" when passing list or map with computed value to the module

gongo opened this issue · 38 comments

gongo commented

Hi.

When passing a list or map with a computed value to a module, terraform plan displays the error shown in the title.

$ terraform plan
Error configuring: 1 error(s) occurred:

* aws_sns_topic_subscription.event-test: value of 'count' cannot be computed

Terraform Version

$ terraform -v
Terraform v0.8.1

This error has occurred since v0.8.0

Affected Resource(s)

(Maybe) Terraform's core

Terraform Configuration Files

https://gist.github.com/gongo/362975d478a9f4b85b3a213ddcc4d0cf

Debug Output

https://gist.github.com/gongo/362975d478a9f4b85b3a213ddcc4d0cf#gistcomment-1952568

Panic Output

No panic.

Expected Behavior

No error occurs.

Actual Behavior

Error occurs 😱

Steps to Reproduce

  1. Input Terraform configuration like above.
  2. terraform plan
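
The gist itself is not inlined here; reconstructing from the workaround diffs below, a minimal reproduction looks roughly like this (the endpoint value is taken from a later comment in this thread; other values are assumed):

```hcl
# main.tf
resource "aws_sns_topic" "test1" {
  name = "test1"
}

resource "aws_sns_topic" "test2" {
  name = "test2"
}

module "sns_subscription_to_lambda" {
  source = "./module"

  # Both ARNs are computed at plan time
  topic_arns = [
    "${aws_sns_topic.test1.arn}",
    "${aws_sns_topic.test2.arn}",
  ]
}

# module/sns_topic_subscription.tf
variable "topic_arns" {
  type = "list"
}

resource "aws_sns_topic_subscription" "event-test" {
  # length() of a computed list cannot be evaluated at plan time,
  # so this fails with "value of 'count' cannot be computed"
  count = "${length(var.topic_arns)}"

  topic_arn = "${element(var.topic_arns, count.index)}"
  protocol  = "lambda"
  endpoint  = "arn:aws:lambda:ap-northeast-1:012345678901:function:test-event"
}
```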
gongo commented

Workaround 1

Pass count directly.

--- a/main.tf
+++ b/main.tf
@@ -23,4 +23,6 @@ module "sns_subscription_to_lambda" {
     "${aws_sns_topic.test1.arn}",
     "${aws_sns_topic.test2.arn}",
   ]
+
+  topic_arn_count = 2
 }
diff --git a/module/sns_topic_subscription.tf b/module/sns_topic_subscription.tf
index 366d76b..2a5de2c 100644
--- a/module/sns_topic_subscription.tf
+++ b/module/sns_topic_subscription.tf
@@ -2,8 +2,12 @@ variable "topic_arns" {
   type = "list"
 }

+variable "topic_arn_count" {
+  type = "string"
+}
+
 resource "aws_sns_topic_subscription" "event-test" {
-  count = "${length(var.topic_arns)}"
+  count = "${var.topic_arn_count}"

   topic_arn = "${element(var.topic_arns, count.index)}"
   protocol  = "lambda"

Workaround 2

Do not pass a computed value

--- a/main.tf
+++ b/main.tf
@@ -20,7 +20,7 @@ module "sns_subscription_to_lambda" {
   source = "./module"

   topic_arns = [
-    "${aws_sns_topic.test1.arn}",
-    "${aws_sns_topic.test2.arn}",
+    "arn:aws:sns:ap-northeast-1:012345678901:event1",
+    "arn:aws:sns:ap-northeast-1:012345678901:event2",
   ]
 }

This is the correct behavior: not because you're passing in a list, but because you're calling the function length on a computed value. We're hoping to relax this constraint over time, but at the moment that is not allowed, and it appears you've found the workarounds that work:

Workaround 1: You didn't use length
Workaround 2: You passed static values (non-computed)

gongo commented

@mitchellh

Thanks for the reply! I understand.

We're hoping to relax this constraint over time

I'm looking forward to it 😄

Just bit by this. Is there another issue to follow for progress on when this is supported?

Also, this error message is confusing. Could it be updated to value of "count" parameter cannot depend on a computed value (link to docs or this GH issue)?

Likewise, just got caught out by this today too. Really hoping this is being tracked somewhere 😄

Huge thanks everyone
Fotis

This was very useful to perform conditional updates in the module.
Very sad to see it gone in the new version.

Thanks for the workarounds.

Dupe of #1497 I believe.

We're also using a computed count for conditionals. I can't see how to properly upgrade to a newer Terraform version without breaking this functionality. Both workarounds make you switch from a dynamic to a static count value.
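
A sketch of the conditional pattern being described (resource and variable names are hypothetical): count is used as an on/off switch, which breaks as soon as the switch value is computed.

```hcl
variable "create_dns_record" {
  default = "true"
}

resource "aws_route53_record" "this" {
  # Fine while var.create_dns_record is static; fails with
  # "value of 'count' cannot be computed" if the flag is derived
  # from another resource's attribute.
  count   = "${var.create_dns_record == "true" ? 1 : 0}"
  zone_id = "${var.zone_id}"
  name    = "${var.record_name}"
  type    = "A"
  ttl     = 300
  records = ["${var.ip_address}"]
}
```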

So, I find myself doing something like the following (especially when testing modules):

// module/foo.tf
variable "ids" {
  type = "list"
}

resource "some_resource" "foo" {
  count = "${length(var.ids)}"
}

// project.tf

module "foo" {
  source = "./module"
  ids    = ["${some_resource.bar.id}", "${some_resource.baz.id}"]
}

In this case I'm using length, but couldn't the length value be computed statically, even though each element is computed?
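
As noted later in this thread, the interpolation language at the time treats the list as unknown once any element is unknown. A sketch of the usual workaround (mirroring the hypothetical names above) is for the caller to pass the literal's length alongside it, since the caller knows it statically:

```hcl
// project.tf -- the caller knows the literal has two elements,
// so it can pass that number as a plain value.
module "foo" {
  source    = "./module"
  ids       = ["${some_resource.bar.id}", "${some_resource.baz.id}"]
  ids_count = 2
}

// module/foo.tf
variable "ids" {
  type = "list"
}

variable "ids_count" {}

resource "some_resource" "foo" {
  # Static, so it can be evaluated at plan time.
  count = "${var.ids_count}"
  id    = "${element(var.ids, count.index)}"
}
```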

This is strange, but I have a module that creates a list of aws_instances and then passes their IDs as an output value. That list is then passed to another module. Everything is fine when I define the instance like this:

resource "aws_instance" "client" {
  ami                  = "${var.ami}"
  instance_type        = "${var.instance_type}"
  subnet_id            = "${element(var.private_subnets, count.index)}"
  iam_instance_profile = "${aws_iam_instance_profile.client.name}"

  key_name = "${var.key_name}"

  count = 5

  vpc_security_group_ids = [
    "${var.security_group_ids}",
    "${aws_security_group.all_egress.id}",
  ]

  tags {
    App  = "client"
    Name = "client-${count.index}"
  }

  // user_data = "${file("${path.module}/files/client.cloud-config.yml")}"
}

However as soon as I uncomment user_data value everything goes south and I get an error:

* module.cluster-lb-test.aws_elb_attachment.instances: aws_elb_attachment.instances: value of 'count' cannot be computed

Hi @hauleth! Sorry for the confusing behavior there.

What's going on here is that changing user_data is a "forces new resource" change, which means that the current EC2 instance must be deleted and replaced with a new one. At that point, Terraform doesn't know the ids of the new instances (since they've not been created yet), and due to the limitations described in this issue, count cannot be populated.

Eventually something like #4149 will make this smoother, but for now the workaround is to manually target that EC2 instance for replacement first, thus allowing Terraform to complete the instance replacement before trying to deal with the count attribute:

  1. terraform plan -target=aws_instance.client -out=tfplan
  2. Review the plan; it should contain only the aws_instance and its dependencies.
  3. terraform apply tfplan
  4. terraform plan -out=tfplan
  5. Review the plan; it should now contain anything else that needs to be updated as a result.
  6. terraform apply tfplan

In principle Terraform should be able to tell how many instances there are even though it doesn't yet have their ids, but currently that doesn't work due to some limitations of the interpolation language. Hopefully we can make that work better in future too, which may avoid the need for a two-step apply process in this specific situation.

Just got bitten by this issue as well with TF: 0.10.2.

I am trying to create routes for multiple routing tables, and I am calculating the count based on the number of routing tables in the list.

I'm now dealing with this issue, after stumbling through several others and trying workaround after workaround along the way.

I'm passing a list of strings (with a hard-coded number of elements) into a module. I'm interpolating into those strings in the module definition, but not in a way that changes the argument count. I'm trying to use count on the passed array from within the module, and I'm now seeing this issue.

It appears that this issue is much deeper; passing a simple count parameter doesn't actually fix it. Something is tainted further down in the data structure, somehow. I'm using null_data_source to construct two different NDSs, and thereby lists, of the two argument types that can be passed into the module:

data "null_data_source" "rule_splitter" {
  count   = "${var.cidr_rule_count + var.src_rule_count}"
  inputs  = {
    rule_type = "${element(split(",", element(compact(var.rule_set), count.index)), 0)}"
    rule      = "${element(compact(var.rule_set), count.index)}"
  }
}

data "null_data_source" "cidr_rule_set" {
#  count   = "${length(matchkeys(data.null_data_source.rule_splitter.*.inputs.rule,
#                                data.null_data_source.rule_splitter.*.inputs.rule_type,
#                                list("cidr_block")))}"
  count   = "${var.cidr_rule_count}"
  inputs  = {
    d = "${element(matchkeys(data.null_data_source.rule_splitter.*.inputs.rule,
                              data.null_data_source.rule_splitter.*.inputs.rule_type,
                              list("cidr_block")), count.index)}"
  }
}

data "null_data_source" "sgsrc_rule_set" {
#  count   = "${length(matchkeys(data.null_data_source.rule_splitter.*.inputs.rule,
#                                data.null_data_source.rule_splitter.*.inputs.rule_type,
#                                list("source_sg")))}"
  count   = "${var.src_rule_count}"
  inputs  = {
    d = "${element(matchkeys(data.null_data_source.rule_splitter.*.inputs.rule,
                              data.null_data_source.rule_splitter.*.inputs.rule_type,
                              list("source_sg")), count.index)}"
  }
}

If I pass a fixed count into the main NDS "rule_splitter", that one works. But the next failure is at the next NDS trying to construct its count using length(data.nds.rule_splitter.*.inputs.d), which should be a known result based on the original var.rule_count (formerly length(var.rule_set)). When explicitly setting count to length(var.rule_count), it's still breaking downstream (even though the count is known).

Now I've completely fallen down this rabbit hole, and I'm to the point where subsequent list compilation is failing for a plethora of crazy reasons. I'm abandoning this attempt, even though it worked for a while.

I've also just been bitten by this.

Use case: prefix lists. I want to restrict access to an S3 endpoint based on not just the prefix list target in a security group, but also a network ACL. I don't think there's any way of knowing the CIDR block count ahead of time, so it needs to be calculated.

I want to add a network rule per CIDR block to allow access to S3. I can't think of a way to get a length value without having to use a computed value. Even accessing a prefix list via a data type still doesn't help.

The example in the docs just accesses this list value directly by index, which is not very helpful if you don't know how many entries you need to make.

E.g. the S3 prefix list in eu-west-2 returns three disparate CIDR blocks, and you need them all, as the packages domain for the yum repos is bound to the last CIDR block in that list.

https://www.terraform.io/docs/providers/aws/d/prefix_list.html

Can anyone think of a workaround for this use case?

@zoltrain
Check if you can run only a single target; if you can, run it with the target that creates the CIDRs, and later run it in full.
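
For the prefix-list case, one sketch of a workaround (names are hypothetical; it assumes the region's prefix list holds a known, fixed number of CIDR blocks, here 3 as mentioned in the comment above) is to hard-code the count while still reading the blocks themselves from the data source:

```hcl
data "aws_prefix_list" "s3" {
  prefix_list_id = "${var.s3_prefix_list_id}"
}

resource "aws_network_acl_rule" "allow_s3" {
  # length(data.aws_prefix_list.s3.cidr_blocks) would be computed
  # on a fresh apply, so the count is pinned to the known size.
  count          = 3
  network_acl_id = "${var.network_acl_id}"
  rule_number    = "${100 + count.index}"
  egress         = true
  protocol       = "tcp"
  rule_action    = "allow"
  cidr_block     = "${element(data.aws_prefix_list.s3.cidr_blocks, count.index)}"
  from_port      = 443
  to_port        = 443
}
```

The obvious downside is the one this thread keeps circling: the hard-coded 3 silently drifts if AWS changes the prefix list.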

I don't understand why this ticket is marked closed. The broken behavior still exists in Terraform 0.11.2

There is another discussion in an open ticket regarding this issue... #12570

The problem I see with the "explicit count variable" workaround is that it introduces room for human error, such that var.resource_count does not match length(var.resource_list). In that case, Terraform may appear to succeed, but not actually deploy the configuration you expected.

To mitigate this concern, I propose...

Workaround 3:

Verify resource_count == length(resource_list)

variable "topic_arns" {
  type = "list"
}
variable "topic_arns_count" {
}

resource "aws_sns_topic_subscription" "event-test" {
  # Explicitly define count (not computed)
  count = "${var.topic_arns_count}"

  topic_arn = "${element(var.topic_arns, count.index)}"
  protocol  = "lambda"
  endpoint  = "arn:aws:lambda:ap-northeast-1:012345678901:function:test-event"
}

# Verify that the count matches the list
resource "null_resource" "verify_list_count" {
  provisioner "local-exec" {
    command = <<SH
if [ ${var.topic_arns_count} -ne ${length(var.topic_arns)} ]; then
  echo "var.topic_arns_count must match the actual length of var.topic_arns";
  exit 1;
fi
SH
  }

  # Rerun this script, if the input values change
  triggers {
    topic_arns_count_computed = "${length(var.topic_arns)}"
    topic_arns_count_provided = "${var.topic_arns_count}"
  }
}

At least this way, you'll get a useful error message, instead of silently-broken behavior.

@eschwartz I appreciate your efforts but I'm pretty sure we're at the point where we will be writing a script to emit a .tf file (with no variables) instead of resorting to trying to use the Terraform language to express this. :-/

The work-arounds provided are terrible from a programming perspective. Really bad form, actually, and they make the use of lists in modules much less powerful than they could be.

To add to the insanity - this will work if your list is already in the state file. Change the list, though, and it throws the error.

As far as I can tell, TF knows the length of every list when it is created, so the calculation of length should not be an issue. Why is it?

@joseph-wortmann I totally agree here. I'm using a data source to get a simple IP as a string, and adding it to a list of hostnames.
But because this is computed, it rejects it. If the concern is that a list or map is returned, and hence length would error, then a simple solution is to create a string built-in and ensure that any computed data is wrapped in the string function.

I'm in the same boat. I'm trying to use data from an external resource, due to other limitations, and it looks like I'm completely out of luck. At the very least, limitations like this need to be documented heavily to prevent running into a brick wall after spending time on a solution that looks like it should work just fine.

I understand the concern regarding data consistency, but at some point the module developer needs to take responsibility for that. Worst-case scenario, I could see something like an unsafe = true argument to signal that I know I'm doing something potentially risky and say "please just let me do it".

I just hit this issue in a couple of clouds I manage and it really limits my options. I agree with @joseph-wortmann completely. I'm not comfortable with any of the 3 workarounds proposed here from the point of view of the maintainability of my code.

IMHO, this should be better documented since it can have a heavy impact in the way you design your infrastructure with Terraform and must be taken into account when thinking about how your resources depend on each other.

As @joseph-wortmann mentioned, this works fine when the data is already in the state file, but fails when it's not. This has now bitten me a couple times -- it works fine during iterative development as I slowly add new modules/resources, then fails spectacularly when doing a greenfield deployment.

I'm going to call this a bug because it forces really, really bad coding practices to hack around the problem and this issue should not be marked as closed IMHO.

kalfa commented

Can anyone expand on why a length() (or probably any other function) cannot be called on a computed value?

Isn't it a matter of computing the deps before the interpolation?
As long as there are not cycles, it should be fine. What was the reason for the introduction of this constraint?

Stono commented

This is immensely painful for us.

Just hit this issue my self:

resource "aws_security_group_rule" "allow_all_bastion" {
  count                    = "${length(var.security_group_ids)}"
  type                     = "ingress"
  from_port                = 0
  to_port                  = 0
  protocol                 = "-1"
  source_security_group_id = "${element(var.security_group_ids, count.index)}"
  security_group_id        = "${aws_security_group.bastion_ssh_sg.id}"
}

security_group_ids = [
  "${module.vpc.blqblq}",
  "${module.vpc.blqblq2}",
  "${module.vpc.blqblq3}",
  "${module.vpc.blqblq4}",
  "${module.vpc.blqblq4}",
  "${module.vpc.blqblq5}",
]

Terraform v0.11.7
In this version, for some reason, it only works for some attribute references. For example, it works for these two:

  • ${aws_db_instance.mysql.address}
  • ${aws_elasticache_cluster.redis.port}

But not for this one

  • ${aws_elasticache_cluster.redis.address}

¯\_(ツ)_/¯

None of the workarounds here are practical.

I have two modules: one to create a VPC, which is called twice, and a second to peer the two VPCs, including adding the appropriate routes. The second is passed the vpc_id, from which I can interpolate the subnets, and from there the route tables. I want to add routes to both tables, and this works when the VPCs already exist, but not when I'm running the entire state from scratch.

As the number of subnets is defined in the same plan/apply operation, Terraform does know how many subnets and route tables exist, yet we still get the error.

@mitchellh stated above that this may be relaxed; do we have any idea on timescales, as this massively limits the power of modules in its current state?
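
A sketch of the shape being described, inside the peering module (all names are hypothetical): the vpc_id is computed on a fresh run, so every data source derived from it is unknown at plan time, and its length cannot feed count.

```hcl
variable "vpc_id" {}
variable "peer_cidr" {}
variable "peering_id" {}

# Unknown at plan time on a fresh apply, because vpc_id is computed.
data "aws_subnet_ids" "this" {
  vpc_id = "${var.vpc_id}"
}

# One route table lookup per subnet; same computed-count problem.
data "aws_route_table" "per_subnet" {
  count     = "${length(data.aws_subnet_ids.this.ids)}"
  subnet_id = "${element(data.aws_subnet_ids.this.ids, count.index)}"
}

resource "aws_route" "to_peer" {
  # Works when the VPC is already in state; fails from scratch with
  # "value of 'count' cannot be computed".
  count                     = "${length(data.aws_subnet_ids.this.ids)}"
  route_table_id            = "${element(data.aws_route_table.per_subnet.*.id, count.index)}"
  destination_cidr_block    = "${var.peer_cidr}"
  vpc_peering_connection_id = "${var.peering_id}"
}
```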

I've just hit the same issue :(

I hit this issue today as well. Just commenting to vote for adding this support

Versions

$ terraform --version
Terraform v0.11.7
+ provider.aws v1.28.0
+ provider.mysql v1.1.0
+ provider.null v1.0.0
+ provider.random v1.3.1

Description

I have just hit this issue with the following code-base:

  • An abstract layer (a module, from Terraform's perspective) that we internally call a stack -- stack-vpc.

  • The above abstract layer uses another layer of abstraction that we internally call molecules.
    Another module, from Terraform's perspective.

  • stack-vpc includes:

    • molecule-vpc. (creates vpc)
    • molecule-subnets (creates subnets in vpc)
    • molecule-route-table (creates route-table and route-table-association)

As you may have guessed, molecule-subnets module exposes subnet_ids and these subnet_ids are passed to molecule-route-table.

Pretty much the same use-case as @hafizullah had.

Adding my vote to the pool. Hit that issue as well today

Also bitten by this... Will Terraform 0.12 solve this issue?

hit this today again

I've been having this issue for a few months now... Always resorting to hardcode the intended value.

I'm avoiding creating .tf files with scripts to allow readability from the repo, otherwise you have to document everything elsewhere.

Hitting this issue as well. I'm trying to count IAM policies that I'm creating, and it fails when starting from scratch. It works fine when the policies are already created though, just like others mentioned above.

Any updates on a fix?

Hi! I'm sorry to everyone running into this problem. We are aware of it, there are several related open issues, and you can follow the main discussions in #14677 and #17421.

I am going to lock this particular issue to consolidate the conversation - please check out the issues I've linked and add your 👍 to the main comments.