heroku/terraform-provider-heroku

feature request: formation batch updates

Closed this issue · 4 comments

There's support in the Platform API for Formation Batch Update, and I can see that it is implemented in the Go client.

Could be useful if you have a complex setup, right?

mars commented

Batch API updates are not generally applicable to Terraform providers, because in Terraform each resource is managed on its own, using self-contained lifecycle logic and API calls. Terraform does not provide a strategy for merging resource updates together into composite API calls.

dentarg commented

The reason I was asking for this is that I want to avoid this:

╷
│ Error: Patch "https://api.heroku.com/apps/0c0b1941-7aef-48b3-b52e-863051aa642f/formation/worker": Cannot use the following dyno types in the same app: Basic and Standard-1X
│
│   with module.worker.heroku_formation.formation["0"],
│   on create_app/create_app.tf line 34, in resource "heroku_formation" "formation":
│   34: resource "heroku_formation" "formation" {
│
╵

It happens because https://github.com/heroku/heroku-buildpack-ruby automatically adds a Basic web dyno to my app even when I haven't asked for it (because I've declared a certain dependency in my project).

I have a repro of what I'm doing at https://github.com/dentarg/gists/tree/2f3f3e5ed324aebd262ae2cd67fff1280dacb1bf/gists/terraform-heroku_formation

I would need one heroku_formation to scale down web and, after that, another to scale my worker dyno. Having batch formation updates would be a much easier way of doing that.

mars commented

Thanks for the additional explanation @dentarg. It's definitely easier to respond to issues when they have details like this.

Restating why this issue is closed:
Terraform does not support batching of API calls between/across resources.

To solve this following your suggestion, a whole new heroku_formations (plural) resource would need to be implemented to use the batch API from a single resource, enabling something like this:

resource "heroku_formations" "example" {
  app_id = heroku_app.example.id

  formation {
    type     = "web"
    size     = "standard-1x"
    quantity = 0
  }

  formation {
    type     = "worker"
    size     = "standard-1x"
    quantity = 1
  }
}

As a maintainer, I'm open to such an addition, if it actually solves a problem… but I really believe this problem is already solvable using the current provider.

would need to make one heroku_formation to scale down web and after that scale my worker dyno

Terraform is designed to string together resource dependencies like this. It will likely work better than implementing a new heroku_formations resource to batch the formation changes. Example:

resource "heroku_formation" "example_web_down" {
  app_id   = heroku_app.example.id
  type     = "web"
  size     = "standard-1x"
  quantity = 0
}

resource "heroku_formation" "example_worker_up" {
  app_id   = heroku_app.example.id
  type     = "worker"
  size     = "standard-1x"
  quantity = 1

  depends_on = [
    heroku_formation.example_web_down
  ]
}

If you find that heroku_formation resources cannot solve this problem, and you validate that formation batch updates do solve it, then please revisit this issue with us.

dentarg commented

Yeah, I know I can use depends_on, I just haven't structured my code that way right now. (from the repro I linked)

I thought I could specify the complete formation of an app like this:

  formation = [
    {
      quantity = 0
      size = "Standard-1X"
      type = "web"
    },
    {
      quantity = 1
      size = "Standard-1X"
      type = "worker"
    },
  ]

and then iterate over that with heroku_formation like this:

resource "heroku_formation" "formation" {
  for_each   = { for i, v in var.formation : i => v }

  app_id     = heroku_app.app.id
  quantity   = each.value.quantity
  size       = each.value.size
  type       = each.value.type
  depends_on = [heroku_build.build]
}
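
For reference, the var.formation input this block iterates over would need a declaration along these lines (my sketch of its shape; the actual declaration lives in the linked repro):

variable "formation" {
  # List of per-process-type scaling settings, matching the value shown above.
  type = list(object({
    quantity = number
    size     = string
    type     = string
  }))
}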

But the auto-added Basic web dyno made that approach not work reliably.
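
One way to keep the for_each pattern while avoiding that conflict (a sketch following the ordering mars shows above, with hypothetical resource names, and assuming web is the only auto-added process type) is to scale web down in a dedicated resource and make every other formation depend on it:

resource "heroku_formation" "web" {
  # Scale down the web dyno the buildpack adds automatically,
  # before any other process type is touched.
  app_id     = heroku_app.app.id
  type       = "web"
  size       = "Standard-1X"
  quantity   = 0
  depends_on = [heroku_build.build]
}

resource "heroku_formation" "formation" {
  # Every process type except web, applied only after web is handled.
  for_each = { for i, v in var.formation : i => v if v.type != "web" }

  app_id     = heroku_app.app.id
  quantity   = each.value.quantity
  size       = each.value.size
  type       = each.value.type
  depends_on = [heroku_formation.web]
}

The filter keeps web out of the for_each map, and depends_on forces Terraform to apply the web scale-down first, so the worker change is applied only after the Basic web dyno is gone.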

Hopefully https://github.com/heroku/heroku-buildpack-ruby can be changed to be less magical when the user wants that; I'm talking to @schneems about it.