Plan/apply does not recognise a change in state after triggering manual deployment through Heroku UI/CLI
Terraform Version
Terraform v0.12.29
Heroku Provider Version
+ provider.heroku v2.6.0
Affected Resource(s)
- heroku_build
Terraform Configuration Files
main.tf
```hcl
provider "heroku" {
  version = "~> 2.0"
}

resource "heroku_app" "example" {
  name        = "${terraform.workspace}-appname"
  region      = "eu"
  config_vars = merge(local.config-vars, var.more-vars)
}

resource "heroku_build" "example" {
  app = heroku_app.example.name

  source = {
    path = "src/dummy-django-0.0.6.tar.gz"
  }
}

resource "heroku_formation" "example" {
  app      = heroku_app.example.name
  type     = "web"
  quantity = 1
  size     = "free"

  depends_on = [heroku_build.example]
}
```
variables.tf
```hcl
locals {
  config-vars = {
    ALLOWED_HOSTS = "https://${terraform.workspace}-appname.herokuapp.com"
  }
}

variable "more-vars" {
  type = map(string)

  default = {
    SECRET_KEY = "SECRET_KEY_FOR_DJANGO"
    DEBUG      = "True"
  }
}
```
Expected Behavior
Terraform was used to deploy the initial infrastructure. I then ran a manual deployment (of a different code version) from the Heroku web UI (and subsequently the CLI).
I was expecting Terraform to recognise that the Heroku infrastructure had drifted from its state, and to plan a destroy/replace of the affected resources.
Actual Behavior
The output from Terraform plan/apply showed no changes to infrastructure.
Steps to Reproduce
1. `terraform apply`
2. Trigger a manual deployment through the Heroku web UI or CLI
3. `terraform plan` or `terraform apply`
Builds are a dynamic, operational aspect of the Heroku platform. The Heroku API doesn't directly provide the kind of absolute correlation between builds and releases that would be required to support this kind of immutable, stateful deployment.
Using slugs as the "deployment state" may be the solution to your problem. Try running your builds against a separate, non-production app, and then in your Terraform config explicitly set the resulting slug ID to release to the production app. We have an example that does something like this using slugs: Example: Deploy Heroku apps from pipelines using Terraform.
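As a rough sketch of that slug-based approach (not taken from the linked example; the app names and the `production_slug_id` variable are hypothetical placeholders), the production app's release can be pinned to an explicit slug ID so that the deployed code version is tracked in Terraform state rather than inferred from builds:

```hcl
# Slug ID produced by a build against a separate, non-production app.
variable "production_slug_id" {
  type = string
}

resource "heroku_app" "production" {
  name   = "myapp-production"
  region = "eu"
}

# Pin the production release to the explicit slug ID. Changing the
# variable value changes the plan, so deployments stay visible to
# Terraform instead of happening outside its state.
resource "heroku_app_release" "production" {
  app     = heroku_app.production.name
  slug_id = var.production_slug_id
}
```

With this pattern, a manual build elsewhere never touches the production app; promoting new code means updating `production_slug_id` and running `terraform apply`.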
Also, worth noting:
The Steps to Reproduce cause config drift. This is not a Heroku-specific problem, but a usage pattern that causes conflicts with Terraform state.
The real solution is to use Terraform exclusively, and not to circumvent its view of the world by interacting directly with the underlying cloud resources.
Thanks for responding.
I know my steps to reproduce cause config drift - I was expecting Terraform to recognise the drift and revert the infrastructure to its known state, especially as the build IDs no longer match.
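One way to see the mismatch for yourself (a sketch; the app name is a placeholder, and listing builds may require the `heroku-builds` CLI plugin):

```shell
# Build attributes recorded in Terraform state for this resource:
terraform state show heroku_build.example

# Builds that actually exist on the app, most recent first:
heroku builds --app default-appname
```

The build created through the web UI/CLI appears in the second listing but not in Terraform state, yet `terraform plan` still reports no changes because the provider does not refresh the build resource against the app's latest build.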
I'll investigate using slug IDs and pipelines.