docker_registry_image : Provider produced inconsistent final plan
Closed this issue · 16 comments
Community Note
- Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request
- Please do not leave "+1" or "me too" comments, they generate extra noise for issue followers and do not help prioritize the request
- If you are interested in working on this issue or have submitted a pull request, please leave a comment
Terraform (and docker Provider) Version
Terraform v1.0.5
kreuzwerker/docker v2.15.0
Affected Resource(s)
docker_registry_image
Terraform Configuration Files
resource "docker_registry_image" "this" {
name = "image:${var.REV}"
build {
context = "./"
build_args = {
REV = var.REV
PORT = var.PORT
}
}
}
Debug Output
https://gist.github.com/stevelacy/b807abca095f59486e4587097ab24025
Actual Behaviour
│ Error: Provider produced inconsistent final plan
│
│ When expanding the plan for module.backend.docker_registry_image.this to
│ include new values learned so far during apply, provider
│ "registry.terraform.io/kreuzwerker/docker" produced an invalid new value
│ for .build[0].context: was
│ cty.StringVal("./:d0bf0244333d694ac286b5de97ead1d28344bae08ff63ddb7e9e50fa87d4cff0"),
│ but now
│ cty.StringVal("./:0765d42dc5f3cc826c5702ec60ec5e09b47b6c5395edb0f65b55ad6f38d76ec8").
│
│ This is a bug in the provider, which should be reported in the provider's
│ own issue tracker.
Steps to Reproduce
terraform apply
This error goes away on a subsequent build assuming the image hash is the same.
References
Seems related to #192, except this time the context folder name is ./
This looks related to the issue I made ( #290 ) but I think I've found the cause of the problem, which I've put in the 'Actual Behaviour' section.
So, I found a quick fix for 2.15.0. It looks like it fails if the tfstate does not exist yet. Just run terraform apply -auto-approve || terraform apply -auto-approve, which should work around it.
I have not tested whether setting the build context path causes any issues, so I cannot speak to that.
This issue is stale because it has been open 60 days with no activity. Remove the stale label or comment, or this will be closed in 7 days. If you don't want this issue to be closed, please set the pinned label.
Still happening
Happening for me. Interestingly, the documentation states that you must use an absolute path, but the docker_registry_image block above seemingly doesn't. Is there a reason for that?
Issue confirmed on a Windows machine using AWS S3 as a back-end.
This issue is stale because it has been open 60 days with no activity. Remove the stale label or comment, or this will be closed in 7 days. If you don't want this issue to be closed, please set the pinned label.
Issue is still happening and not resolved.
Yes, still happens.
Looking at the code, it appends a hash of the context directory's contents. This bit me because I was referencing the directory containing terraform's state file, which changed every time I ran terraform apply. Even worse, I was dynamically generating the Dockerfile, so even after working around this, the very first apply fails because there's no Dockerfile yet to hash.
Workarounds:
- Put your Dockerfile in a directory without any other files that change when you run terraform.
- If you need to change/template your Dockerfile at build time, use Docker build args (see the sketch below).
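A minimal sketch of the second workaround, assuming a hypothetical PORT argument: the Dockerfile stays static (declaring ARG PORT), and the per-run variation moves into build_args, so the hashed context never changes.

# ./server/Dockerfile stays static, e.g.:
#   ARG PORT=8080
#   EXPOSE ${PORT}

resource "docker_registry_image" "this" {
  name = "image:${var.REV}"
  build {
    context = "./server" # contains only the static Dockerfile and sources
    build_args = {
      PORT = var.PORT # templated at build time instead of rewriting the Dockerfile
    }
  }
}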
Thanks for all the comments!
In short: the calculation of the internal state value for build.context is, nicely put, not optimal.
It is definitely one of my highest priorities to fix.
As I am new to the whole "terraform provider" world, I still need some time digging into the provider code and also learning about the plan/state/resource mapping inside a provider.
My gut feeling tells me that in order to properly fix the "Provider produced inconsistent final plan" issue we would need a major version bump because of breaking changes. But let's see.
Possible workarounds:
- Please refer to the comment above.
- #290 (comment)
With the version to be released (v2.19.0) it will also finally be possible to have a Dockerfile outside the build context (#402); maybe that will help some of you out there.
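A rough sketch of what that might look like, assuming the build block's existing dockerfile argument will accept a path outside the context once v2.19.0 lands (the paths here are hypothetical):

resource "docker_registry_image" "this" {
  name = "image:latest"
  build {
    context = abspath("${path.module}/server")
    # hypothetical: Dockerfile kept outside the build context
    dockerfile = abspath("${path.module}/docker/Dockerfile")
  }
}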
@stevelacy I assume this issue is still happening for you? What contents are in your ./ context?
As close as I could make it to the original:
resource "docker_registry_image" "this" {
name = "name_here"
build {
context = "./server"
}
}
contents of ./server/:
Dockerfile
main.go
Pretty sure it was erroring out with the context as ./ or ./server. I "fixed" it by moving the Dockerfile from ./ to ./server, and also renamed the context from ./server to server, without the leading ./ — the working layout is sketched below.
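A reconstruction of that working setup, with the Dockerfile moved into the context directory and the leading ./ dropped:

# layout:
#   server/Dockerfile
#   server/main.go

resource "docker_registry_image" "this" {
  name = "name_here"
  build {
    context = "server" # no leading ./
  }
}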
Thanks a lot! That is very interesting information and could be enough to build a nice reproducible case.
In general, I think one of the main problems is that the provider does not enforce the "absolute path" requirement for context (https://registry.terraform.io/providers/kreuzwerker/docker/latest/docs/resources/registry_image#context), as in your case. But maybe that can be fixed with some additional code and path handling.
Enforcing the absolute path now would, however, be a breaking change 🤔
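Until then, one way to satisfy the documented absolute-path requirement yourself is Terraform's built-in abspath() together with path.module — a sketch, reusing the server directory from the example above:

resource "docker_registry_image" "this" {
  name = "name_here"
  build {
    # resolve the context to an absolute path, as the docs require
    context = abspath("${path.module}/server")
  }
}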
> Looking at the code, it appends a hash of the context directory's contents. […]
In my case, any change to the build context results in this failure. Even making the changes a dependency of docker_registry_image does not help.
The build block of docker_registry_image is now deprecated and will be removed with the next major version. Please migrate to the build block of docker_image. Many fixes were made to the context attribute, which should resolve most of these problems.
There will also be a guide for migrating from v2.x to v3.x; a rough sketch of the new layout is below.
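A minimal sketch of what that migration might look like, assuming the build block schema (context, build_args) carries over to docker_image in v3.x — check the migration guide for the authoritative shape:

resource "docker_image" "this" {
  name = "image:${var.REV}"
  build {
    context = abspath("${path.module}/server")
    build_args = {
      REV = var.REV
    }
  }
}

resource "docker_registry_image" "this" {
  # push the locally built image to the registry
  name = docker_image.this.name
}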
Please open a new issue in case you encounter any bugs and issues!