Cannot apply CRD and a CR using it in the same plan/apply due to SSA
heschlie opened this issue · 62 comments
Terraform version, Kubernetes provider version and Kubernetes version
Terraform version: 0.14.11
Kubernetes Provider version: 2.4.1
Kubernetes version: EKS 1.17
Terraform configuration
There is a bit going on here, but essentially this is the output from the terraform_flux_provider, and through some HCL abuse I'm massaging it into the right format.
resource "kubernetes_manifest" "install" {
for_each = { for manifest in local.install_manifest : join("-", [manifest.kind, manifest.metadata.name]) => manifest }
depends_on = [kubernetes_namespace.flux_system]
manifest = each.value
}
resource "kubernetes_manifest" "sync" {
for_each = { for manifest in local.sync_manifest : join("-", [manifest.kind, manifest.metadata.name]) => manifest }
depends_on = [kubernetes_manifest.install]
manifest = each.value
}
Question
Essentially I am using the kubernetes_manifest
resource, and am trying to:
- Deploy some custom resource definitions
- Deploy some custom resources using the above definitions
Upon doing this I am greeted with an error during the plan because the CRDs have not been created and SSA is not happy about it:
Acquiring state lock. This may take a few moments...
Error: Failed to determine GroupVersionResource for manifest
on main.tf line 49, in resource "kubernetes_manifest" "sync":
49: resource "kubernetes_manifest" "sync" {
no matches for kind "Kustomization" in group "kustomize.toolkit.fluxcd.io"
Error: Failed to determine GroupVersionResource for manifest
on main.tf line 49, in resource "kubernetes_manifest" "sync":
49: resource "kubernetes_manifest" "sync" {
no matches for kind "GitRepository" in group "source.toolkit.fluxcd.io"
Releasing state lock. This may take a few moments...
ERRO[0105] Hit multiple errors:
Hit multiple errors:
exit status 1
Is there a way to tell the provider that things are OK and not try to plan this? It seems like a bug, or a feature required before this comes out of experimental, as asking someone to first apply the CRDs and then add and apply the CRs doesn't seem like a valid long-term solution.
I came to ask the same thing, as I ran into this trying to migrate a bunch of things from Helm charts to straight Terraform. There are a lot of similar issues on the old repo:
hashicorp/terraform-provider-kubernetes-alpha#247
hashicorp/terraform-provider-kubernetes-alpha#218
hashicorp/terraform-provider-kubernetes-alpha#235
This is going to make adoption of kubernetes_manifest
very spotty. It'll basically require a separate Terraform step to plan and apply any CRDs ahead of using those CRDs. This will be further complicated by using something like Operator Lifecycle Manager, where CRDs are installed as part of the operator, and several operators might have dependent CRDs. For example, my current dependency chain looks like this:
cert-manager
-> azure service operator
-> grafana
(uses a mysql database/user deployed through azure service operator).
cert-manager
is installed through OLM, and creates CRDs which must be used to generate resources ahead of installing azure service operator
, which then creates CRDs that are used to create resources ahead of installing grafana
. My plan was to utilize depends_on
to handle the dependency chain, but because of the issue above (and in the linked old issues), that's not working. It would require separating each of those into a separate Terraform plan/apply step and then performing them in order, which will become cumbersome fast.
My current workaround is to place CRD usage into a local Helm chart and use that through helm_release
. It's very hacky, but Terraform doesn't seem to evaluate those during the plan stage (at least not in a breaking way if the CRDs aren't installed yet), so it's a usable workaround. It would be a lot cleaner to be able to just use kubernetes_manifest
though.
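For reference, a minimal sketch of that local-chart workaround (the chart path, release name, and the cert_manager release are hypothetical; the CR YAML goes into the chart's templates/ directory):

```hcl
# Hypothetical local chart holding the CRs; Helm does not validate them
# against the cluster's API during Terraform's plan phase.
resource "helm_release" "my_crs" {
  name      = "my-crs"                       # hypothetical release name
  chart     = "${path.module}/charts/my-crs" # local chart with the CR YAML in templates/
  namespace = "default"

  depends_on = [helm_release.cert_manager]   # whatever installs the CRDs
}
```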
So as far as I understand, it wouldn't even work to separate these 20k lines of YAML from cert-manager into individual files so they could be used as manifests, because the moment we run apply, it does this server-side check and reports that the CRD does not exist. If we had at least an exclude flag, applying twice wouldn't be so bad, but without one, if we target only the CRDs the first time around, everything from the first apply just gets planned again in the second apply.
I am using a Makefile now. It's not that nice, but at least I can spin everything up with one command.
For anyone having the same issue:
I wanted to install cert-manager using Helm and deploy a ClusterIssuer
manifest in the same terraform apply
step. The ClusterIssuer
obviously depends on the cert-manager.io/v1
apiVersion, therefore the plan failed due to SSA.
I'm now using the kubectl provider for the ClusterIssuer Resource and combined with a depends_on
meta argument on my cert-manager Helm release the plan and apply went through :)
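A sketch of that setup, assuming a cert-manager Helm release named cert_manager and a hypothetical YAML file for the issuer:

```hcl
# gavinbunney/kubectl applies raw YAML without resolving the CRD at plan time.
resource "kubectl_manifest" "cluster_issuer" {
  yaml_body = file("${path.module}/cluster-issuer.yaml") # hypothetical file

  depends_on = [helm_release.cert_manager] # ensure the CRD exists first
}
```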
I had hoped that the manifest
resource being promoted out of alpha meant this problem had been resolved. Unfortunately, I've just hit it again with argocd
. It would be great if someone from HashiCorp could comment on any plans to resolve this long-standing issue.
It might be a slightly different reason than the original hashicorp/terraform-provider-kubernetes-alpha#72 but the end result is the same - you can't use the manifest
resource to deploy anything which depends on a CRD, which greatly limits its usefulness.
This is a deep issue, because any time you want to reference a CRD, you need to split your terraform project into something that can be applied separately. eg using Traefik with CRDs means I need a plan to install Traefik first, then another to set up all my services.
I don't know what the solution would be here, I'd guess either terraform needs the ability to run a multi-stage plan, or have much better support for splitting a project into pieces.
I have the same problem when deploying a cert manager. two resources: helm and Cluster Issuer, and WTF???
╷
│ Error: Failed to determine GroupVersionResource for manifest
│
│ with kubernetes_manifest.cluster_issuer,
│ on main.tf line 50, in resource "kubernetes_manifest" "cluster_issuer":
│ 50: resource "kubernetes_manifest" "cluster_issuer" {
│
│ cannot select exact GV from REST mapper
╵
ERRO[0011] 1 error occurred:
* exit status 1
We really need a fix!
@alexsomesan sorry for pinging, but have you seen this issue?
This issue alone renders the manifest resource entirely useless IMHO. There is no good way, for now, to apply CRDs in an isolated way, so that one could e.g. deploy the CRDs of all the Helm charts like traefik, chunkydata, minio-operator and a lot more (since CRDs are now used everywhere).
One has to either pull all the CRDs from the projects' helm/kustomize charts manually into some kind of CRD-only resources (a cumbersome and fragile task, especially maintaining it), then apply those in an extra, isolated Terraform run, and only then start using the manifest resource to create CRs.
It really reminds me of the 'typescript typings' issue back in the days. Now we probably will see all the projects grind out the CRDs in custom, isolated helm charts/kustomize repositories. Then (maybe) a common standard will follow so that CRDs can be 'looked up' by kubectl automagically (and installed) .. and before that terraform kubernetes manifest will rely on this 2-step process of installing CRDs first.
This is by no means the fault of terraform / kubernetes manifest here; it is what happens when you introduce strong typing late into the game (looks at Kubernetes). ConfigMaps (no typing) were the way to go, then we got CRDs, and now we have the issue of 'typings first, then declaration of instances'.
The same issue/mistake was made with TypeScript / typings. Or maybe it's just the nature of how those huge things evolve: step by step, with a lot of painful intermediate steps in between.
That said, considering the long road until those CRDs can be installed in an isolated way, this resource is basically a playground IMHO. I'm not sure it makes sense to maintain this huge (and nice) little thing here until then, but of course that is not up to me to judge.
We would love to use manifests so much! But I guess we need to wait for the helm/kustomize/kubernetes projects to make up their minds about CRDs and 'typings' first.
One possible solution in case of argocd (for those who end up here through google) is to use terraform's target option.
This is how I am doing it atm, with a 4-stage EKS cluster bootstrapping:
RuntimeArgoCDPlan:
stage: runtime_argocd_plan
extends:
- .runtimeJob
# needs:
# - ClusterApply
script:
- gitlab-terraform init
- gitlab-terraform validate
- gitlab-terraform plan -target=module.runtime.helm_release.argocd
- gitlab-terraform plan-json -target=module.runtime.helm_release.argocd
artifacts:
paths:
- ${TF_ROOT}/plan.cache
reports:
terraform: ${TF_ROOT}/plan.json
RuntimeArgoCDApply:
stage: runtime_argocd_apply
extends:
- .runtimeJob
needs:
- RuntimeArgoCDPlan
script:
- gitlab-terraform init
- gitlab-terraform apply -target=module.runtime.helm_release.argocd
when: manual
RuntimePlan:
stage: runtime_plan
extends:
- .runtimeJob
needs:
- RuntimeArgoCDApply
script:
- gitlab-terraform init
- gitlab-terraform validate
- gitlab-terraform plan
- gitlab-terraform plan-json
artifacts:
paths:
- ${TF_ROOT}/plan.cache
reports:
terraform: ${TF_ROOT}/plan.json
RuntimeApply:
stage: runtime_apply
extends:
- .runtimeJob
needs:
- RuntimePlan
script:
- gitlab-terraform init
- gitlab-terraform apply
when: manual
Basically, on the first run you install only the argocd Helm chart, and then everything else is deployed through Argo manifests (which support unknown CRDs if they are in the Helm chart).
Since this solution uses -target, it's not necessary to split the project and state.
Just going to add that the Kustomization provider doesn't have this problem either: https://registry.terraform.io/providers/kbst/kustomization/latest/docs
You can deploy the CRDs using Helm or whatever and then the CRs themselves by using a Kustomization that depends on the Helm release.
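A sketch of that pattern, following the kbst/kustomization docs (the manifests directory and the cert_manager release are hypothetical):

```hcl
# Build manifests from a local kustomization, then apply them one by one.
data "kustomization_build" "crs" {
  path = "${path.module}/manifests" # hypothetical directory containing the CRs
}

resource "kustomization_resource" "crs" {
  for_each = data.kustomization_build.crs.ids

  manifest   = data.kustomization_build.crs.manifests[each.value]
  depends_on = [helm_release.cert_manager] # whatever installs the CRDs
}
```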
Would also like to be able to do this directly with a kubernetes_manifest. Would be so much cleaner.
The kubectl provider seems to work for this use case too. I reverted to it here instead of kubernetes_manifest, as it was not able to deploy because the CRDs are being created by the dependencies.
I can't believe I'm going to have to add a totally separate provider just to do manifests.
Ok, so this is currently the 3rd most upvoted issue of this provider: https://github.com/hashicorp/terraform-provider-kubernetes/issues?q=is%3Aissue+is%3Aopen+sort%3Areactions-%2B1-desc
The main perceived problem for this issue seems to be Terraform's "limitation" that everything needs to be planned in advance. However, technically, this doesn't seem to be a blocker for this issue. This should be proven by the fact that other providers do reportedly already enable this common use case (e.g., kbst/kustomization and gavinbunney/kubectl).
The actual limitation here seems to be that this providers wants to validate CRs during the planning phase (i.e., a design choice by this provider). This validation is obviously only possible if the CRD already exists. Therefore, I propose the following solution: We add a validate
(better names are welcomed) argument for the kubernetes_manifest
resource. This argument would default to true
in which case the behavior remains exactly as is. However, if the user chooses to set validate = false
, the provider would first check if the resource already exists and, if this isn't the case, the planning stage would simply conclude that the resource needs to be created and not do any further validation (i.e., the user is responsible for the correctness of the provided data and for specifying the correct dependencies so that the resource always gets created after the CRD; this should be documented, and otherwise it will simply fail during the apply phase). This should also be the only behavior that needs to be altered: if the CR already exists, then the CRD also already has to exist. There might be some additional things to consider when implementing this (I'm not yet familiar with the code, unfortunately) but I don't think it should make this provider worse or cause additional limitations. Especially since the current design already has quite a few problems/limitations of that kind: destroying CRDs will also destroy the associated CRs (which this provider cannot track/handle properly); the CRD could be destroyed/altered between the plan and apply phases; the provider fails if the CR already exists on K8s but not in Terraform's state file; dependency tracking between CRDs and CRs is left to the user; and there are likely even more issues/limitations.
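To visualize the proposal, the hypothetical argument (not an existing provider feature) might look like this; the GitRepository contents and the dependency are illustrative:

```hcl
# Proposed, NOT implemented: skip the CRD lookup during planning.
resource "kubernetes_manifest" "git_repository" {
  validate = false # hypothetical flag from this proposal

  manifest = {
    apiVersion = "source.toolkit.fluxcd.io/v1beta1"
    kind       = "GitRepository"
    metadata = {
      name      = "flux-system"
      namespace = "flux-system"
    }
    spec = {
      url = "https://github.com/example/repo" # hypothetical repository
    }
  }

  depends_on = [kubernetes_manifest.install] # the CRDs must be created first
}
```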
What do you think of this proposal/idea? Does this seem like a good idea or did I miss anything relevant (like new issues this validation bypass could create, techical limitations, etc.)? It would be great if we could find a way to finally resolve this issue/limitation as this seems to be quite a major limitation of this provider and something that should be avoidable. Feedback is welcome :)
Hi @primeos-work!
Thanks for taking the time to analyse this problem and make such detailed suggestions!
In order to move this conversation further in a constructive way, please let me make some corrections to your assumptions about how the provider is implemented and why.
The reason why the provider needs access to the CRD during planning is not for validation purposes (in fact, validation is still a missing feature in the manifest resource), but rather because the provider needs to establish the structure (aka schema) of the resource so that Terraform can correctly persist the state for it. Terraform's state storage and the provider protocol are strongly typed. The entire structure of a resource, including the types of all its attributes, needs to be known to Terraform from the creation of the resource and Terraform will assume it stays constant once it's been defined.
The only way we can fully describe a CR resource to Terraform at creation time is by looking up its schema details in its parent CRD. Unless we do this, all the structural information we would have about the CR is the set of attributes the user included in the configuration. This is not a full description of the resource, but rather a subset of it. The minute the user then updates the resource adding a new, previously unspecified attribute value, Terraform will fail and report inconsistent state structure. This will lead to corrupt state storage and will make any update operations on the CR resource impossible.
Because of the above reasons, we cannot avoid requesting the CRD structure during planning.
Hey @alexsomesan, thanks a lot for your fast and detailed response with the corrections!
My bad, I already feared it couldn't be quite that simple, but let's try to find a new way then.
because the provider needs to establish the structure (aka schema) of the resource so that Terraform can correctly persist the state for it
Does this step necessarily have to happen during the planning stage, or would it, e.g., be possible to use only the set of attributes the user included in the configuration for the plan, if the CRD lookup fails, and then perform the CRD lookup during the apply stage (when creating the CR)? That way, the final state would be correct and only the plan would change. For correctness, the apply could even fail if the CR already exists during the apply stage because it was manually/externally created since the planning (IIRC this is even already the current behavior). So from a theoretical standpoint this approach should mostly keep the correctness of applying the plan (the only issue I see is if the CRD changes between the plan and apply stages; I'll try to test the current behavior later; this might be acceptable though because the CRD is versioned and there shouldn't occur any breaking changes anyway). Would this be something that could be implemented, or are there additional technical challenges/restrictions (e.g., the mentioned provider protocol; I'm not sure if the creation you mentioned refers to the creation of the full description/specification/structure during planning, or the creation of the actual object during the application of the plan)?
@primeos-work, unfortunately it's still not that simple (as you realised yourself).
Let's first consider what is the point of having a "plan" step in Terraform. It is to present the user with an accurate preview of exactly what is going to be performed during "apply". This is to give the user a chance to vet the proposed changes and confirm or abort before anything is touched. Once a "plan" is confirmed by the user, "apply" will only perform exactly what was "promised" during the plan, not more not less. That means we establish a trust contract between Terraform and the user. This is a major differentiator of Terraform against other tools that might at first glance seem "simpler". I can appreciate that maybe this value isn't immediately recognised by new Terraform users, but that is a whole other conversation.
Because the plan is a "promise" that should not be broken during apply, Terraform enforces consistency checks on values and types between the plan and the apply result. One of these checks is for schema types of resources and attributes to match between plan and apply. Missing or extra attributes between plan and apply will fail this check and in turn the whole operation fails. This is why there is no other option but to fully construct and "promise" the type of the resource during planning. Doing it later will be rejected by Terraform's consistency checks.
Terraform isn't trying to be unreasonably pedantic here. There is a very solid point about keeping types consistent like this. Consider some resource depending on a kubernetes_manifest
resource. You might want to interpolate the value of one of the manifest attributes into that other resource's configuration. During planning, Terraform will decide on the ordering of operations on these resources, as well as the values displayed for attributes of the dependent resource, based on the availability of a value for the upstream manifest attributes. But if it doesn't know whether there even is an attribute there, how can it make any promises about the structure of the dependent resource? Without type checking, it might expect a string value and instead get a set of booleans at apply time. This would render the whole promise of the plan useless, and thus the whole point (and effort) of having a planning mechanism in Terraform.
In conclusion, if the planning phase is of little value to one's use-case then there are alternative providers that decided to opt out of offering these guarantees (a few mentioned above in this thread). These providers just treat the resources as blocks of unstructured text (from Terraform's POV) that they just throw at the API. While they do provide a seemingly simpler first-use experience, they will soon fall short when it comes to combining the resources in more complex configurations.
There is no point in us creating yet another provider with the same limitations. Instead, this provider tries to offer as much of Terraform's value propositions as possible and that includes properly following the plan-apply workflow. The solution for the problem we are discussing here is best solved in Terraform itself, in order to preserve all the guarantees Terraform is designed to offer. We are having ongoing conversations with the Terraform team about approaches to it and they are experimenting with various solutions. Once they settle on an approach, we will be discussing it with the wider audience.
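The interpolation scenario described above could be sketched like this (resource names and the GitRepository spec are illustrative; object is the provider's computed view of the resource after apply):

```hcl
resource "kubernetes_manifest" "repo" {
  manifest = {
    apiVersion = "source.toolkit.fluxcd.io/v1beta1"
    kind       = "GitRepository"
    metadata   = { name = "app", namespace = "flux-system" }
    spec       = { url = "https://github.com/example/app" } # hypothetical
  }
}

# Terraform can only type-check this reference if it knows, at plan time,
# that spec.url exists and is a string, which requires the CRD schema.
output "repo_url" {
  value = kubernetes_manifest.repo.object.spec.url
}
```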
Thanks again for all of your insights @alexsomesan!
If I may share my thoughts (from a user's perspective) on Terraform's planning and state features / design decisions (a bit out of scope here but I feel like it's relevant - then again it shouldn't be something new): I do see the value of Terraform's planning and it does indeed provide many great advantages (that I like/appreciate as a user). However, it obviously also comes with many drawbacks that can be quite a PITA (if I may say so), especially when first working with Terraform. But as you already mentioned it can be a huge benefit in the long run. Overall, I'm quite split on this design decision. I do find it nice/useful but its strictness also often seems to limit practicability (or at least comfort / ease of use; in addition, it would be nice if diverged state could be imported semi-automatically / detected better, etc.). Terraform's strict and complete state tracking especially seems a bit out of place or at least redundant in cases like K8s where the complete, declarative state is already available (sure, it still offers advantages / additional features but I'd say in such cases it provides fewer advantages and more drawbacks than in other cases). This will also result in multiple issues/annoyances if the K8s cluster's state diverges from what Terraform thinks it still is, but this divergence should obviously be avoided by the user in the first place. And even with all of the nice planning features you can still do things like using the kubernetes_manifest resource to change a CRD in a breaking way (or even deleting it), which can cause lots of issues for CRs (other kubernetes_manifest resources) that depend on this CRD (e.g., causing Terraform to fail later in the apply operation or in the next plan/apply operation without any additional changes). This can rightfully be considered misuse, but in a perfect world without misuse Terraform could be much simpler (e.g., I love typing but it's kinda only there to prevent misuse).
-> Anyway, what I'm trying to say is that it might be nice if Terraform could be a bit less strict in some places (in the sense of "perfect is the enemy of good") but that's obviously also a very dangerous thing to do and needs to be considered very carefully (to avoid making things worse / opening Pandora's box).
Regarding the motivation of the current behavior/design (dependencies, variable interpolation, promises, etc.):
But if it doesn't know if there even is an attribute there, how can it make any promises about the structure of the dependent resource?
In that case it shouldn't (and I'd say that's fine). IMO the normal behavior of kubernetes_manifest should remain unchanged and there should either be an additional argument (like the previously proposed validate argument that needs a better name) or another resource for this special "mode"/hack where the CRD cannot be looked up. In that case the resource would either not output anything (which would be the easiest and should be fine for users) or it would only output the attributes specified in the Terraform configuration (which I wouldn't suggest, as it comes with a lot of challenges (having to infer the types and trying to validate them once the CRD is available) and one should be able to use Terraform variables, data sources, etc. instead). This would trade one feature for another, but I think it should be fine for most if not all use cases here (the main use case that I see would be to depend on a default value from the CRD that isn't part of the Terraform configuration, but in practice there should always be pretty acceptable workarounds like making that value part of the Terraform configuration and setting that attribute explicitly).
This would render the whole promise of the plan useless and thus the whole point (and effort) of having a planning mechanism in Terraform.
So with an implementation as described above this problem should be avoided (the plan wouldn't be complete, but this is already the case as Terraform cannot know everything in advance anyway) and only some features of kubernetes_manifest would be limited. Overall, I'd still consider it a significant improvement, as it should be better than the current alternatives (using unverified 3rd-party providers with less elegant approaches, using the -target option, splitting the Terraform configuration up into multiple root modules (so that the CRDs are created before the CRs), or using even worse hacks (I've seen some that likely shouldn't even be shared)).
In conclusion, if the planning phase is of little value to one's use-case then there are alternative providers that decided to opt out of offering these guarantees (a few mentioned above in this thread). These providers just treat the resources as blocks of unstructured text (from Terraform's POV) that they just throw at the API. While they do provide a seemingly simpler first-use experience, they will soon fall short when it comes to combining the resources in more complex configurations.
I agree with all of that. Tbh it'd be nice though to have a more official one (in terms of reviews/verifications, maintenance / community support, etc.), but that is of course out of scope for this issue (and a challenge in general).
The solution for the problem we are discussing here is best solved in Terraform itself, in order to preserve all the guarantees Terraform is designed to offer. We are having ongoing conversations with the Terraform team about approaches to it and they are experimenting with various solutions. Once they settle on an approach, we will be discussing it with the wider audience.
That sounds very interesting! Thanks a lot for looking into this :) I'm excited to learn more about this so, if possible, please do share a link here once that discussion is public (unless it'll be discussed in this issue anyway).
I hope my thoughts/suggestions here (from a user's perspective) do provide some value and that some parts of it might be considered. I wish I could be of more help, but unfortunately my knowledge of Terraform, the code/implementation, design decisions, data structures, protocols, etc. is still by far too limited for more concrete and reasonable ideas.
In my case, I'm using the helm provider to install fluxcd and trying to make a kubernetes_manifest with a GitRepository object.
Simply adding a skip_kind_check = true
would be enough to avoid this problem.
@protosam this option skip_kind_check does not exist in kubernetes_manifest; it exists in kubectl_manifest:
https://registry.terraform.io/providers/gavinbunney/kubectl/latest/docs/resources/kubectl_manifest#validate_schema
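For reference, per those docs, the kubectl provider's flag is used like this (the file name is hypothetical):

```hcl
resource "kubectl_manifest" "cluster_issuer" {
  # Skip schema validation so the plan succeeds before the CRD exists.
  validate_schema = false

  yaml_body = file("${path.module}/cluster-issuer.yaml") # hypothetical file
}
```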
@generalovmaksim The hope would be to add one to kubernetes_manifest, since there is currently not one. This has bitten me quite a few times as well.
I think what Terraform does by checking the kind type during the planning stage still makes sense to me. However, I'm also a bit bothered by having to split my Terraform workspace in two to have cert-manager provisioned before all the issuer resources that depend on it.
Ideally, it would be great if the CRD definition and CRD implementation resources can sit inside the same Terraform workspace in harmony without sacrificing type safety check.
@generalovmaksim my comment is a suggestion to add an input of skip_kind_check
to kubernetes_manifest
.
kubernetes_manifest
validates group, kind, and version without a way to override the behavior. This is a poor assumption about how to interact with Kubernetes clusters out of the box, because Kubernetes is designed to be extended with custom resources.
Ideally, it would be great if the CRD definition and CRD implementation resources can sit inside the same Terraform workspace in harmony without sacrificing type safety check.
Part of the problem here is that Terraform was built on a premise of being predictable: calculate everything beforehand and ensure it is applied reliably.
Kubernetes is fundamentally different from Terraform. The idea of describing a state and eventually reconciling that state flies directly against the fundamental assumptions that Terraform wants to uphold. Reconciling a state is not linear A->B calculations and it's going to be different depending on the controller for the resource in question.
If Terraform is to effectively have interoperability with Kubernetes, the provider must provide mechanisms to break predictability.
break predictability.
But that's the whole point of terraform: what you see in the plan, is what you get in the actual final apply, and when you want to verify that this is the case still, you plan again and expect no diff.
If there is a diff, it means that something changed in the remote resource state, and needs to either be aligned in code, or be planned and applied again.
If there is an unpredictable diff, then one can specify a lifecycle rule to ignore certain attributes. So if some cluster state shouldn't be controlled by terraform, but it should be reconciled only via its controller manager, then there should be a mechanism to provide this capability.
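As a sketch of that mechanism with a built-in resource (names are hypothetical), Terraform's lifecycle block can already cede specific attributes to an in-cluster controller:

```hcl
resource "kubernetes_config_map" "demo" {
  metadata {
    name      = "demo"
    namespace = "default"
  }

  data = {
    owner = "terraform" # initial value only
  }

  lifecycle {
    # Let a controller reconcile data without Terraform reporting
    # a diff on every subsequent plan.
    ignore_changes = [data]
  }
}
```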
Feel like you missed the part where Kubernetes is fundamentally different. If you plan to use kubernetes for what it was built for, this provider's going to suck.
The helm provider is a bit better. You can have it kickstart your in-cluster resources.
- Kubernetes reconciles states.
- Terraform manages defined states.
These are competing ideas.
Quite surprisingly, this issue is only labelled "question". That makes the provider half-usable.
This is really problematic. Vote it up to get it fixed. I'm using kubernetes_manifest and trying to create a module. I'm tempted to just use kubectl diff and small bash functions in the pipeline, but I would prefer to use Terraform for the existing platform.
Here is a quick code example, if it is useful to anyone, to throw into the main.sh; it uses the helm-diff plugin and runs before your Terraform code.
You could also install all your Helm repos and releases in a different Terraform folder using a different backend, and then cd into the Terraform folder that creates the external-secrets resources based off the CRDs that were created by helm upgrade/install.
#!/bin/bash
set -e

# Add the chart repo and the helm-diff plugin if they are missing.
helm repo list | grep -q '^external-secrets' \
  || helm repo add external-secrets https://charts.external-secrets.io
helm plugin list | grep -q '^diff' \
  || helm plugin install https://github.com/databus23/helm-diff
helm diff version

kubectl create namespace external-secrets --dry-run=client -o yaml | kubectl apply -f -

helm_cmd(){
  # $1 is optionally "diff" to preview the release instead of applying it.
  helm $1 upgrade --install external-secrets \
    external-secrets/external-secrets \
    -n external-secrets \
    --set installCRDs=true
}

lines=$(helm_cmd diff)
# Use -gt for a numeric comparison; ">" inside [[ ]] compares strings.
if [[ $(echo -n "$lines" | wc -l) -gt 0 ]]; then
  echo "$lines" | grep "^+\|^-" || true
  echo "Helm changes detected."
  echo "Running helm upgrade in 5."; sleep 5
  helm_cmd
fi
How about introducing delays and long-running async processes into the apply process?
Terraform 2.0
Example 1: a Kubernetes operator creates infrastructure on which the next infrastructure branch depends. Then some refresh + apply can happen.
Example 2: a dedicated-server hosting provider adds 20 servers in a month. Then, automatically, those servers could get their planned infrastructure. In big clouds you get that immediately; in hosting centers, after purchase and installation have happened.
Terraform 2.0 is NoSQL based, with a server.
With rollbacks, backups, automated tests...
I've found a solution that keeps everything in one configuration, but it has to be run twice:
resource "helm_release" "cert_manager" {
name = "cert-manager"
chart = "cert-manager"
repository = "https://charts.jetstack.io"
namespace = "infra"
...
}
data "kubernetes_resources" "cert_manager" {
api_version = "apps/v1"
kind = "Deployment"
namespace = "infra"
field_selector = "metadata.name=cert-manager"
}
resource "kubernetes_manifest" "clusterissuer_selfsigned" {
count = length(data.kubernetes_resources.cert_manager.objects) > 0 ? 1 : 0
...
depends_on = [helm_release.cert_manager]
}
@EBCEEB Thanks for trying to find a workaround!
Did you manage to get around this?
When trying it with the same setup (cert-manager + clusterissuer manifest) I get:
│ Error: Invalid count argument
│
│   on cert_manager.tf line 54, in resource "kubernetes_manifest" "cert-manager_clusterissuer_letsencrypt-prd":
│   54: count = length(data.kubernetes_resources.cert_manager.objects) > 0 ? 1 : 0
│
│ The "count" value depends on resource attributes that cannot be determined
│ until apply, so Terraform cannot predict how many instances will be
│ created. To work around this, use the -target argument to first apply only
│ the resources that the count depends on.
@EBCEEB Thanks for trying to find a workaround!
Did you manage to get around this?
Sorry, I had a previously deployed k8s cluster, so I didn't face that issue.
On clean infra Terraform can't even configure the kubernetes provider:
│ cannot create discovery client: no client config
Looks like my solution is not a solution :-(
the solution is to write a helm chart for the CRDs
@simonfelding Agreed. I recently migrated from the kubectl provider to local Helm charts.
Somebody should join HashiCorp and the team responsible for Terraform and lead the changes to handle this case.
After issues like this you literally lose trust in humanity. And this is critical: if aliens or an AI ever show up, this issue could be one of the reasons they refuse to make any deal with humanity.
Humans, let's be more responsive. HashiCorp, what do you think? Are you able to provide a valid solution?
To somehow save humanity, here is a workaround in the "ok" range: #1380 (comment)
Although in my case I used a fresher Helm chart: https://artifacthub.io/packages/helm/main/raw
I like this one the most of all the workarounds I've found across the many related issues...
I ran into this issue when deploying the Grafana Agent Operator using a combination of Terraform and Helm. The issue I faced is that Helm needed to install the CRDs before I could reference them in the manifest provided by Grafana. Then I would get errors like no matches for kind "GrafanaAgent" in group "monitoring.grafana.com".
To solve this, I kept the resources that referenced the CRDs in a separate manifest and then applied the manifest using the kubectl_file_documents data source and kubernetes_manifest resource, referenced here. Make sure to add a depends_on in your kubernetes_manifest resource to ensure it runs the Helm install first.
Perhaps slightly off-topic, but I thought I would share my experiences of using the kubectl provider and give the reasons why I migrated to using local Helm charts in place of it. It's worth noting that the situation is quite different when you are using Terraform to generate YAML than when using static YAML files...
First, I had to activate the wait option on my kubectl_manifest resources because terraform destroy for those resources was returning too early. This could lead to a situation where other resources required to support webhooks involved in the destruction of the implied Kubernetes objects were removed too soon. Not the end of the world.
Next, frequent "inconsistent final plan" errors from the provider forced me to activate the force_new option on the kubectl_manifest resources. After this change, I saw many cases where terraform apply apparently succeeded but the final state in Kubernetes was that several expected resources were missing. They were recreated after running terraform apply a second time. So, the deployment contract was essentially broken.
I then noticed that issues in the kubectl provider GitHub repo were starting to pile up and there was no update since last year. This situation is ongoing.
One disadvantage of the local Helm chart approach is that Terraform has no visibility of the Kubernetes changes implied by any Helm chart. So, it's possible that Kubernetes changes made outside of Terraform will not cause Terraform diffs.
@primeos-work, unfortunately it's still not that simple (as you realised yourself).
Would you consider a solution where the user supplies the CRD (or a URL to it) in an attribute, so the kubernetes_manifest resource can infer the schema from that spec instead of consulting the running cluster?
Let's say I want to provision a ClusterSecretStore kind manifest; that would look like:
resource "kubernetes_manifest" "cluster-secret-store" {
# This is used by provider to fetch resource schema, avoiding getting it from the cluster
crd_url = "https://raw.githubusercontent.com/external-secrets/external-secrets/main/config/crds/bases/external-secrets.io_secretstores.yaml"
manifest = {
apiVersion = "external-secrets.io/v1beta1"
kind = "ClusterSecretStore"
metadata = {
name = "aws-secrets-manager"
}
spec = {
provider = {
aws = {
service = "SecretsManager"
region = "us-east-1"
}
}
}
}
}
I ran into this issue when deploying the Grafana Agent Operator using a combination of Terraform and Helm. The issue I faced is that Helm needed to install the CRDs before I could reference them in the manifest provided by Grafana. Then I would get errors like no matches for kind "GrafanaAgent" in group "monitoring.grafana.com".
To solve this, I kept the resources that referenced the CRDs in a separate manifest and then applied the manifest using the kubectl_file_documents data source and kubernetes_manifest resource, referenced here. Make sure to add a depends_on in your kubernetes_manifest resource to ensure it runs the Helm install first.
Thank you for mentioning this. I was stuck and this solves my problem.
rancher(cert-manager) and datadog custom metrics are both impacted by this as well
If the HashiCorp team does not want to solve it, consider removing this resource from the provider, as it's not really usable for most Kubernetes use cases. CRDs are nowadays almost everywhere; they're not the esoteric, occasional use case they were years ago. At least add a big CAPS warning at the top of the documentation that it's not usable for most use cases and you don't plan to make it so.
I don't mind enforcing the Terraform philosophy, but in this case it goes against the Kubernetes philosophy, so I am a bit angry I wasted time in this rabbit hole instead of going with hacky bash in a null_resource or the gavinbunney provider, which I don't like, but at least those always worked with CRDs.
The target workaround works perfectly. Thanks.
I really think it is an acceptable workaround for these cases. I understand the problem, and that fixing it as things stand now would take a lot of refactoring work.
But I really don't think the schema is indispensable in the case of non-existing resources. The promise to "create a new object" when applying should be enough, right? Knowing at plan time exactly which values will be used is not that useful for an object that doesn't exist yet.
My typical workaround is to just make a local helm chart directory with the CR I need to install inside of that - then I install it via terraform with a helm_release with the chart pointing to the local directory.
I'd much rather the kubernetes provider could deal with this in a more sane way, but this does the job in the least horrible way for me.
I hit this today when trying to replace the use of the kubectl provider in this example from the fluxcd documentation. The target workaround does work, but it introduces a manual step into the cluster vending machine workflow I am building, so it is less than ideal.
I like the look of this idea, but I would really like the option to simply have the resource not validate during the plan, because in this case the CRDs are created as part of the bootstrap resource in the fluxcd provider, so with a depends_on Terraform doesn't create these resources until the bootstrap is done anyway, and they succeed when applied.
This is an example:
resource "flux_bootstrap_git" "this" {
path = "cluster/flux-system"
disable_secret_creation = true
version = var.flux_version
secret_name = local.flux_operator_secret_name
}
resource "kubernetes_manifest" "fleet_config_gitsource" {
depends_on = [flux_bootstrap_git.this]
manifest = yamldecode(local.fleet_config_gitsource_manifest)
}
May have been said already, but another potential way of solving this would be to:
- Allow a resource to declare that it provides X CRD.
- Have Terraform simply trust the provides tag until the apply phase.
- If, during apply, after applying the provider, the resource does not exist, then fail.
Does this make sense?
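Purely as a hypothetical illustration of that proposal (provides is not a real attribute of any existing provider), the declaration might look something like:

```hcl
resource "helm_release" "cert_manager" {
  name  = "cert-manager"
  chart = "cert-manager"

  # Hypothetical attribute: declare which CRDs this release will create,
  # so that plans for dependent resources can trust they will exist at
  # apply time instead of querying the cluster during plan.
  provides = [
    "clusterissuers.cert-manager.io",
    "certificates.cert-manager.io",
  ]
}
```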
any update on this issue?
The kubectl provider has been abandoned and there's an existing issue where non-namespaced resources fail to get created. This prevents one of the primary workarounds to the issue documented here.
Is it possible to get an update on if this is planning on being addressed?
The kubectl provider has been abandoned and there's an existing issue where non-namespaced resources fail to get created. This prevents one of the primary workarounds to the issue documented here.
Is it possible to get an update on if this is planning on being addressed?
@jmgilman check https://github.com/alekc/terraform-provider-kubectl, I haven't used it, but someone mentioned it in an issue of the other repository
Are there any open PRs that address this issue? It looks like the validation should treat a missing CRD the same as a missing resource, since in k8s it's not possible to have a resource created while its CRD is missing (IIRC).
Any update on this at all? It means we're having to put everything in separate stacks.
this sent me back to pulumi
@alexsomesan could the kubernetes provider include a second resource (let's just call it kubernetes_manifest_functional) that does not make all the plan promise guarantees you describe, but does satisfy all the use cases defined here?
@alexsomesan could the kubernetes provider include a second resource (let's just call it kubernetes_manifest_functional) that does not make all the plan promise guarantees you describe, but does satisfy all the use cases defined here?
But this defeats the point, as you'd basically never be able to use the non-functional version.
IMO most of these "errors" should be folded into "the manifest does not exist":
- No connection to cluster
- No CRD
- No resource instance
Edit: I of course agree that this is best solved inside Terraform itself, but if wishes were fishes.
Really, after years we still have this issue?
Why wouldn't this work?
May have been said already, but another potential way of solving this would be to:
- Allow a resource to declare that it provides X CRD.
- Have Terraform simply trust the provides tag until the apply phase.
- If, during apply, after applying the provider, the resource does not exist, then fail.
Does this make sense?
So meanwhile the only way to resolve the issue is to comment out the kubernetes_manifest resources that rely on CRDs, apply, and then uncomment the manifests and reapply? Not sure I'm understanding this correctly.
Not far off from the three-year mark since I opened this issue. Even back then we were opposed to deploying k8s resources via Terraform, but deploying flux with it to help bootstrap the cluster seemed like a good idea; this was before flux offered a provider to do the bootstrapping, and we migrated to that once it was available, IIRC.
I suppose the lesson here is: don't manage your k8s resources in TF unless you absolutely have to, as (and this is probably not a surprise to most folks) how k8s manages its resources isn't very compatible with TF.
Reflecting on the previous comment here, I propose that Terraform could benefit from supporting multiple plan + apply cycles. Specifically, if Terraform detects that a Custom Resource (CR) cannot be planned because its CRD isnβt yet installed, it could defer the CR to a subsequent cycle, provided that it can still make progress in the current cycle.
During the apply phase, Terraform would only execute the actions from the completed plan and then indicate if additional cycles are needed.
This approach would be more efficient than the current workarounds, which involve separating resources into different sets of Terraform files, or temporarily commenting out parts of the code or using conditional logic to manage resource dependencies. Such enhancements would streamline the development process, allowing Terraform to handle resource ordering more intuitively and reducing the manual effort required to ensure resources are applied in the correct order.
Update:
- It looks like there is some active work in Terraform to this end: hashicorp/terraform#30937
The best workaround I've found is to package the custom resources YAMLs into a small Helm chart, bundle it with Terraform module code, and then install it using the helm_release resource:
resource "helm_release" "custom_resources" {
name = "custom_resources"
chart = "${path.module}/custom_resources"
depends_on = [
helm_release.crds
]
}
The best workaround I've found is to package the custom resources YAMLs into a small Helm chart, bundle it with Terraform module code, and then install it using the helm_release resource:
resource "helm_release" "custom_resources" {
  name  = "custom_resources"
  chart = "${path.module}/custom_resources"
  depends_on = [
    helm_release.crds
  ]
}
I did exactly the same thing for the cert-manager clusterissuer.
cert-manager.tf
charts/cert-manager-clusterissuer
├── Chart.yaml
└── templates/
    ├── cluster_issuer_prod.yaml
    └── cluster_issuer_staging.yaml
cert-manager.tf:
locals {
solvers_ingress_class_name = "ingress-nginx"
}
resource "kubernetes_namespace_v1" "cert_manager" {
metadata {
name = "cert-manager"
}
}
resource "helm_release" "cert_manager" {
name = kubernetes_namespace_v1.cert_manager.metadata.0.name
repository = "https://charts.jetstack.io"
chart = kubernetes_namespace_v1.cert_manager.metadata.0.name
version = var.cert_manager_helm_version
namespace = kubernetes_namespace_v1.cert_manager.metadata.0.name
max_history = 1
set {
name = "installCRDs"
value = "true"
}
}
resource "helm_release" "cert_manager_clusterissuer" {
name = "cert-manager-clusterissuer"
chart = "${path.module}/charts/cert-manager-clusterissuer"
max_history = 1
set {
name = "acme_email"
value = var.acme_email
}
set {
name = "solvers_ingress_class_name"
value = local.solvers_ingress_class_name
}
depends_on = [
helm_release.cert_manager
]
}
Chart.yaml:
apiVersion: v2
name: cert-manager-clusterissuer
version: 0.1.0
cluster_issuer_prod.yaml:
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
name: letsencrypt-prod
spec:
acme:
server: https://acme-v02.api.letsencrypt.org/directory
email: {{ .Values.acme_email }}
privateKeySecretRef:
name: letsencrypt-prod
solvers:
- http01:
ingress:
ingressClassName: {{ .Values.solvers_ingress_class_name }}
cluster_issuer_staging.yaml:
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
name: letsencrypt-staging
spec:
acme:
server: https://acme-staging-v02.api.letsencrypt.org/directory
email: {{ .Values.acme_email }}
privateKeySecretRef:
name: letsencrypt-staging
solvers:
- http01:
ingress:
ingressClassName: {{ .Values.solvers_ingress_class_name }}