Using provider config context
Closed this issue · 4 comments
I'm not sure if this is already possible, but I'd like to use the config context of clusters managed by this provider. My use case is as follows:
- I'm creating a cluster using this provider, terraform-provider-eksctl.
- I'm creating various resources in the cluster using the Kubernetes provider.
The README mentions a computed output, but I'm unsure how to use it.
My current workaround simply entails using an explicit dependency via `depends_on` for my Kubernetes resources. However, I can see this being a problem in a multi-context scenario.
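As a sketch, that workaround looks something like the following (the `kubernetes_namespace` resource and its name are assumptions for illustration, not from my actual config):

```hcl
# Hypothetical sketch of the depends_on workaround: force a Kubernetes
# resource to wait until the eksctl-managed cluster exists.
resource "kubernetes_namespace" "example" {
  metadata {
    name = "example"
  }

  # Explicit dependency: created only after the cluster resource.
  depends_on = [eksctl_cluster.cluster]
}
```

This only sequences creation; it does not tell the Kubernetes provider *which* cluster to talk to, which is why it breaks down with multiple contexts.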
It seems that `eksctl utils write-kubeconfig` may be useful.
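For reference, an invocation might look like this (the cluster name and output path here are placeholders, not values from my setup):

```
# Write the cluster's kubeconfig to a dedicated file instead of ~/.kube/config
eksctl utils write-kubeconfig --cluster my-cluster --kubeconfig ./kubeconfig
```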
@dalberto Thanks for trying this provider! FYI, recent versions of the eksctl provider output `kubeconfig_path`, which can be used by e.g. the helmfile resource.
To be clear, Helmfile resource works because it can configure which kubeconfig to use "per resource", not "per provider" like the kubernetes provider does. Probably the situation is the same for the helm provider, right?
For that reason, I was also considering implementing a dedicated Terraform provider/resource for triggering kubectl/helm operations that supports setting kubeconfig per tf resource. Does that sound useful to you, too?
@mumoshu thanks for the information regarding `kubeconfig_path`. I've tested the newest version (v0.6.0) and can confirm this is useful. However, it seems like I'd also need the `config_context` in order to provide enough authentication information for the built-in provider.
I believe a provider/resource that allows per-resource operations would be useful, but for now I'd be satisfied with configuring the built-in provider and have those changes apply globally to all resources.
Relatedly, is it possible to get the generated cluster name? I noticed that the provider appends a suffix, so the actual cluster name differs from what I specify. Alternatively, is it possible to disable the suffix generation? I can see a workaround in which cluster auth/connection information is derived from a data source, but the full cluster name would be required. I've also run into issues provisioning resources where the "real" cluster name does not match the prefix supplied (the alb-ingress-controller, for example). For now, I've resorted to deriving the "real" name via `"${var.cluster_name_prefix}-${eksctl_cluster.cluster.id}"`, similar to how it's done
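As a sketch, that derivation could be centralized in a local value so it's computed once (the variable and resource names here are assumptions):

```hcl
# Hypothetical: derive the "real" cluster name (prefix + generated suffix)
# once and reuse it wherever the full name is needed.
locals {
  real_cluster_name = "${var.cluster_name_prefix}-${eksctl_cluster.cluster.id}"
}
```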
> Relatedly, is it possible to get the generated cluster name? I noticed that the provider supplies a suffix and so the actual cluster name is different.
Yes. If you use the new `eksctl_cluster` resource rather than the old/renamed `eksctl_cluster_deployment`, the `name` you specified is the exact cluster name.
With the exact cluster name, the aws_eks_cluster_auth data source can be used to configure the Kubernetes provider.
While somewhat inconvenient, according to the Kubernetes provider docs, it is a best practice to have two separate `apply` commands: one for the cluster and one for the resources within the cluster.
For posterity I did the following:
```hcl
data "aws_eks_cluster" "cluster_data" {
  name = eksctl_cluster.cluster.name
}

data "aws_eks_cluster_auth" "cluster_auth" {
  name = eksctl_cluster.cluster.name
}

provider "kubernetes" {
  host                   = data.aws_eks_cluster.cluster_data.endpoint
  cluster_ca_certificate = base64decode(data.aws_eks_cluster.cluster_data.certificate_authority[0].data)
  token                  = data.aws_eks_cluster_auth.cluster_auth.token
  load_config_file       = false
}
```
Subsequently,

```
terraform apply -target='<path>.eksctl_cluster.cluster'
terraform apply -target='<path>.k8s_resources'
```

- Creates a cluster.
- Creates resources unambiguously in that cluster.
Thanks again for your help and your work on this provider.