load_config_file = false not respected when the current kubeconfig context is empty
jsbjair opened this issue · 9 comments
Terraform Version
Terraform v0.12.2
- provider.external v1.2.0
- provider.google v2.9.1
- provider.helm v0.10.0
- provider.kubernetes v1.7.0
Affected Resource(s)
- provider "kubernetes"
Terraform Configuration Files
data "google_client_config" "current" {}
provider "kubernetes" {
alias = "default"
version = "~> 1.7"
load_config_file = false
host = "${module.cluster.endpoint}"
token = "${data.google_client_config.current.access_token}"
client_certificate = "${base64decode(module.cluster.client_certificate)}"
client_key = "${base64decode(module.cluster.client_key)}"
cluster_ca_certificate = "${base64decode(module.cluster.cluster_ca_certificate)}"
}
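For reference, the module.cluster outputs consumed above are not shown in the issue; a hypothetical sketch of what they might look like, assuming the module wraps a google_container_cluster resource named "primary":

# Hypothetical outputs inside module.cluster, assuming it wraps a
# google_container_cluster resource named "primary".
output "endpoint" {
  value = google_container_cluster.primary.endpoint
}

output "client_certificate" {
  value = google_container_cluster.primary.master_auth[0].client_certificate
}

output "client_key" {
  value = google_container_cluster.primary.master_auth[0].client_key
}

output "cluster_ca_certificate" {
  value = google_container_cluster.primary.master_auth[0].cluster_ca_certificate
}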
Error
If the kube context is set to an empty context, terraform apply shows an error:
Error: Failed to load config (/${path_to_home}/.kube/config; default context): invalid configuration: no configuration has been provided
on line 0:
(source code not available)
Expected Behavior
Terraform should apply the configuration without trying to read the .kube/config file.
Actual Behavior
Error: Failed to load config (/${path_to_home}/.kube/config; default context): invalid configuration: no configuration has been provided
Steps to Reproduce
1. Create an empty kube context and switch to it:
   kubectl config use-context a

   CURRENT   NAME   CLUSTER   AUTHINFO   NAMESPACE
   *         a
2. Configure a Kubernetes provider with load_config_file = false.
3. Run terraform apply. You should see the error:
   Error: Failed to load config (${path_to_home}/.kube/config; default context): invalid configuration: no configuration has been provided
4. Switch to a valid context, and terraform apply now works:

   CURRENT   NAME       CLUSTER    AUTHINFO   NAMESPACE
             a
   *         minikube   minikube   minikube
Important Factoids
References
I found that if there is a kubernetes resource at the same level (root dir) as the provider, the resource needs the provider entry (here provider = "kubernetes.default") defined in order to use the configuration of the dependent cluster; it will then respect load_config_file:
resource "kubernetes_service_account" "helm" {
provider = "kubernetes.default"
depends_on = [module.cluster]
metadata {
name = "${var.tiller_account}"
namespace = "kube-system"
}
automount_service_account_token = true
}
Does the cluster module in this case actually create a GKE cluster? If so, you may be running into the upstream progressive apply issue: hashicorp/terraform#4149
You cannot currently (reliably) chain together a provider's config with the output of a resource.
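A commonly suggested mitigation, sketched below, is to feed the provider from data sources rather than resource outputs, so the connection details can be resolved once the cluster already exists; argument names may vary with the google provider version, and the cluster name and location here are hypothetical:

# Sketch: read the existing cluster through a data source instead of
# chaining the provider config to a resource output.
data "google_container_cluster" "gke" {
  name     = "my-cluster"
  location = "us-central1-a"
}

data "google_client_config" "current" {}

provider "kubernetes" {
  load_config_file       = false
  host                   = "https://${data.google_container_cluster.gke.endpoint}"
  token                  = data.google_client_config.current.access_token
  cluster_ca_certificate = base64decode(data.google_container_cluster.gke.master_auth[0].cluster_ca_certificate)
}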
I have the same issue, but with the resource "google_container_cluster". My configuration is almost the same as the one in the Terraform docs, and I have a GOOGLE_APPLICATION_CREDENTIALS environment variable pointing to a service account.
If I run terraform plan without a kubectl context that actually works (the default cluster must be up and running, not merely defined or non-empty), I get:
Error: Error refreshing state: 1 error occurred:
* provider.kubernetes: Failed to load config (/Users/srael/.kube/config; default context): invalid configuration: no configuration has been provided
Does this mean that Terraform is actually calling my default cluster for some reason?
Apologies for reviving an abandoned thread, but I had a similar setup and was able to find a solution.
In my case, while I was configuring the kubernetes provider correctly, I had forgotten to explicitly configure the helm provider I was using, which was also trying to configure/reach out to my Kubernetes cluster. The error message about the missing config file (and other problems I was facing) was actually from the helm provider. Try configuring your helm provider with the same set of parameters as you did your kubernetes provider.
Hope this helps someone else out.
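For illustration, a sketch of explicitly configuring the helm provider alongside the kubernetes provider, assuming the helm provider version used in this issue accepts the same connection arguments in its nested kubernetes block:

# Sketch: give the helm provider the same connection details so it does
# not fall back to ~/.kube/config.
provider "helm" {
  kubernetes {
    load_config_file       = false
    host                   = "${module.cluster.endpoint}"
    token                  = "${data.google_client_config.current.access_token}"
    cluster_ca_certificate = "${base64decode(module.cluster.cluster_ca_certificate)}"
  }
}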
You cannot currently (reliably) chain together a provider's config with the output of a resource.
Are there any sane workarounds?
Very curious how we should tackle this. I have a Scaleway provider that provides the host, token, and ca_certificate for my kubernetes provider, but if I want to start a fresh one, I receive:
Error: Failed to initialize config: invalid configuration: no configuration has been provided
on modules/kubernetes/main.tf line 1, in provider "kubernetes":
1: provider "kubernetes" {
provider "kubernetes" {
load_config_file = false
cluster_ca_certificate = base64decode(var.cluster_ca_certificate)
host = var.host
token = var.token
}
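For completeness, the variables that provider block consumes would be declared in the module roughly as follows (a sketch; the Scaleway side that populates them is not shown in this comment and is omitted here):

# Hypothetical modules/kubernetes/variables.tf matching the provider block above.
variable "host" {
  type = string
}

variable "token" {
  type = string
}

variable "cluster_ca_certificate" {
  type = string
}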
Hi, everyone! I'm doing some testing to determine if this issue still exists. The original issue might have been fixed in the kubernetes provider, since I was able to use load_config_file = false today. I also was not able to use the reproducer to create an invalid kubeconfig, since kubectl no longer supports that behavior. It seems to check that the context is valid before letting you set it:
[dakini@dax issue_521]$ kubectl config use-context a
error: no context exists with the name: "a"
Here's the test I did to ensure the load_config_file = false functionality is working:
First, to set up the test, I moved my kubeconfig directory. This will ensure it doesn't load the default config.
[dakini@dax issue_521]$ mv ~/.kube ~/.kube-backup
[dakini@dax issue_521]$ kubectl get pods
W0414 14:15:05.678666 221921 loader.go:223] Config not found: /home/dakini/.kube/config
The connection to the server localhost:8080 was refused - did you specify the right host or port?
Then I copy/pasted the ca cert, client cert, and client key fields from my kubeconfig directly into my main.tf, since this is just a quick, local test. (I shortened those values for readability in the snippet below).
[dakini@dax issue_521]$ cat main.tf
provider "kubernetes" {
version = "1.11.1"
load_config_file = false
host = "https://127.0.0.1:32768"
client_certificate = base64decode("LS0tLS1CsaXZydtLQo=")
client_key = base64decode("LS0tLStLQo=")
cluster_ca_certificate = base64decode("LS0tDZ0Fwa2ppStLQo=")
}
That worked as expected and provisioned my resource. Next I looked at the comment above (#521 (comment)) that mentioned an issue using a token.
I used this as my terraform config (shortening the token and cert values again for readability):
[dakini@dax issue_521]$ cat main.tf
provider "kubernetes" {
version = "1.11.1"
load_config_file = false
host = "https://127.0.0.1:32770"
token = base64decode("ZXlKaGSjl3RXRsV2c=")
cluster_ca_certificate = base64decode("LS0tLS1CEUtLS0tLQo=")
}
resource "kubernetes_deployment" "example" {
metadata {
name = "terraform-example"
[...]
That worked as expected too. So I think this must have been solved as part of another issue.
Let me know if anyone is still seeing any problems with this and we can dig deeper. Thanks!
Did you try deleting and recreating the cluster?
(This applies when the provider is reading its details from another resource, which is the cluster.)
In my experience, the provider details aren't updated in that case, which is what caused this issue for me.
I'm going to lock this issue because it has been closed for 30 days. This helps our maintainers find and focus on the active issues.
If you feel this issue should be reopened, we encourage creating a new issue linking back to this one for added context. If you feel I made an error, please reach out to my human friends at hashibot-feedback@hashicorp.com. Thanks!