banzaicloud/terraform-provider-k8s

Statically defined credentials ignored

G-Goldstein opened this issue · 5 comments

The README in this repo suggests we can define static credentials like this:

provider "k8s" {
  load_config_file = "false"

  host = "https://104.196.242.174"

  client_certificate     = "${file("~/.kube/client-cert.pem")}"
  client_key             = "${file("~/.kube/client-key.pem")}"
  cluster_ca_certificate = "${file("~/.kube/cluster-ca-cert.pem")}"
}

I want to do the same, using the same parameters as my kubernetes provider, which is already up and working:

provider "kubernetes" {
    load_config_file = "false"

    host = module.cluster.host
    client_certificate = module.cluster.client_certificate
    client_key = module.cluster.client_key
    cluster_ca_certificate = module.cluster.cluster_ca_certificate
}

provider "k8s" {
    load_config_file = "false"

    host = module.cluster.host
    client_certificate = module.cluster.client_certificate
    client_key = module.cluster.client_key
    cluster_ca_certificate = module.cluster.cluster_ca_certificate
}
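
For context, the cluster module's outputs come straight from the cluster resource, so they are only known once the cluster has been created. A purely hypothetical sketch of what such outputs look like (using a GKE cluster resource just as an example; the exact resource shouldn't matter):

output "host" {
  value = "https://${google_container_cluster.cluster.endpoint}"
}

output "client_certificate" {
  # master_auth values are base64-encoded PEM, so decode them for the provider
  value = base64decode(google_container_cluster.cluster.master_auth[0].client_certificate)
}

output "client_key" {
  value = base64decode(google_container_cluster.cluster.master_auth[0].client_key)
}

output "cluster_ca_certificate" {
  value = base64decode(google_container_cluster.cluster.master_auth[0].cluster_ca_certificate)
}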

But the k8s provider doesn't seem to pick up these credentials; on terraform plan I get this error:

Error: Failed to configure: unable to load in-cluster configuration, KUBERNETES_SERVICE_HOST and KUBERNETES_SERVICE_PORT must be defined

  on main.tf line 34, in provider "k8s":
  34: provider "k8s" {

I'm not running Terraform in a cluster, so I expect the k8s provider to use the credentials passed in the provider block rather than trying to load in-cluster configuration.

I'm using banzaicloud/k8s v0.8.3 and Terraform v0.13.0.

I've found that the above succeeds as long as the cluster already exists at apply time, which differs from the behaviour of the kubernetes provider. Is this expected? Could I request a feature change here?
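
In the meantime, one way to make sure the cluster exists first would be a targeted apply (a sketch of a possible workaround, not something I've confirmed with this provider):

# create the cluster first so its outputs are known values
terraform apply -target=module.cluster

# then apply the rest of the configuration, including anything using the k8s provider
terraform apply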

It looks like this is fixed by #64 (released as v0.8.4).

Have you seen this working with v0.8.4 @jeroenj?

Yes, colleagues of mine spun up a new cluster yesterday and hit the problem described in this issue (which we were already tracking). They then upgraded to v0.8.4, which seems to have resolved it.

I'll be spinning up another cluster with this later this week too, and will confirm then that it works as expected.

Thank you very much for sharing this!

Closing this now as fixed by #64.