kubernetes/kops

Support dns=none with Terraform

Opened this issue · 1 comment

/kind feature

1. Describe IN DETAIL the feature/behavior/change you would like to see.

Kops' dns=none doesn't work with Terraform (example prow failure). In the terraform plan output, base64-decoding the aws_launch_template's user_data reveals:

ClusterName: e2e-e2e-kops-scenario-terraform.test-cncf-aws.k8s.io
ConfigServer:
  servers:
  - https://kops-controller.internal.e2e-e2e-kops-scenario-terraform.test-cncf-aws.k8s.io:3988/

With dns=none this DNS record doesn't exist in any DNS zone, nor does nodeup know what it should resolve to in order to add an /etc/hosts entry.

The userdata should instead use the ELB's DNS name, e.g. api-e2e-e2e-kops-aws-arm6-i3jlo6-e2dab7cbf5eb0e5a.elb.eu-central-1.amazonaws.com
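For illustration (reusing the example ELB name above and the kops-controller port from the decoded userdata), the corrected fragment would presumably look something like:

ConfigServer:
  servers:
  - https://api-e2e-e2e-kops-aws-arm6-i3jlo6-e2dab7cbf5eb0e5a.elb.eu-central-1.amazonaws.com:3988/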

2. Feel free to provide a design supporting your feature request.

Implementing this is a bit tricky because the userdata needs to change only for --target=terraform; it will need to conditionally include Terraform interpolation like:

ConfigServer:
  servers:
  - https://${aws_lb.api-e2e-e2e-kops-scenario-terraform-test-cncf-aws-k8s-io.dns_name}:3988/
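
As a rough sketch of how that could be wired up (this is not current kops output; the launch template resource name and template path are illustrative), the generated Terraform could render the userdata with the standard templatefile/base64encode functions so the load balancer's dns_name is substituted at apply time:

resource "aws_launch_template" "nodes-e2e-e2e-kops-scenario-terraform-test-cncf-aws-k8s-io" {
  # ... other launch template arguments unchanged ...

  # aws_lb.<name>.dns_name resolves to the load balancer's DNS name at apply time,
  # so the rendered userdata points at the ELB instead of a nonexistent record.
  user_data = base64encode(templatefile("${path.module}/data/nodes_user_data.tpl", {
    config_server = "https://${aws_lb.api-e2e-e2e-kops-scenario-terraform-test-cncf-aws-k8s-io.dns_name}:3988/"
  }))
}

Only the --target=terraform rendering would need this interpolation, per the point above.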

We should also document this limitation (and perhaps even add validation to update cluster --target terraform) until it is supported.

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale