kubernetes-retired/kube-aws

Config template error on init

USA-RedDragon opened this issue · 7 comments

I seem to be having an issue during the init process.

Running the command (redacted for privacy):

kube-aws init \
  --cluster-name=kube-aws-test \
  --external-dns-name=kube.ourcompany.com \
  --hosted-zone-id=Z53861T2NU46E2 \
  --region=us-east-1 \
  --availability-zone=us-east-1a \
  --key-name=kubernetes \
  --kms-key-arn="arn:aws:kms:us-east-1:123456789012:key/a1111aa-11a1-1a11-111a-a1a11111aa1a" \
  --s3-uri=s3://kube-aws-test/

gives the output of:

Error: error exec-ing default config template: Error exec-ing default config template: template: cluster.yaml:1658:57: executing "cluster.yaml" at <.Config.AdminAPIEndpoint.DNSName>: can't evaluate field Config in type config.InitialConfig

in v0.15.0. Downgrading to v0.14.3 resolved the issue.

Also got this error.

Error: error exec-ing default config template: Error exec-ing default config template: template: cluster.yaml:1658:57: executing "cluster.yaml" at <.Config.AdminAPIEndpoint.DNSName>: can't evaluate field Config in type config.InitialConfig

Same error here. I know very little of this code structure and only looked at it quickly, so take the following with caution, but my impression is that in https://github.com/kubernetes-incubator/kube-aws/blob/master/builtin/files/cluster.yaml.tmpl at line 1658, ".Config.AdminAPIEndpoint.DNSName" should be replaced with ".ExternalDNSName" (rough sketch of the change below).
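As a sketch only (the surrounding YAML key and comments are my assumption, not copied verbatim from the template), the change around line 1658 of cluster.yaml.tmpl would look something like:

# before: fails when the template is rendered against config.InitialConfig
externalDNSName: "{{.Config.AdminAPIEndpoint.DNSName}}"

# after: InitialConfig exposes ExternalDNSName at the top level
externalDNSName: "{{.ExternalDNSName}}"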

I guess ".Config.AdminAPIEndpoint.DNSName" is valid (only) in plugins templates (e.g. https://github.com/kubernetes-incubator/kube-aws/blob/master/builtin/files/plugins/dashboard/plugin.yaml) where the templater has a full "Config" struct, while cluster.yaml.tmpl is handled with only the InitialConfig struct.

=> Actually, I checked that this change fixes the issue and passes the tests (and got stuck with the PR, see #1824, sorry for the mess).

Confirmed that v0.14.3 fixes this issue.

We have another urgent fix for the v0.15.x branch, so we'll put both of these out together in a v0.15.1 release.

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

This was fixed by @davidmccormick.