No Docker image for version v0.5.27 has been published?
What happened?
I set up a cluster using kOps v1.29.0 that tried to use the following image for aws-iam-authenticator:
602401143452.dkr.ecr.us-west-2.amazonaws.com/amazon/aws-iam-authenticator:v0.5.27
The pods were stuck, unable to pull the image, and `kubectl describe pod` showed the following events:
Normal BackOff 52s (x2 over 79s) kubelet Back-off pulling image "602401143452.dkr.ecr.us-west-2.amazonaws.com/amazon/aws-iam-authenticator:v0.5.27"
Warning Failed 52s (x2 over 79s) kubelet Error: ImagePullBackOff
Normal Pulling 40s (x3 over 80s) kubelet Pulling image "602401143452.dkr.ecr.us-west-2.amazonaws.com/amazon/aws-iam-authenticator:v0.5.27"
Warning Failed 40s (x3 over 79s) kubelet Failed to pull image "602401143452.dkr.ecr.us-west-2.amazonaws.com/amazon/aws-iam-authenticator:v0.5.27": rpc error: code = NotFound desc = failed to pull and unpack image "602401143452.dkr.ecr.us-west-2.amazonaws.com/amazon/aws-iam-authenticator:v0.5.27": failed to resolve reference "602401143452.dkr.ecr.us-west-2.amazonaws.com/amazon/aws-iam-authenticator:v0.5.27": 602401143452.dkr.ecr.us-west-2.amazonaws.com/amazon/aws-iam-authenticator:v0.5.27: not found
Warning Failed 40s (x3 over 79s) kubelet Error: ErrImagePull
If I edit the DaemonSet and change the image to the previous version, 602401143452.dkr.ecr.us-west-2.amazonaws.com/amazon/aws-iam-authenticator:v0.5.21,
then everything works fine.
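For reference, the workaround above can be sketched with `kubectl set image` (the `kube-system` namespace, the DaemonSet name, and the container name below are assumptions; match them to your cluster's manifest):

```shell
# Last tag that was actually published, per the report above.
REGISTRY="602401143452.dkr.ecr.us-west-2.amazonaws.com"
IMAGE="${REGISTRY}/amazon/aws-iam-authenticator:v0.5.21"
echo "${IMAGE}"

# Hypothetical invocation; namespace and container name are guesses:
# kubectl -n kube-system set image daemonset/aws-iam-authenticator \
#   aws-iam-authenticator="${IMAGE}"
```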
I think the image publishing for that release failed.
goreleaser threw an error in the repo's release pipeline when trying to publish:
• publishing
• docker images
• pushing image=602401143452.dkr.ecr.us-west-2.amazonaws.com/amazon/aws-iam-authenticator:v0.5.27-amd64
⨯ release failed after 5m19s error=docker images: failed to publish artifacts: failed to push 602401143452.dkr.ecr.us-west-2.amazonaws.com/amazon/aws-iam-authenticator:v0.5.27-amd64 after 0 tries: failed to push 602401143452.dkr.ecr.us-west-2.amazonaws.com/amazon/aws-iam-authenticator:v0.5.27-amd64: exit status 1: no basic auth credentials
The push refers to repository [602401143452.dkr.ecr.us-west-2.amazonaws.com/amazon/aws-iam-authenticator]
https://github.com/kubernetes-sigs/aws-iam-authenticator/actions/runs/8854199394/job/24316751369
What you expected to happen?
To be able to pull the image and use it for the current 0.5 release.
Anything else we need to know?
No response
Installation tooling
kOps
AWS IAM Authenticator server Version
v0.5.27
Client information
- OS/arch: `Ubuntu 22.04`
- kubernetes client & version: Client = `1.30.1` Server = `1.29.5`
- authenticator client & version: `v0.5.27`
Kubernetes API Version
v1.29.5
aws-iam-authenticator YAML manifest
No response
kube-apiserver YAML manifest
No response
aws-iam-authenticator logs
No response
@dims By any chance, do you know anyone who could look into the image promotion failure? Thanks!
can you try 602401143452.dkr.ecr.us-east-2.amazonaws.com/eks/authenticator:v0.5.27 instead?
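A quick way to check whether the suggested tag actually exists before editing any manifests is to inspect its manifest; `docker manifest inspect` exits non-zero when the tag is missing, which is the same NotFound failure mode seen with the us-west-2 v0.5.27 tag. The login step is an assumption, since this ECR account requires authentication for pulls:

```shell
# Candidate image suggested above, in the us-east-2 "eks" repository.
ALT_IMAGE="602401143452.dkr.ecr.us-east-2.amazonaws.com/eks/authenticator:v0.5.27"
echo "${ALT_IMAGE}"

# Network-dependent check, commented out here:
# aws ecr get-login-password --region us-east-2 | \
#   docker login --username AWS --password-stdin 602401143452.dkr.ecr.us-east-2.amazonaws.com
# docker manifest inspect "${ALT_IMAGE}" && echo "tag exists"
```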
@nnmin-aws The kOps project is affected by this missing image, as the current release references it.
I would appreciate some feedback on how to move forward, because this missing image will break things for many of our users.
This has been pending for quite some time; do you know if there is an ETA for the fix? We are using kOps as well, and as @hakman suggested, we are forced to find workarounds for the missing release image.
Apologies for the inconvenience. We will have a new release today. Please note that v0.5.x is only for Kubernetes versions <= 1.23 and will no longer be released, since 1.23 has reached end of life. Please pick up v0.6.x. Thank you!
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle rotten
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten