Image Tag Mismatch between k8s.gcr.io/guestbook:v3 and registry.k8s.io/guestbook:v3
highb opened this issue · 7 comments
When the image tag was switched over to the new registry.k8s.io
container registry, it appears that the actual tags didn't get copied over quite right. When I deploy the registry.k8s.io/guestbook:v3
image to my cluster, the front-end fails because it attempts to look up redis-slave
and gets no such host
when the manifests are configured to set up the service as redis-replica
. This leads me to believe that a new version of the go guestbook needs to be pushed and then the tags in the manifests need to be updated.
Related commit:
538d302
I checked the 2 images digest are identical.
➜ ~ docker pull k8s.gcr.io/guestbook:v3
v3: Pulling from guestbook
...
Digest: sha256:8f333d5b72677d216b4eb046d655aef7be9f1380e06ca1c63dfa9564034e7e26
Status: Downloaded newer image for k8s.gcr.io/guestbook:v3
k8s.gcr.io/guestbook:v3
➜ ~ registry.k8s.io/guestbook:v3
zsh: no such file or directory: registry.k8s.io/guestbook:v3
➜ ~ docker pull registry.k8s.io/guestbook:v3
v3: Pulling from guestbook
...
Digest: sha256:8f333d5b72677d216b4eb046d655aef7be9f1380e06ca1c63dfa9564034e7e26
Status: Downloaded newer image for registry.k8s.io/guestbook:v3
registry.k8s.io/guestbook:v3
/triage needs-information
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity,
lifecycle/stale
is applied - After 30d of inactivity since
lifecycle/stale
was applied,lifecycle/rotten
is applied - After 30d of inactivity since
lifecycle/rotten
was applied, the issue is closed
You can:
- Mark this issue as fresh with
/remove-lifecycle stale
- Close this issue with
/close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
Closing the issue as there is no update from @highb.
Please feel free to reopen with necessary inputs if you are still facing this issue.
/close
@T-Lakshmi: Closing this issue.
In response to this:
/close
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.