Following guestbook-go setup - PANIC: dial tcp: lookup redis-slave on 172.21.0.10:53: no such host
davidhadas opened this issue · 6 comments
Following the guestbook-go setup instructions, the Go app fails because it tries to resolve "redis-slave", while the manifests create a service named "redis-replica".
See #437
The image still uses https://github.com/kubernetes/kubernetes/blob/e8c167a115ec662726904265d17f75a6d79d78d8/examples/guestbook-go/_src/main.go#L79, which results in an attempt to resolve "redis-slave".
This is reflected in the log (see below) and results in the message "Waiting for database connection..." on the browser page.
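To illustrate the failure mode, here is a minimal, self-contained sketch (not the repository's main.go): the published image effectively dials a hard-coded "redis-slave" host, and since no Service with that name exists in the cluster, the DNS lookup fails and the handler panics, which negroni's Recovery middleware reports as the PANIC line in the log below.

// Illustrative sketch only, assuming a hard-coded "redis-slave:6379" target
// as in the linked main.go; with the current manifests the Services are named
// redis-master/redis-replica, so the lookup fails with "no such host".
package main

import "net"

func main() {
	conn, err := net.Dial("tcp", "redis-slave:6379")
	if err != nil {
		// Mirrors the HandleError(...) panic visible in the stack trace below.
		panic(err)
	}
	defer conn.Close()
}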
Details
Following the instructions, all pods are running:
$ kubectl get pods
NAME                  READY   STATUS    RESTARTS   AGE
guestbook-2sfpn       1/1     Running   0          29m
guestbook-4l7cr       1/1     Running   0          29m
guestbook-czf7n       1/1     Running   0          29m
redis-master-wxt7w    1/1     Running   0          32m
redis-replica-7tvv5   1/1     Running   0          32m
redis-replica-c6qtb   1/1     Running   0          32m
$ kubectl get services
NAME            TYPE           CLUSTER-IP       EXTERNAL-IP   PORT(S)          AGE
guestbook       LoadBalancer   172.21.14.199    **********    3000:30122/TCP   28m
redis-master    ClusterIP      172.21.157.148   <none>        6379/TCP         32m
redis-replica   ClusterIP      172.21.173.21    <none>        6379/TCP         32m
connecting via browser shows "Waiting for database connection..."
$ kubectl logs guestbook-2sfpn
[negroni] listening on :3000
[negroni] Started GET /
[negroni] Completed 200 OK in 891.696µs
[negroni] Started GET /style.css
[negroni] Completed 200 OK in 847.749µs
[negroni] Started GET /favicon.ico
[negroni] Completed 404 Not Found in 214.807µs
[negroni] Started GET /lrange/guestbook
[negroni] PANIC: dial tcp: lookup redis-slave on 172.21.0.10:53: no such host
goroutine 20 [running]:
github.com/codegangsta/negroni.(*Recovery).ServeHTTP.func1(0x7fe02a89aa68, 0xc82006cb80, 0xc8200dad20)
/go/src/github.com/codegangsta/negroni/recovery.go:34 +0xe9
panic(0x7a8c60, 0xc820094dc0)
/usr/local/go/src/runtime/panic.go:426 +0x4e9
main.HandleError(0x6e2780, 0xc8200dbe80, 0x7fe02a89af58, 0xc820094dc0, 0x0, 0x0)
/go/src/github.com/GoogleCloudPlatform/kubernetes/examples/guestbook-go/_src/main.go:71 +0x59
main.ListRangeHandler(0x7fe02a89aa68, 0xc82006cb80, 0xc8200ea540)
/go/src/github.com/GoogleCloudPlatform/kubernetes/examples/guestbook-go/_src/main.go:38 +0x187
net/http.HandlerFunc.ServeHTTP(0x8a8278, 0x7fe02a89aa68, 0xc82006cb80, 0xc8200ea540)
/usr/local/go/src/net/http/server.go:1618 +0x3a
github.com/gorilla/mux.(*Router).ServeHTTP(0xc820094460, 0x7fe02a89aa68, 0xc82006cb80, 0xc8200ea540)
/go/src/github.com/gorilla/mux/mux.go:103 +0x270
github.com/codegangsta/negroni.Wrap.func1(0x7fe02a89aa68, 0xc82006cb80, 0xc8200ea540, 0xc8200db8a0)
/go/src/github.com/codegangsta/negroni/negroni.go:41 +0x50
github.com/codegangsta/negroni.HandlerFunc.ServeHTTP(0xc8200dadc0, 0x7fe02a89aa68, 0xc82006cb80, 0xc8200ea540, 0xc8200db8a0)
/go/src/github.com/codegangsta/negroni/negroni.go:24 +0x44
github.com/codegangsta/negroni.middleware.ServeHTTP(0x7fe02a899758, 0xc8200dadc0, 0xc8200dae40, 0x7fe02a89aa68, 0xc82006cb80, 0xc8200ea540)
/go/src/github.com/codegangsta/negroni/negroni.go:33 +0xaa
github.com/codegangsta/negroni.(middleware).ServeHTTP-fm(0x7fe02a89aa68, 0xc82006cb80, 0xc8200ea540)
/go/src/github.com/codegangsta/negroni/negroni.go:33 +0x53
github.com/codegangsta/negroni.(*Static).ServeHTTP(0xc820073770, 0x7fe02a89aa68, 0xc82006cb80, 0xc8200ea540, 0xc8200db780)
/go/src/github.com/codegangsta/negroni/static.go:49 +0x2e0
github.com/codegangsta/negroni.middleware.ServeHTTP(0x7fe02a899730, 0xc820073770, 0xc8200dae20, 0x7fe02a89aa68, 0xc82006cb80, 0xc8200ea540)
/go/src/github.com/codegangsta/negroni/negroni.go:33 +0xaa
github.com/codegangsta/negroni.(middleware).ServeHTTP-fm(0x7fe02a89aa68, 0xc82006cb80, 0xc8200ea540)
/go/src/github.com/codegangsta/negroni/negroni.go:33 +0x53
github.com/codegangsta/negroni.(*Logger).ServeHTTP(0xc820090078, 0x7fe02a89aa68, 0xc82006cb80, 0xc8200ea540, 0xc8200db740)
/go/src/github.com/codegangsta/negroni/logger.go:25 +0x1f4
github.com/codegangsta/negroni.middleware.ServeHTTP(0x7fe02a899708, 0xc820090078, 0xc8200dae00, 0x7fe02a89aa68, 0xc82006cb80, 0xc8200ea540)
/go/src/github.com/codegangsta/negroni/negroni.go:33 +0xaa
github.com/codegangsta/negroni.(middleware).ServeHTTP-fm(0x7fe02a89aa68, 0xc82006cb80, 0xc8200ea540)
/go/src/github.com/codegangsta/negroni/negroni.go:33 +0x53
github.com/codegangsta/negroni.(*Recovery).ServeHTTP(0xc8200dad20, 0x7fe02a89aa68, 0xc82006cb80, 0xc8200ea540, 0xc8200db720)
/go/src/github.com/codegangsta/negroni/recovery.go:45 +0x75
github.com/codegangsta/negroni.middleware.ServeHTTP(0x7fe02a8996e0, 0xc8200dad20, 0xc8200dade0, 0x7fe02a89aa68, 0xc82006cb80, 0xc8200ea540)
/go/src/github.com/codegangsta/negroni/negroni.go:33 +0xaa
github.com/codegangsta/negroni.(*Negroni).ServeHTTP(0xc8200737d0, 0x7fe02a89a9d0, 0xc820075e10, 0xc8200ea540)
/go/src/github.com/codegangsta/negroni/negroni.go:73 +0x122
net/http.serverHandler.ServeHTTP(0xc820096500, 0x7fe02a89a9d0, 0xc820075e10, 0xc8200ea540)
/usr/local/go/src/net/http/server.go:2081 +0x19e
net/http.(*conn).serve(0xc820096580)
/usr/local/go/src/net/http/server.go:1472 +0xf2e
created by net/http.(*Server).Serve
/usr/local/go/src/net/http/server.go:2137 +0x44e
I faced the same issue. It appears the build/push of the latest image to GCR was never triggered: there is only one image, tagged v3, and it is outdated (ref).
Ran into this too. As a workaround, you can create a redis-slave service that points to the redis replicas.
{ "kind":"Service", "apiVersion":"v1", "metadata":{ "name":"redis-slave", "labels":{ "app":"redis", "role":"replica" } }, "spec":{ "ports": [ { "port":6379, "targetPort":"redis-server" } ], "selector":{ "app":"redis", "role":"replica" } } }
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Reopen this issue with /reopen
- Mark this issue as fresh with /remove-lifecycle rotten
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.