Goldpinger helm deployment issue
parabos2001-dev opened this issue · 14 comments
I have tried deploying Goldpinger. Below is what shows up in the logs when I deploy the Helm chart for Goldpinger.
```
2019/10/17 16:45:05 Metrics setup - see /metrics
2019/10/17 16:45:05 Goldpinger version: 1.0.2 build: Tue Dec 18 21:42:45 UTC 2018
2019/10/17 16:45:05 Kubeconfig not specified, trying to use in cluster config
2019/10/17 16:45:05 Added the static middleware
2019/10/17 16:45:05 Added the prometheus middleware
2019/10/17 16:45:05 All good, starting serving the API
2019/10/17 16:45:05 Serving goldpinger at http://[::]:80
2019/10/17 16:45:24 Shutting down...
2019/10/17 16:45:24 Stopped serving goldpinger at http://[::]:80
```
Please suggest how I can get past this issue and get Goldpinger working. I started looking into it yesterday.
Thanks.
Could you link to the source of the Helm chart you used (in the version you used)? Thanks.
https://github.com/helm/charts.git is where I forked the Helm charts and templates from.
The Goldpinger image is bloomberg/goldpinger:2.0.0-vendor from Docker Hub.
Let me know if that sounds correct.
Also, we are on Kubernetes version 1.10, not the latest version.
What's weird about the logs you posted is that it seems to start OK and then shut down cleanly some 20 seconds later. I'm wondering if the pod is being evicted or something like that.
Could you kubectl describe the daemon set and the pod in question?
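For reference, commands along these lines should surface probe failures or eviction events; the daemonset name, label, and namespace here are assumptions based on the stable chart's defaults, so substitute whatever your release actually created:

```shell
# Describe the Goldpinger daemonset (name/namespace assumed)
kubectl describe daemonset goldpinger -n default

# List its pods, then describe one to see the Events section
kubectl get pods -n default -l app=goldpinger
kubectl describe pod <goldpinger-pod-name> -n default
```

The `Events` section at the bottom of the pod description is the part to look at, since it records probe failures, kills, and evictions.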
So here's our problem:
Liveness probe failed: HTTP probe failed with statuscode: 404
Looks like the Helm chart has a misconfigured liveness probe. It should be pointing to /healthz.
I don't see any liveness probe YAML in the Helm chart templates directory.
How do I go ahead and get that corrected?
https://github.com/helm/charts/blob/b5c294a4e4864bc26e9c0303af4522e0f7cc382d/stable/goldpinger/templates/daemonset.yaml#L50-L57 the two probes are both pointing at the wrong url.
Changed as shown below:

```yaml
livenessProbe:
  httpGet:
    path: /healthz
    port: 80
readinessProbe:
  httpGet:
    path: /healthz
    port: 80
```
Still failing on the probes...
I've attached the output of kubectl describe pod:
goldpinger describe pod.txt
Hmmm, this looks about right. Here is an example that we use for demos etc.
Perhaps it's just not starting quickly enough, and you need to adjust the delays like in the link above. Would you like to give it a try?
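Adjusting the delays could look something like this; the exact values below are assumptions, not taken from the chart, so tune them to however long the pod takes to start on your cluster:

```yaml
livenessProbe:
  httpGet:
    path: /healthz
    port: 80
  # Wait before the first probe so the container has time to start.
  # These numbers are illustrative defaults, not the chart's values.
  initialDelaySeconds: 20
  periodSeconds: 5
readinessProbe:
  httpGet:
    path: /healthz
    port: 80
  initialDelaySeconds: 20
  periodSeconds: 5
```

If the kubelet probes before the server is listening, the probe fails with the kind of error you saw, and enough consecutive failures will get the container killed and restarted.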
I did try another version of goldpinger and that worked fine.
I will give the one above a try as well.
My understanding was that Goldpinger will discover all the pods and display their connectivity in the UI. Is that inaccurate? Right now I see only the nodes and the connectivity of the Goldpinger pods.
If you followed something like this setup, then you are going to have one Goldpinger pod per node, and see that on the UI.
The point is to verify that a pod on every node can communicate with a pod on any other node, hence one pod per host via a daemonset.
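To check what the daemonset is reporting in the UI, port-forwarding one of the pods is a quick option; the pod name and namespace here are placeholders for whatever your deployment created:

```shell
# Forward local port 8080 to a Goldpinger pod, then browse to
# http://localhost:8080 to see the per-node connectivity graph.
kubectl port-forward <goldpinger-pod-name> 8080:80 -n default
```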
OK, that helps. I did use 1.4.0, where it was successful in deploying the daemonset without any issue.
I will look into getting the latest version running again. Thanks for the help.