bloomberg/goldpinger

goldpinger pod CrashLoopBackOff

vishwavijayverma opened this issue · 14 comments

Hi,
I am trying to run goldpinger in minikube, but I am unable to bring up the goldpinger pod. The pod is crashing. Please find below my Kubernetes version:
Client Version: version.Info{Major:"1", Minor:"9", GitVersion:"v1.9.6", GitCommit:"9f8ebd171479bec0ada837d7ee641dec2f8c6dd1", GitTreeState:"clean", BuildDate:"2018-03-21T15:21:50Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.0", GitCommit:"fc32d2f3698e36b93322a3465f63a14e9f0eaead", GitTreeState:"clean", BuildDate:"2018-03-26T16:44:10Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"}

I built the Docker image and pushed it to my registry.

Thanks
Vijay

Hi! Could you please provide the reason for the CrashLoopBackOff?
$ kubectl describe pod your_pod_name -n namespace_of_the_pod
$ kubectl logs your_pod_name -n namespace_of_the_pod

Now the pod is running with the image docker.io/gokulpch/goldpinger:1.0.2, but I am unable to reach the UI.

I am running the command below:
kubectl port-forward --namespace=default goldpinger-g4kpk 30080

and getting this error:
1222 18:23:00.012230 77265 portforward.go:331] an error occurred forwarding 30080 -> 30080: error forwarding port 30080 to pod d76b3c048b71de2dac395134c2edf19e581014747503277a3a475c40f5e5ffac, uid : exit status 1: 2018/12/22 12:53:00 socat[12960] E connect(5, AF=2 127.0.0.1:30080, 16): Connection refused
I am using this cluster role binding:

apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: default-view
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: view
subjects:
  - kind: ServiceAccount
    name: default
    namespace: default

Vishwass-MacBook-Pro:turnero vishwa$ kubectl get pods
NAME READY STATUS RESTARTS AGE
goldpinger-g4kpk 1/1 Running 0 8m
guilded-platypus-mysql-844cc969f5-swbfw 1/1 Running 0 2d
hello-minikube-6c47c66d8-8kcv6 1/1 Running 2 231d
hello-node-658d8f6754-dfkl4 1/1 Running 2 231d
Vishwass-MacBook-Pro:turnero vishwa$ kubectl describe pod goldpinger-g4kpk
Name: goldpinger-g4kpk
Namespace: default
Node: minikube/192.168.64.2
Start Time: Sat, 22 Dec 2018 18:14:44 +0530
Labels: app=goldpinger
controller-revision-hash=1947430255
pod-template-generation=1
version=1.0.2
Annotations:
Status: Running
IP: 172.17.0.10
Controlled By: DaemonSet/goldpinger
Containers:
goldpinger:
Container ID: docker://f509eca3aaa019ce7dae0d2b9ad6d0f04b5d4fa940b324a958d32299caf83007
Image: docker.io/gokulpch/goldpinger:1.0.2
Image ID: docker-pullable://gokulpch/goldpinger@sha256:3f126f73d44687a026eaee322290d59aff6f4c749e0b061382b6adc0ea6e1b16
Port: 80/TCP
State: Running
Started: Sat, 22 Dec 2018 18:14:46 +0530
Ready: True
Restart Count: 0
Environment:
HOST: 0.0.0.0
PORT: 80
HOSTNAME: (v1:spec.nodeName)
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-bchnc (ro)
Conditions:
Type Status
Initialized True
Ready True
PodScheduled True
Volumes:
default-token-bchnc:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-bchnc
Optional: false
QoS Class: BestEffort
Node-Selectors:
Tolerations: node.kubernetes.io/disk-pressure:NoSchedule
node.kubernetes.io/memory-pressure:NoSchedule
node.kubernetes.io/not-ready:NoExecute
node.kubernetes.io/unreachable:NoExecute
Events:
Type Reason Age From Message


Normal SuccessfulMountVolume 8m kubelet, minikube MountVolume.SetUp succeeded for volume "default-token-bchnc"
Normal Pulled 8m kubelet, minikube Container image "docker.io/gokulpch/goldpinger:1.0.2" already present on machine
Normal Created 8m kubelet, minikube Created container
Normal Started 8m kubelet, minikube Started container

Could you please try kubectl port-forward --namespace=default goldpinger-g4kpk 30080:80? Looks like goldpinger is listening on port 80, but you are trying to forward ports 30080->30080.
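For reference, the format is local_port:pod_port, so something like this should work (pod name taken from your output above; the local port is arbitrary):

$ kubectl port-forward --namespace=default goldpinger-g4kpk 30080:80
$ # then open http://localhost:30080 in a browser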

Thanks, Freund
Yes, you were correct, I was doing the port forward wrong. Now it's working.

Thanks
Vijay

Hi,
Goldpinger is unable to see other pods.
I am running goldpinger locally on minikube; the installation works fine as expected, but the graph is not displaying other connected pods.

I also applied the configuration below.


apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: default
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: view
subjects:
  - kind: ServiceAccount
    name: default
    namespace: default

Hi!
Each vertex of the graph is a node, not a pod. How many nodes do you have in your cluster ($ kubectl get nodes)? Minikube (AFAIK) provides only one node by default, which means that your graph contains only one vertex (that node).

UPDATED: I was wrong regarding the vertices of a graph. In general 1 vertex == 1 goldpinger pod.

Hi Kaduev,

I am not sure; you are correct that I have only one node. But in the video, https://youtu.be/DSFxRz_0TU4, he scaled the deployment, which brought up more pods. Am I understanding this wrong?

Vishwass-MacBook-Pro:turnero vishwa$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
minikube Ready master 232d v1.10.0

Spoiler: I'm not the author of this package, I just started using it a few days ago 😄

Sorry, I was unclear. There will be as many vertices in the graph as there are goldpinger pods up and running. I don't see any reason to have more than one goldpinger pod per node (except for testing in minikube), and that's why my comment above is wrong (I'll update it so it doesn't mislead others).

Regarding your issue – how many running goldpinger pods do you have?
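For example, you can count them by label (assuming they carry the default app=goldpinger label):

$ kubectl get pods -l app=goldpinger --all-namespaces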


Hey, sorry if it wasn't clear in my demo - I initially deployed the DaemonSet but that produced a not-so-exciting graph: https://youtu.be/DSFxRz_0TU4?t=571

So in order to show a bit better what the UI looks like, I then used another deployment to produce more pods.

The deployment I used was literally just this:

apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: goldpinger-deploy
  namespace: default
spec:
  replicas: 0
  selector:
    matchLabels:
      app: goldpinger
      deployment: yup
  template:
    metadata:
      labels:
        app: goldpinger
        deployment: yup
    spec:
      containers:
        - name: goldpinger
          env:
            - name: HOST
              value: "0.0.0.0"
            - name: PORT
              value: "80"
            # injecting real hostname will make for easier to understand graphs/metrics
            - name: HOSTNAME
              valueFrom:
                fieldRef:
                  fieldPath: spec.nodeName
          image: "docker.io/mynamespace-replaceme/goldpinger:1.0.0"
          ports:
            - containerPort: 80
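If it helps, applying and then scaling it looks roughly like this (the file name and replica count are just placeholders):

$ kubectl apply -f goldpinger-deploy.yaml
$ kubectl scale deployment goldpinger-deploy --replicas=5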

@kaduev13 thanks so much for helping out others with the issues - you're awesome, that's a real open source community spirit! 👍

@seeker89 thank you and all of the authors for open-sourcing the project 😄 I was looking for something like this and even wanted to implement it myself, but then found goldpinger and am happy with it 👍

Hi Kaduev,

My friend, I am running only a single pod. @seeker89, would you suggest how I can get a complete graph? I am really excited about your project and I am sure I will bring a lot of advocacy. Please help me.

@vishwavijayverma

  1. goldpinger pods are trying to communicate with each other;
  2. if you have only one goldpinger pod, then it won't ping any other goldpinger pods (because there are no such pods);
  3. in order to build a nice graph (>1 vertices) you need >1 running goldpinger pods;
  4. there are a lot of different ways to create >1 goldpinger pods:
    • DaemonSet – a workload controller that creates a pod on each node. It's very useful in production, because it allows you to run goldpinger on each node and check connectivity between them. I guess this is the preferred way to deploy goldpinger in a cluster. However, it doesn't apply in your case, because you have only one node, and a DaemonSet controller will create only one pod for that node.
    • Deployment – a workload controller that creates and manages ReplicaSets (and as a result allows you to easily scale pods, roll back deployments, etc.). It's not very useful for goldpinger in production, but you can use this controller to test goldpinger even if you have only one node. An example deployment configuration was given by @seeker89 in this comment. You need to adjust it a bit (point it at your docker registry, etc.), deploy it and then scale it to some value >1.
    • Pod – the minimal deployable unit. For testing purposes you can just create a lot of goldpinger pods using the same pod definition (but changing the name of each pod).

My suggestion is to pick the Deployment controller for one-node testing purposes, but then switch to the DaemonSet approach in production.

@kaduev13 thanks for your suggestion. I am now able to see a graph with all the goldpinger vertices. But my question is that my goldpinger pods are not showing any vertices for the other existing running pods.
Vishwass-MacBook-Pro:goldpinger vishwa$ kubectl get pods
NAME READY STATUS RESTARTS AGE
goldpinger-cp24m 1/1 Running 0 14h
goldpinger-deploy-79df8f94df-5pk5z 1/1 Running 0 8h
goldpinger-deploy-79df8f94df-99j7n 1/1 Running 0 8h
goldpinger-deploy-79df8f94df-9rllc 1/1 Running 0 8h
goldpinger-deploy-79df8f94df-ddq4f 1/1 Running 0 8h
goldpinger-deploy-79df8f94df-dt6tn 1/1 Running 0 8h
goldpinger-deploy-79df8f94df-mmqbh 1/1 Running 0 8h
goldpinger-deploy-79df8f94df-qmjgd 1/1 Running 0 8h
goldpinger-deploy-79df8f94df-s4s6z 1/1 Running 0 8h
goldpinger-deploy-79df8f94df-v82g9 1/1 Running 0 8h
goldpinger-deploy-79df8f94df-zqttz 1/1 Running 0 14h
hello-minikube-6c47c66d8-8kcv6 1/1 Running 2 233d
hello-node-658d8f6754-dfkl4 1/1 Running 2 233d
ungaged-pike-redis-master-0 1/1 Running 0 2d
ungaged-pike-redis-slave-5bb8546f68-5bqnz 1/1 Running 0 1d
ungaged-pike-redis-slave-5bb8546f68-5gbdm 1/1 Running 0 1d
ungaged-pike-redis-slave-5bb8546f68-9zs9v 1/1 Running 0 1d
ungaged-pike-redis-slave-5bb8546f68-fx5k9 1/1 Running 0 2d
ungaged-pike-redis-slave-5bb8546f68-n5rz4 1/1 Running 0 1d
ungaged-pike-redis-slave-5bb8546f68-nhbzg 1/1 Running 0 1d
ungaged-pike-redis-slave-5bb8546f68-q2vhn 1/1 Running 0 1d
ungaged-pike-redis-slave-5bb8546f68-srcpv 1/1 Running 0 1d
ungaged-pike-redis-slave-5bb8546f68-tdqx5 1/1 Running 0 1d
ungaged-pike-redis-slave-5bb8546f68-xs52m 1/1 Running 0 1d

I can't see the ungaged-pike-redis-slave pods in the graph; only the goldpinger pods are showing, even though I applied the RBAC rule as well.

"Note, that you will also need to add an RBAC rule to allow Goldpinger to list other pods. If you're just playing around, you can consider a view-all default rule:"


apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: default
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: view
subjects:
  - kind: ServiceAccount
    name: default
    namespace: default
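A rough sketch of applying and sanity-checking the binding (the file name is assumed):

$ kubectl apply -f goldpinger-rbac.yaml
$ kubectl auth can-i list pods --as=system:serviceaccount:default:default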

I suppose that goldpinger was not designed to ping other, non-goldpinger pods (check this). The purpose of this tool is to check connectivity between different nodes, so in general it does not make sense to ping non-goldpinger pods. Just deploy a goldpinger pod on each node and let them communicate with each other.

@seeker89 please correct me if I'm wrong

Anyway, everything is possible.
Spoiler: I would not recommend following the instructions below.
According to the documentation:

Goldpinger works by asking Kubernetes for pods with particular labels (app=goldpinger). While you can deploy Goldpinger in a variety of ways, it works very nicely as a DaemonSet out of the box.

It means that you can add the app=goldpinger label to any (even non-goldpinger) pod and the other goldpinger pods will ping this marked pod. After that you need to implement the /ping method on your own and make it compatible with goldpinger's /ping endpoint.
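For example, labelling one of the redis pods from the list above would make the goldpinger pods start pinging it (purely illustrative; the pings will fail unless that pod actually serves a compatible /ping endpoint):

$ kubectl label pod ungaged-pike-redis-slave-5bb8546f68-5bqnz app=goldpinger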