Prometheus metric always shows nodes as 100% unhealthy
surendarmsk1 opened this issue
I have a Grafana dashboard configured with the query below, but it consistently reports a 100% unhealthy status across most of the nodes. I am not sure whether we are hitting an issue where the default 300ms ping check times out. Can someone help explain why this might occur and how to triage it further?
Grafana Query:
```
sum(increase(goldpinger_nodes_health_total{cluster="$cluster", goldpinger_instance="$instance", status="unhealthy"}[15m])) by (goldpinger_instance)
/
(
  sum(increase(goldpinger_nodes_health_total{cluster="$cluster", goldpinger_instance="$instance", status="healthy"}[15m])) by (goldpinger_instance)
  +
  sum(increase(goldpinger_nodes_health_total{cluster="$cluster", goldpinger_instance="$instance", status="unhealthy"}[15m])) by (goldpinger_instance)
)
```
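As a first triage step on my side, I was planning to look at the raw counters instead of the ratio, to check whether any healthy samples are being recorded at all. A minimal sketch, using the same metric, labels, and dashboard variables as the query above:

```
# Raw number of health checks per status over the last 15 minutes,
# broken down by reporting instance.
sum by (goldpinger_instance, status) (
  increase(goldpinger_nodes_health_total{cluster="$cluster", goldpinger_instance="$instance"}[15m])
)
```

If the healthy series stays at zero while unhealthy keeps increasing, the 100% reading in the ratio panel would be consistent with the data rather than a dashboard artifact.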
Repeated warning message in the Goldpinger pod logs:
{"level":"warn","ts":1669303893.1442885,"caller":"goldpinger/pinger.go:151","msg":"Ping returned error","op":"pinger","name":"goldpinger","hostIP":"XX.XX.XX.XX","podIP":"XX.XX.XX.XX","responseTime":0.300629455,"error":"Get "http://XX.XX.XX.XX:8080/ping\": context deadline exceeded"}