JMX HTTP metrics endpoint for Prometheus
Hi,
I have patched the sidecar to export selected JMX metrics over HTTP for Prometheus to scrape, but I don't see any metrics loading at http://localhost:5556/metrics after port-forwarding with `kubectl port-forward -n kafka kafka-0 5556`. Nor can I see my Prometheus operator scraping the target.
There might be something I am not doing right. Can you please point me in the right direction or give some advice?
Best,
shashi
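(For reference, a minimal way to reproduce the check described above, assuming the exporter listens on port 5556 inside the kafka-0 pod:)

```sh
# Forward local port 5556 to port 5556 of the kafka-0 pod
kubectl port-forward -n kafka kafka-0 5556:5556

# In another shell: fetch the first lines of the exporter's metrics page
curl -s http://localhost:5556/metrics | head
```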
Have you looked at the logs of the metrics container?
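(A sketch of how to do that, assuming the sidecar is the `metrics` container shown in the pod template below:)

```sh
# Tail the logs of the metrics sidecar in the kafka-0 pod
kubectl logs -n kafka kafka-0 -c metrics --tail=100
```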
I don't find the metrics container running anywhere, but ... I do see the kafka StatefulSet showing the config including it after I patched:
Name:               kafka
Namespace:          kafka
CreationTimestamp:  Mon, 18 Jun 2018 14:12:55 -0700
Selector:           app=kafka
Labels:             <none>
Annotations:        kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"apps/v1beta2","kind":"StatefulSet","metadata":{"annotations":{},"name":"kafka","namespace":"kafka"},"spec":{"replicas":3,"selector":{"ma...
Replicas:           3 desired | 3 total
Pods Status:        3 Running / 0 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:       app=kafka
  Annotations:  prometheus.io/port=5556
                prometheus.io/scrape=true
  Init Containers:
   init-config:
    Image:      solsson/kafka-initutils@sha256:18bf01c2c756b550103a99b3c14f741acccea106072cd37155c6d24be4edd6e2
    Port:       <none>
    Host Port:  <none>
    Command:
      /bin/bash
      /etc/kafka-configmap/init.sh
    Environment:
      NODE_NAME:      (v1:spec.nodeName)
      POD_NAME:       (v1:metadata.name)
      POD_NAMESPACE:  (v1:metadata.namespace)
    Mounts:
      /etc/kafka from config (rw)
      /etc/kafka-configmap from configmap (rw)
  Containers:
   metrics:
    Image:      solsson/kafka-prometheus-jmx-exporter@sha256:a23062396cd5af1acdf76512632c20ea6be76885dfc20cd9ff40fb23846557e8
    Port:       5556/TCP
    Host Port:  0/TCP
    Command:
      java
      -XX:+UnlockExperimentalVMOptions
      -XX:+UseCGroupMemoryLimitForHeap
      -XX:MaxRAMFraction=1
      -XshowSettings:vm
      -jar
      jmx_prometheus_httpserver.jar
      5556
      /etc/jmx-kafka/jmx-kafka-prometheus.yml
    Limits:
      memory:  120Mi
    Requests:
      cpu:     0
      memory:  60Mi
    Environment:  <none>
    Mounts:
      /etc/jmx-kafka from jmx-config (rw)
   broker:
    Image:       solsson/kafka:1.0.1@sha256:1a4689d49d6274ac59b9b740f51b0408e1c90a9b66d16ad114ee9f7193bab111
    Ports:       9092/TCP, 9094/TCP, 5555/TCP
    Host Ports:  0/TCP, 0/TCP, 0/TCP
    Command:
      ./bin/kafka-server-start.sh
      /etc/kafka/server.properties
    Requests:
      cpu:     100m
      memory:  512Mi
    Readiness:  tcp-socket :9092 delay=0s timeout=1s period=10s #success=1 #failure=3
    Environment:
      KAFKA_LOG4J_OPTS:  -Dlog4j.configuration=file:/etc/kafka/log4j.properties
      JMX_PORT:          5555
    Mounts:
      /etc/kafka from config (rw)
      /var/lib/kafka/data from data (rw)
  Volumes:
   jmx-config:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      jmx-config
    Optional:  false
   configmap:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      broker-config
    Optional:  false
   config:
    Type:    EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:
Volume Claims:
  Name:          data
  StorageClass:  kafka-broker
  Labels:        <none>
  Annotations:   <none>
  Capacity:      300Gi
  Access Modes:  [ReadWriteOnce]
Events:          <none>
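(A quick way to check whether a running pod has actually picked up the new template, assuming the pod is kafka-0: list its containers and look for `metrics` alongside `broker`.)

```sh
# Print the names of the containers defined in the running pod;
# 'metrics' should appear once the pod was recreated from the new template
kubectl get pod kafka-0 -n kafka -o jsonpath='{.spec.containers[*].name}'
```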
Because kafka is a StatefulSet with the default update strategy, you'll have to manually delete the pods for them to come back up with the new template.
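(A minimal sketch of that, assuming the three replicas kafka-0 through kafka-2 and deleting one broker at a time to keep the cluster available:)

```sh
# Delete each broker pod in turn; the StatefulSet controller recreates it
# from the updated template. Wait for readiness before moving on.
for i in 2 1 0; do
  kubectl delete pod "kafka-$i" -n kafka
  sleep 5  # give the controller a moment to create the replacement pod
  kubectl wait --for=condition=Ready "pod/kafka-$i" -n kafka --timeout=300s
done
```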