Prometheus discovery annotations not set on NATS (JetStream) deployment
JohanLindvall opened this issue · 6 comments
What version were you using?
Helm chart 1.1.5
Values.yaml:
global:
  image:
    pullSecretNames:
      - redacted
    registry: redacted
nats:
  config:
    cluster:
      enabled: true
      replicas: 3
    jetstream:
      enabled: true
      fileStore:
        pvc:
          size: 10Gi
    websocket:
      enabled: true
  podTemplate:
    topologySpreadConstraints:
      kubernetes.io/hostname:
        maxSkew: 1
        whenUnsatisfiable: DoNotSchedule
  container:
    env:
      GOMEMLIMIT: 2500MiB
    merge:
      resources:
        requests:
          cpu: "1"
          memory: 3Gi
        limits:
          memory: 3Gi
  natsBox:
    container:
      image:
        repository: nats-box
  reloader:
    image:
      repository: nats-server-config-reloader
  promExporter:
    enabled: true
    image:
      repository: prometheus-nats-exporter
Prometheus doesn't discover the promExporter container, because the pod doesn't have the appropriate prometheus.io annotations. See https://github.com/nats-io/k8s/pull/77/files, where they were added to the old chart but never carried over to this one.
What environment was the server running in?
Kubernetes, see above
Is this defect reproducible?
Yes
Given the capability you are leveraging, describe your expectation?
I expect the metrics endpoint to be automatically discovered by Prometheus
Given the expectation, what is the defect you are observing?
The metrics endpoint isn't discovered.
Which operator still uses those annotations?
From kube-prometheus-stack:
The prometheus operator does not support annotation-based discovery of services, using the PodMonitor or ServiceMonitor CRD in its place
Sorry for the very slow reply. We are not using the Prometheus operator. We are using a plain old Prometheus deployment, configured according to https://github.com/prometheus/prometheus/blob/main/documentation/examples/prometheus-kubernetes.yml#L267
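For reference, that config discovers pods via the prometheus.io/* annotations; the relevant job looks roughly like this (abridged from the linked example, with comments added):

- job_name: kubernetes-pods
  kubernetes_sd_configs:
    - role: pod
  relabel_configs:
    # only keep pods annotated with prometheus.io/scrape: "true"
    - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
      action: keep
      regex: true
    # use prometheus.io/path as the metrics path, if present
    - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_path]
      action: replace
      target_label: __metrics_path__
      regex: (.+)
    # use prometheus.io/port as the scrape port, if present
    - source_labels: [__address__, __meta_kubernetes_pod_annotation_prometheus_io_port]
      action: replace
      regex: ([^:]+)(?::\d+)?;(\d+)
      replacement: $1:$2
      target_label: __address__

So without those annotations on the pod, nothing gets scraped.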
I recommend adding the labels you need for your Prometheus deployment via:
podTemplate:
  merge:
    metadata:
      labels:
        your-label: here
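Your Prometheus scrape config would then need to select pods by that label instead of by annotation, for example (a sketch using the placeholder label above; you'd still need to point it at the exporter port, 7777 by default):

- job_name: nats
  kubernetes_sd_configs:
    - role: pod
  relabel_configs:
    # keep only pods carrying the label added via podTemplate.merge above;
    # Prometheus turns the "-" in the label name into "_" in the meta label
    - source_labels: [__meta_kubernetes_pod_label_your_label]
      action: keep
      regex: here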
I ran into this today. I needed to add this to my values.yaml:
promExporter:
  enabled: true
podTemplate:
  merge:
    metadata:
      annotations:
        prometheus.io/scrape: "true"
        prometheus.io/path: "/metrics"
        prometheus.io/port: "7777"
Without the annotations, Prometheus doesn't automatically scrape the NATS pods. It would be better if the NATS Helm chart applied these annotations itself whenever promExporter is enabled.
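Something along these lines in the chart's pod template would cover it (only a sketch; I haven't checked the exact values path the chart uses for the exporter port):

{{- if .Values.promExporter.enabled }}
annotations:
  prometheus.io/scrape: "true"
  prometheus.io/path: "/metrics"
  # assumes the chart exposes the exporter port as .Values.promExporter.port
  prometheus.io/port: {{ .Values.promExporter.port | quote }}
{{- end }}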
I faced the same issue. At the very least, the documentation should state that in addition to

promExporter:
  enabled: true

the Kubernetes annotations have to be set as well.