Proxmox node metrics viewable via exporter target IP endpoint but not via Prometheus
corticalstack opened this issue · 4 comments
Hi,
I'm new to Prometheus and the pve-exporter. I'm running the following stack in Portainer:
```yaml
version: '3'

volumes:
  prometheus-data:
    driver: local
  grafana-data:
    driver: local

services:
  prometheus:
    image: prom/prometheus:latest
    container_name: prometheus
    ports:
      - "9090:9090"
    volumes:
      - /etc/prometheus:/config
      - prometheus-data:/prometheus
    restart: unless-stopped
    command:
      - "--config.file=/etc/prometheus/prometheus.yml"

  grafana:
    image: grafana/grafana:latest
    container_name: grafana
    ports:
      - "3000:3000"
    volumes:
      - grafana-data:/var/lib/grafana
    restart: unless-stopped

  pve-exporter:
    image: prompve/prometheus-pve-exporter
    container_name: pve-exporter
    ports:
      - "9221:9221"
    restart: unless-stopped
    volumes:
      - /etc/prometheus/pve.yml:/etc/prometheus/pve.yml
```
With the following prometheus.yml:

```yaml
global:
  scrape_interval: 15s # By default, scrape targets every 15 seconds.

  # Attach these labels to any time series or alerts when communicating with
  # external systems (federation, remote storage, Alertmanager).
  # external_labels:
  #   monitor: 'codelab-monitor'

# A scrape configuration containing exactly one endpoint to scrape:
# Here it's Prometheus itself.
scrape_configs:
  # The job name is added as a label `job=<job_name>` to any timeseries scraped from this config.
  - job_name: 'prometheus'
    # Override the global default and scrape targets from this job every 5 seconds.
    scrape_interval: 5s
    static_configs:
      - targets: ['localhost:9090']

  - job_name: 'pve'
    scrape_interval: 5s
    static_configs:
      - targets:
          - 192.168.1.3  # Proxmox VE node.
    metrics_path: /pve
    params:
      module: [default]
      cluster: ['1']
      node: ['1']
    relabel_configs:
      - source_labels: [__address__]
        target_label: __param_target
      - source_labels: [__param_target]
        target_label: instance
      - target_label: __address__
        replacement: 192.168.1.12:9221  # PVE exporter.
```
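For reference, those relabel_configs copy each static target into the `target` URL parameter and then point the actual scrape at the exporter, so the request Prometheus ends up making should be roughly equivalent to this sketch (using the addresses from above):

```sh
# Approximate request the 'pve' job issues after relabeling
curl 'http://192.168.1.12:9221/pve?target=192.168.1.3&module=default&cluster=1&node=1'
```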
And pve.yml:

```yaml
default:
  user: prometheus@pam
  token_name: "exporter"
  token_value: "ff......."
  verify_ssl: false
```
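For context, this corresponds to an API token created on the Proxmox side. A minimal sketch of how such a token might be set up (the PVEAuditor role and `--privsep 0` are assumptions for illustration, not taken from this thread):

```sh
# Hypothetical setup on the PVE node: user, read-only role, API token
pveum user add prometheus@pam
pveum acl modify / --users prometheus@pam --roles PVEAuditor
pveum user token add prometheus@pam exporter --privsep 0
```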
The prometheus/grafana/exporter containers are all up and running, status green. I can see metrics via the exporter endpoint in the browser at:
http://192.168.1.12:9221/pve?target=192.168.1.3
For example:

```
# HELP pve_up Node/VM/CT-Status is online/running
# TYPE pve_up gauge
pve_up{id="cluster/legends"} 1.0
pve_up{id="node/danu"} 1.0
pve_up{id="node/dagda"} 1.0
pve_up{id="qemu/110"} 1.0
pve_up{id="qemu/120"} 1.0
pve_up{id="qemu/130"} 1.0
pve_up{id="qemu/500"} 1.0
pve_up{id="qemu/7000"} 0.0
pve_up{id="qemu/8000"} 0.0
# HELP pve_disk_size_bytes Size of storage device
# TYPE pve_disk_size_bytes gauge
pve_disk_size_bytes{id="qemu/110"} 5.36870912e+010
pve_disk_size_bytes{id="qemu/120"} 4.2412802048e+010
```
But in Prometheus itself I don't see any target for the pve job, only the prometheus one, at:
http://192.168.1.12:9090/targets
Grateful for any help, thank you.
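For completeness, the same thing can be checked over the HTTP API (same Prometheus address as above):

```sh
# List the active scrape targets as JSON
curl http://192.168.1.12:9090/api/v1/targets
```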
Just to add, I've tried using the container name and port (pve-exporter:9221) in place of the IP:port in prometheus.yml, with no success in seeing the pve target in Prometheus.
To summarise: the exporter scrapes the PVE data successfully, but Prometheus isn't pulling it in.
If Prometheus doesn't report the target, then it isn't running with the configuration you expect. You can check that by clicking the Configuration entry in the Status menu of the Prometheus web UI.
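The loaded configuration can also be dumped over the HTTP API, and the file on disk can be validated with promtool before reloading; a sketch using the addresses and paths from this thread:

```sh
# Dump the configuration Prometheus is actually running with
curl http://192.168.1.12:9090/api/v1/status/config

# Validate the host-side config file with the promtool bundled in the image
docker run --rm -v /etc/prometheus:/etc/prometheus --entrypoint promtool \
  prom/prometheus:latest check config /etc/prometheus/prometheus.yml
```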
Closed, since this isn't an issue specific to prometheus-pve-exporter.
Yep, my bad, fixed. I had incorrectly mounted the prometheus volume, so Prometheus was just loading its default prometheus.yml instead of my own from the host.
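Concretely: the compose file above binds the host's /etc/prometheus to /config inside the container, while --config.file points at /etc/prometheus/prometheus.yml, so Prometheus never saw the custom file. A minimal sketch of the fix is to make the mount target match the flag:

```yaml
  prometheus:
    image: prom/prometheus:latest
    volumes:
      - /etc/prometheus:/etc/prometheus  # was /etc/prometheus:/config
      - prometheus-data:/prometheus
    command:
      - "--config.file=/etc/prometheus/prometheus.yml"
```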
Thanks again for both your reply and the awesome exporter.