Set up monitoring with Prometheus and Grafana.
- Run the sample server: `npm install` and `node server`; then perform requests to the application, especially to the path `/checkout` (see the instrumentation sketch below)
- Run Prometheus: see below
- Visit your running Prometheus and run queries
- Run Grafana: see below
- Add the Prometheus data source (Url: `http://localhost:9090`, Access: `direct`)
- Import the `grafana-dashboard.json` dashboard
- Create your own dashboard from the Prometheus queries
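The metric names used in the queries below (`http_request_duration_ms_*`, `nodejs_external_memory_bytes`) are the kind produced by the `prom-client` library. The snippet below is only a sketch of how the sample server could expose them, not the repo's actual `server.js`; the Express setup, port 8080, service label, and bucket boundaries are assumptions.

```
// Sketch only: how http_request_duration_ms and the default Node.js metrics
// could be produced with prom-client. Port, service name and buckets are assumed.
const express = require('express');
const promClient = require('prom-client');

const app = express();

// Default metrics include nodejs_external_memory_bytes, queried further down.
promClient.collectDefaultMetrics();

// Histogram backing the http_request_duration_ms_{bucket,count,sum} series.
const httpRequestDurationMs = new promClient.Histogram({
  name: 'http_request_duration_ms',
  help: 'Duration of HTTP requests in ms',
  labelNames: ['service', 'route', 'method', 'code'],
  buckets: [50, 100, 300, 500, 1000] // assumed; 100 and 300 match the Apdex query
});

// Time every request and record it with route, method and status code labels.
app.use((req, res, next) => {
  const start = Date.now();
  res.on('finish', () => {
    httpRequestDurationMs.observe(
      { service: 'example-nodejs', route: req.path, method: req.method, code: String(res.statusCode) },
      Date.now() - start
    );
  });
  next();
});

app.get('/checkout', (req, res) => res.send('checkout ok'));

// Endpoint scraped by Prometheus.
app.get('/metrics', async (req, res) => {
  res.set('Content-Type', promClient.register.contentType);
  res.send(await promClient.register.metrics());
});

app.listen(8080);
```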
- Docker

Modify `/prometheus-data/prometheus.yml`: replace `192.168.0.10` with your own host machine's IP.

Host machine IP address: `ifconfig | grep 'inet 192' | awk '{ print $2 }'`
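For orientation, the scrape configuration in that file is expected to point Prometheus at the sample server's `/metrics` endpoint on your host machine. A minimal sketch of such a config is shown below; the job name, scrape interval, and port are assumptions, and the file shipped in `/prometheus-data` may differ.

```
global:
  scrape_interval: 5s                   # how often to pull /metrics (assumed value)

scrape_configs:
  - job_name: 'example-nodejs'          # assumed job name
    static_configs:
      - targets: ['192.168.0.10:8080']  # your host machine's IP and the sample server's port
```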
```
docker run -p 9090:9090 -v "$(pwd)/prometheus-data":/prometheus-data prom/prometheus -config.file=/prometheus-data/prometheus.yml
```

Open Prometheus: http://localhost:9090
Example queries:

Error rate in the range [0, 1]: number of 5xx requests / total number of requests

```
sum(increase(http_request_duration_ms_count{code=~"^5..$"}[1m])) / sum(increase(http_request_duration_ms_count[1m]))
```

Requests per minute, broken down by service, route, method, and status code:

```
sum(rate(http_request_duration_ms_count[1m])) by (service, route, method, code) * 60
```
Apdex score approximation, with a 100 ms target and a 300 ms tolerated response time:

```
(
  sum(rate(http_request_duration_ms_bucket{le="100"}[1m])) by (service)
  +
  sum(rate(http_request_duration_ms_bucket{le="300"}[1m])) by (service)
) / 2 / sum(rate(http_request_duration_ms_count[1m])) by (service)
```

Note that we divide the sum of both buckets by 2. The reason is that the histogram buckets are cumulative: the `le="100"` bucket is also contained in the `le="300"` bucket, and dividing by 2 corrects for that. (Adapted from the Prometheus docs.)
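With made-up numbers as a sanity check: if over the last minute 70 requests per second completed within 100 ms, 90 requests per second within 300 ms (cumulative, so this includes the first group), and the total rate was 100 requests per second, the expression evaluates to (70 + 90) / 2 / 100 = 0.8, which matches the standard Apdex formula (satisfied + tolerating/2) / total: (70 + 20/2) / 100 = 0.8.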
95th percentile response time:

```
histogram_quantile(0.95, sum(rate(http_request_duration_ms_bucket[1m])) by (le, service, route, method))
```

Median response time:

```
histogram_quantile(0.5, sum(rate(http_request_duration_ms_bucket[1m])) by (le, service, route, method))
```

Average response time:

```
avg(rate(http_request_duration_ms_sum[1m]) / rate(http_request_duration_ms_count[1m])) by (service, route, method, code)
```
Memory usage, in megabytes:

```
avg(nodejs_external_memory_bytes / 1024 / 1024) by (service)
```
Reload the Prometheus configuration (necessary when you have modified prometheus-data):

```
curl -X POST http://localhost:9090/-/reload
```
States of active alerts: `pending`, `firing`
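For illustration, an alert that moves through these states can be defined on any of the queries above. The sketch below uses the Prometheus 1.x rule-file syntax (matching the single-dash `-config.file` flag used earlier); the alert name, the 200 ms threshold, and the annotation text are hypothetical, and the rule file would need to be listed under `rule_files` in `prometheus.yml`. Prometheus 2.x uses a YAML rule format instead.

```
# Hypothetical alert rule (Prometheus 1.x syntax)
ALERT APIHighMedianResponseTime
  IF histogram_quantile(0.5, sum(rate(http_request_duration_ms_bucket[1m])) by (le, service, route, method)) > 200
  FOR 1m
  ANNOTATIONS {
    summary = "Median response time above 200 ms on {{ $labels.service }} {{ $labels.route }}"
  }
```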
```
docker run -i -p 3000:3000 grafana/grafana
```

Open Grafana: http://localhost:3000

Username: `admin`
Password: `admin`
Create a Grafana data source with these settings:

- name: DS_PROMETHEUS
- type: prometheus
- url: http://localhost:9090
- access: direct

Or use this curl request:

```
curl 'http://admin:admin@localhost:3000/api/datasources' -H 'Content-Type: application/json;charset=UTF-8' -H 'Accept: application/json, text/plain, */*' --data-binary '{"name":"DS_PROMETHEUS","type":"prometheus","url":"http://localhost:9090","access":"direct","jsonData":{"keepCookies":[]},"secureJsonFields":{}}' --compressed
```
Grafana dashboard to import: `/grafana-dashboard.json`

Or use this curl request:

```
curl 'http://admin:admin@localhost:3000/api/dashboards/import' -H 'Accept-Encoding: gzip, deflate' -H 'Content-Type: application/json;charset=UTF-8' -H 'Accept: application/json, text/plain, */*' --data-binary '%{copy and paste grafana-dashboard.json}' --compressed
```
This example is sponsored by Trace by RisingStack.