⏰ Promster - Measure metrics from Hapi/Express servers with Prometheus 🚦

Promster is a Prometheus exporter for Node.js servers written with Express or Hapi.

❤️ Hapi · Express · Prettier · Jest · ESLint · Lerna · Prometheus 🙏


❯ Package Status

  • @promster/hapi
  • @promster/express
  • @promster/server
  • @promster/metrics

❯ Why another Prometheus exporter for Express and Hapi?

These packages grew out of observations and experiences I have had with other exporters, whose shortcomings I tried to fix.

  1. 🏎 Use process.hrtime() for high-resolution real-time request timings (converted from nanoseconds)
    • process.hrtime() calls libuv's uv_hrtime and does not require a system call, unlike new Date()
  2. ⚔️ Allow normalization of all pre-defined label values
  3. 🖥 Expose Garbage Collection metrics among other Node.js process metrics by default
  4. 🚨 Ship a built-in server to expose metrics quickly (on a different port) while also allowing users to integrate with existing servers
  5. 📊 Define two metrics: a histogram for buckets and a summary for percentiles, for performant graphs in e.g. Grafana
  6. 👩‍👩‍👧 One library to integrate with Hapi, Express and potentially more (managed as a mono repository)
  7. 🦄 Allow customization of labels while sorting them internally before reporting
  8. 🐼 Expose Prometheus client on Express locals or Hapi app to easily allow adding more app metrics
  9. ⏰ Allow multiple accuracies in seconds (default), milliseconds or both

❯ Installation

This is a mono repository maintained using lerna. It currently contains four packages: a metrics package, a Hapi and an Express integration, and a server that exposes the metrics for you if you do not want to do that via your existing server.

Depending on the preferred integration use:

yarn add @promster/express or npm i @promster/express --save

or

yarn add @promster/hapi or npm i @promster/hapi --save

❯ Documentation

Promster has to be set up with your server, either as an Express middleware or a Hapi plugin. You can expose the gathered metrics via a small built-in server or through your own.

The following metrics are exposed:

  • up: an indication if the server is started: either 0 or 1
  • nodejs_gc_runs_total: total garbage collections count
  • nodejs_gc_pause_seconds_total: time spent in garbage collection
  • nodejs_gc_reclaimed_bytes_total: number of bytes reclaimed by garbage collection
  • http_request_duration_percentiles_seconds: a Prometheus summary with request time percentiles in seconds (percentiles default to [ 0.5, 0.9, 0.99 ])
  • http_request_duration_buckets_seconds: a Prometheus histogram with request time buckets in seconds (defaults to [ 0.05, 0.1, 0.3, 0.5, 0.8, 1, 1.5, 2, 3, 5, 10 ])

In addition, with each HTTP request metric the following default labels are measured: method, status_code and path. You can configure more labels (see below). With all garbage collection metrics a gc_type label with one of: unknown, scavenge, mark_sweep_compact, scavenge_and_mark_sweep_compact, incremental_marking, weak_phantom or all will be recorded.

If you pass { accuracies: ['ms'] } you will get millisecond-based metrics instead:

  • http_request_duration_percentiles_milliseconds: a Prometheus summary with request time percentiles in milliseconds (defaults to [ 0.5, 0.9, 0.99 ])
  • http_request_duration_buckets_milliseconds: a Prometheus histogram with request time buckets in milliseconds (defaults to [ 50, 100, 300, 500, 800, 1000, 1500, 2000, 3000, 5000, 10000 ])

You can also opt out of either the Prometheus summary or the histogram by passing only the metric type you want to keep, e.g. { metricTypes: ['histogram'] } or { metricTypes: ['summary'] }. In addition you may also pass { accuracies: ['ms', 's'] }. This can be useful if you need to migrate your dashboards from one accuracy to the other but cannot afford to lose metric ingestion in the meantime. These two options should give fine enough control over which accuracies and metric types are ingested into your Prometheus cluster.
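
For example, a minimal sketch of such an options object passed to the Express integration documented below (the chosen values are purely illustrative):

const app = require('./your-express-app');
const { createMiddleware } = require('@promster/express');

// Illustrative options: record only the histogram, in both accuracies,
// e.g. while migrating dashboards from milliseconds to seconds.
const options = {
  metricTypes: ['histogram'],
  accuracies: ['ms', 's'],
};

app.use(createMiddleware({ app, options }));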

@promster/express

const app = require('./your-express-app');
const { createMiddleware } = require('@promster/express');

// Note: This should be done BEFORE other routes
// Pass 'app' as middleware parameter to additionally expose Prometheus under 'app.locals'
app.use(createMiddleware({ app, options }));

Passing the app into the createMiddleware call attaches the internal prom-client to your Express app's locals. This may come in handy, as later on you can:

// Create e.g. a custom counter
const counter = new app.locals.Prometheus.Counter({
  name: 'metric_name',
  help: 'metric_help',
});

// to later increment it
counter.inc();

@promster/hapi

const { createPlugin } = require('@promster/hapi');
const app = require('./your-hapi-app');

app.register(createPlugin({ options }));

Here you do not have to pass the app into the createPlugin call, as the internal prom-client will be exposed on the Hapi server:

// Create e.g. a custom counter
const counter = new app.Prometheus.Counter({
  name: 'metric_name',
  help: 'metric_help',
});

// to later increment it
counter.inc();

When creating either the Express middleware or the Hapi plugin the following options can be passed (see the sketch after this list):

  • labels: an Array<String> of custom labels to be configured on all metrics mentioned above
  • metricTypes: an Array<String> containing one of histogram, summary or both
  • metricNames: an object containing custom names for one or all metrics with keys of up, countOfGcs, durationOfGc, reclaimedInGc, percentilesInMilliseconds, bucketsInMilliseconds, percentilesInSeconds, bucketsInSeconds
  • accuracies: an Array<String> containing one of ms, s or both
  • getLabelValues: a function receiving req and res on each request. It has to return an object with keys of the configured labels above and the respective values
  • normalizePath: a function called on each request to normalize the request's path
  • normalizeStatusCode: a function called on each request to normalize the response's status code (e.g. to get 2xx, 5xx codes instead of detailed ones)
  • normalizeMethod: a function called on each request to normalize the request's method (to e.g. hide it fully)
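
As a sketch only, here is how a few of these options might be combined on the Express middleware; the tenant label, the header it is read from, and the normalization rules are made up for illustration and assume each function receives the raw value described above:

const app = require('./your-express-app');
const { createMiddleware } = require('@promster/express');

app.use(
  createMiddleware({
    app,
    options: {
      // 'tenant' is a hypothetical custom label used only for this sketch.
      labels: ['tenant'],
      getLabelValues: (req, res) => ({
        tenant: req.headers['x-tenant-id'] || 'unknown',
      }),
      // Collapse status codes into classes such as '2xx' or '5xx'.
      normalizeStatusCode: statusCode => `${String(statusCode)[0]}xx`,
      // Replace numeric path segments to keep label cardinality low.
      normalizePath: path => path.replace(/\/\d+/g, '/:id'),
    },
  })
);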

@promster/server

In some cases you might want to expose the gathered metrics through a separate server. This is useful, for instance, to avoid having GET /metrics expose internal server and business metrics to the outside world. For this you can use @promster/server:

const { createServer } = require('@promster/server');

// NOTE: The port defaults to `7788`.
createServer({ port: 8888 }).then(server =>
  console.log(`@promster/server started on port 8888.`)
);
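
The resolved server can also be closed when your application shuts down. A minimal sketch, assuming the resolved value behaves like a standard Node.js http.Server:

const { createServer } = require('@promster/server');

createServer({ port: 8888 }).then(server => {
  console.log(`@promster/server started on port 8888.`);

  // Close the metrics server on shutdown so the process can exit cleanly.
  process.on('SIGTERM', () => {
    server.close(() => process.exit(0));
  });
});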

@promster/metrics

You can use the metrics package to expose the gathered metrics through your existing server. To do so just:

const app = require('./your-express-app');
const { getSummary, getContentType } = require('@promster/express');

app.use('/metrics', (req, res) => {
  res.statusCode = 200;

  res.setHeader('Content-Type', getContentType());
  res.end(getSummary());
});

This may slightly depend on the server you are using but should be roughly the same for all.
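
For a Hapi server, a comparable route could look like the sketch below, assuming a Hapi 17+ server and that @promster/hapi re-exports the same getSummary and getContentType helpers:

const { getSummary, getContentType } = require('@promster/hapi');
const app = require('./your-hapi-app');

app.route({
  method: 'GET',
  path: '/metrics',
  // Serve the gathered metrics with the Prometheus content type.
  handler: (request, h) => h.response(getSummary()).type(getContentType()),
});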

The @promster/metrics package has two other potentially useful exports: Prometheus (the actual client) and defaultRegister, which is the default register of the client.
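
As a rough sketch, these two exports might be used to register an additional, purely illustrative gauge:

const { Prometheus, defaultRegister } = require('@promster/metrics');

// Illustrative custom gauge registered on the client's default registry.
const queueSizeGauge = new Prometheus.Gauge({
  name: 'job_queue_size',
  help: 'Number of jobs currently waiting in the queue.',
  registers: [defaultRegister],
});

queueSizeGauge.set(42);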