Prometheus exporter for MySQL server metrics.
Supported versions:
- MySQL >= 5.6
- MariaDB >= 10.1
NOTE: Not all collection methods are supported on MySQL/MariaDB < 5.6.
CREATE USER 'exporter'@'localhost' IDENTIFIED BY 'XXXXXXXX' WITH MAX_USER_CONNECTIONS 3;
GRANT PROCESS, REPLICATION CLIENT, SELECT ON *.* TO 'exporter'@'localhost';
NOTE: It is recommended to set a max connection limit for the user to avoid overloading the server with monitoring scrapes under heavy load. This is not supported on all MySQL/MariaDB versions; for example, MariaDB 10.1 (provided with Ubuntu 18.04) does not support this feature.
make
Running using ~/.my.cnf:
./mysqld_exporter <flags>
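A minimal sketch of the ~/.my.cnf file, assuming the exporter reads the standard [client] section (the host and credentials below are placeholders):
[client]
user = exporter
password = XXXXXXXX
host = localhost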
Running in multi exporter mode can be useful for monitoring MySQL deployments in the cloud, such as RDS:
./mysqld_exporter --export-multi-hosts --config-multi-hosts=<path to ini config file>
Sample config file for multi exporter mode:
[client]
user = foo
password = foo123
[client.server1]
user = bar
password = bar123
[client.server2]
user = bar1
password = bar1123
On the Prometheus side, you can set up a scrape config as follows:
- job_name: mysql # To get metrics about the mysql exporter’s targets
  static_configs:
    - targets:
        # All RDS hostnames to monitor. The targets here are also used to look up the client name in the multi host config.
        - server1:3306
        - server2:3306
  relabel_configs:
    - source_labels: [__address__]
      target_label: __param_target
    - source_labels: [__param_target]
      target_label: instance
    - target_label: __address__
      # The mysqld_exporter host:port
      replacement: localhost:9104
Example format for flags for version > 0.10.0:
--collect.auto_increment.columns
--no-collect.auto_increment.columns
Example format for flags for version <= 0.10.0:
-collect.auto_increment.columns
-collect.auto_increment.columns=[true|false]
Name | MySQL Version | Description |
---|---|---|
collect.auto_increment.columns | 5.1 | Collect auto_increment columns and max values from information_schema. |
collect.binlog_size | 5.1 | Collect the current size of all registered binlog files |
collect.engine_innodb_status | 5.1 | Collect from SHOW ENGINE INNODB STATUS. |
collect.engine_tokudb_status | 5.6 | Collect from SHOW ENGINE TOKUDB STATUS. |
collect.global_status | 5.1 | Collect from SHOW GLOBAL STATUS (Enabled by default) |
collect.global_variables | 5.1 | Collect from SHOW GLOBAL VARIABLES (Enabled by default) |
collect.info_schema.clientstats | 5.5 | If running with userstat=1, set to true to collect client statistics. |
collect.info_schema.innodb_metrics | 5.6 | Collect metrics from information_schema.innodb_metrics. |
collect.info_schema.innodb_tablespaces | 5.7 | Collect metrics from information_schema.innodb_sys_tablespaces. |
collect.info_schema.innodb_cmp | 5.5 | Collect InnoDB compressed tables metrics from information_schema.innodb_cmp. |
collect.info_schema.innodb_cmpmem | 5.5 | Collect InnoDB buffer pool compression metrics from information_schema.innodb_cmpmem. |
collect.info_schema.processlist | 5.1 | Collect thread state counts from information_schema.processlist. |
collect.info_schema.processlist.min_time | 5.1 | Minimum time a thread must be in each state to be counted. (default: 0) |
collect.info_schema.query_response_time | 5.5 | Collect query response time distribution if query_response_time_stats is ON. |
collect.info_schema.replica_host | 5.6 | Collect metrics from information_schema.replica_host_status. |
collect.info_schema.tables | 5.1 | Collect metrics from information_schema.tables. |
collect.info_schema.tables.databases | 5.1 | The list of databases to collect table stats for, or '*' for all. |
collect.info_schema.tablestats | 5.1 | If running with userstat=1, set to true to collect table statistics. |
collect.info_schema.schemastats | 5.1 | If running with userstat=1, set to true to collect schema statistics |
collect.info_schema.userstats | 5.1 | If running with userstat=1, set to true to collect user statistics. |
collect.perf_schema.eventsstatements | 5.6 | Collect metrics from performance_schema.events_statements_summary_by_digest. |
collect.perf_schema.eventsstatements.digest_text_limit | 5.6 | Maximum length of the normalized statement text. (default: 120) |
collect.perf_schema.eventsstatements.limit | 5.6 | Limit the number of events statements digests by response time. (default: 250) |
collect.perf_schema.eventsstatements.timelimit | 5.6 | Limit how old the 'last_seen' events statements can be, in seconds. (default: 86400) |
collect.perf_schema.eventsstatementssum | 5.7 | Collect metrics from performance_schema.events_statements_summary_by_digest summed. |
collect.perf_schema.eventswaits | 5.5 | Collect metrics from performance_schema.events_waits_summary_global_by_event_name. |
collect.perf_schema.file_events | 5.6 | Collect metrics from performance_schema.file_summary_by_event_name. |
collect.perf_schema.file_instances | 5.5 | Collect metrics from performance_schema.file_summary_by_instance. |
collect.perf_schema.indexiowaits | 5.6 | Collect metrics from performance_schema.table_io_waits_summary_by_index_usage. |
collect.perf_schema.tableiowaits | 5.6 | Collect metrics from performance_schema.table_io_waits_summary_by_table. |
collect.perf_schema.tablelocks | 5.6 | Collect metrics from performance_schema.table_lock_waits_summary_by_table. |
collect.perf_schema.replication_group_members | 5.7 | Collect metrics from performance_schema.replication_group_members. |
collect.perf_schema.replication_group_member_stats | 5.7 | Collect metrics from performance_schema.replication_group_member_stats. |
collect.perf_schema.replication_applier_status_by_worker | 5.7 | Collect metrics from performance_schema.replication_applier_status_by_worker. |
collect.slave_status | 5.1 | Collect from SHOW SLAVE STATUS (Enabled by default) |
collect.slave_hosts | 5.1 | Collect from SHOW SLAVE HOSTS |
collect.heartbeat | 5.1 | Collect from heartbeat. |
collect.heartbeat.database | 5.1 | Database from where to collect heartbeat data. (default: heartbeat) |
collect.heartbeat.table | 5.1 | Table from where to collect heartbeat data. (default: heartbeat) |
collect.heartbeat.utc | 5.1 | Use UTC for timestamps of the current server (pt-heartbeat is called with --utc ). (default: false) |
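For example, on a recent exporter version (> 0.10.0) the binlog size collector can be enabled and the information_schema.tables collector explicitly disabled using the flag forms shown above:
./mysqld_exporter --collect.binlog_size --no-collect.info_schema.tables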
Name | Description |
---|---|
config.my-cnf | Path to .my.cnf file to read MySQL credentials from. (default: ~/.my.cnf ) |
log.level | Logging verbosity (default: info) |
exporter.lock_wait_timeout | Set a lock_wait_timeout on the connection to avoid long metadata locking. (default: 2 seconds) |
exporter.log_slow_filter | Add a log_slow_filter to avoid slow query logging of scrapes. NOTE: Not supported by Oracle MySQL. |
web.listen-address | Address to listen on for web interface and telemetry. |
web.telemetry-path | Path under which to expose metrics. |
version | Print the version information. |
export-multi-hosts | Enable multi exporter mode. Useful for monitoring MySQL deployments in the cloud, such as RDS.
config-multi-hosts | Path to the ini config file used in multi exporter mode.
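For example, to serve metrics on a non-default port and path (the values below are placeholders):
./mysqld_exporter --web.listen-address=":9200" --web.telemetry-path="/mysql-metrics"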
The MySQL server's data source name
must be set via the DATA_SOURCE_NAME
environment variable.
The format of this variable is described at https://github.com/go-sql-driver/mysql#dsn-data-source-name.
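For example (credentials and host are placeholders; see the linked DSN documentation for the full syntax):
export DATA_SOURCE_NAME='exporter:XXXXXXXX@(localhost:3306)/'
./mysqld_exporter <flags>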
If the MySQL server supports SSL, you may need to specify a CA truststore to verify the server's chain of trust. You may also need to specify an SSL keypair for the client side of the SSL connection. To configure mysqld_exporter to use a custom CA certificate, add the following to the MySQL cnf file:
ssl-ca=/path/to/ca/file
To specify the client SSL keypair, add the following to the cnf file:
ssl-key=/path/to/ssl/client/key
ssl-cert=/path/to/ssl/client/cert
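Putting this together, a cnf file carrying both credentials and the TLS material might look like the sketch below (all values are placeholders; the options are assumed to live in the standard [client] section):
[client]
user = exporter
password = XXXXXXXX
ssl-ca=/path/to/ca/file
ssl-key=/path/to/ssl/client/key
ssl-cert=/path/to/ssl/client/cert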
You can deploy this exporter using the prom/mysqld-exporter Docker image.
For example:
docker network create my-mysql-network
docker pull prom/mysqld-exporter
1. Single exporter mode
docker run -d \
  -p 9104:9104 \
  --network my-mysql-network \
  prom/mysqld-exporter \
  --config.my-cnf=<path_to_cnf>
2. Multi exporter mode
docker run -d \
  -p 9104:9104 \
  --network my-mysql-network \
  prom/mysqld-exporter \
  --export-multi-hosts \
  --config-multi-hosts=<path_to_multi_exporter_cnf>
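In either mode the cnf file passed to the flag must be readable from inside the container, for example via a bind mount. A minimal sketch for single exporter mode (the host and container paths are arbitrary placeholders, not paths the image requires):
docker run -d \
  -p 9104:9104 \
  --network my-mysql-network \
  -v /path/on/host/.my.cnf:/cfg/.my.cnf:ro \
  prom/mysqld-exporter \
  --config.my-cnf=/cfg/.my.cnf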
With collect.heartbeat
enabled, mysqld_exporter will scrape replication delay
measured by heartbeat mechanisms. The reference heartbeat
implementation supported is pt-heartbeat.
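For example, to collect heartbeat data written by pt-heartbeat (the database and table flags below simply restate the defaults listed in the collector table above):
./mysqld_exporter --collect.heartbeat \
  --collect.heartbeat.database=heartbeat \
  --collect.heartbeat.table=heartbeat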
The mysqld_exporter
will expose all metrics from enabled collectors by default. This is the recommended way to collect metrics to avoid errors when comparing metrics of different families.
For advanced use, the mysqld_exporter
can be passed an optional list of collectors to filter metrics. The collect[]
parameter may be used multiple times. In the Prometheus configuration you can use this syntax under the scrape config:
params:
  collect[]:
    - foo
    - bar
This can be useful for having different Prometheus servers collect specific metrics from targets.
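For example, a dedicated scrape job that only requests the global status and InnoDB metrics collectors might look like the sketch below (the job name and target are placeholders, and the collector names are assumed to match the collect.* flag names without the collect. prefix):
- job_name: mysql-filtered
  params:
    collect[]:
      - global_status
      - info_schema.innodb_metrics
  static_configs:
    - targets:
        - localhost:9104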
There are some sample rules available in example.rules.