This article introduces the MySQL exporter, which queries MySQL and exposes its metrics so that this critical resource can be monitored and system downtime avoided.
Since databases are critical resources whose downtime can cause significant financial and reputational losses, monitoring is a must. The MySQL exporter is required to monitor and expose MySQL metrics: it queries MySQL, scrapes the data, and exposes the metrics on a Kubernetes service endpoint that Prometheus can in turn scrape to ingest the time series data. For monitoring MySQL, an external Prometheus exporter can be used, which is maintained by the Prometheus community.
On deployment, this exporter scrapes server status, global variables, replication, InnoDB, and performance schema metrics, among others. It covers the metrics exposed by MySQL's status and diagnostic commands, looping over the fields they expose and collecting data from them. The MySQL exporter gives users crucial and continuous information about the database that is difficult to obtain from MySQL directly.
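Once the exporter is running, you can quickly verify that it is serving metrics before wiring it into Prometheus. A minimal check, assuming the exporter listens on its default port 9104 and the default /metrics path (adjust the host to your environment):

# mysql_up is 1 when the exporter can reach the MySQL server, 0 otherwise
curl -s http://<mysql-exporter-host>:9104/metrics | grep mysql_up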
With the latest version of Prometheus (2.33 as of February 2022), these are the ways to set up a Prometheus exporter:
Supported by Prometheus since the beginning
To set up an exporter in the native way, the Prometheus config needs to be updated to add the target.
A sample configuration:
# scrape_config job
- job_name: mysql-staging
  scrape_interval: 45s
  scrape_timeout: 30s
  metrics_path: "/metrics"
  static_configs:
    - targets:
        - <Mysql exporter endpoint>
This method is applicable to Kubernetes deployments only.
With this approach, a default scrape config can be added to the prometheus.yaml file and an annotation can be added to the exporter service. Prometheus will then automatically start scraping the data from services exposing the mentioned path.
prometheus.yaml:
- job_name: kubernetes-services
  scrape_interval: 15s
  scrape_timeout: 10s
  kubernetes_sd_configs:
    - role: service
  relabel_configs:
    # Example relabel to scrape only endpoints that have
    # prometheus.io/scrape: "true" annotation.
    - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scrape]
      action: keep
      regex: true
    # prometheus.io/path: "/scrape/path" annotation.
    - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_path]
      action: replace
      target_label: __metrics_path__
      regex: (.+)
    # prometheus.io/port: "80" annotation.
    - source_labels: [__address__, __meta_kubernetes_service_annotation_prometheus_io_port]
      action: replace
      target_label: __address__
      regex: (.+)(?::\d+);(\d+)
      replacement: $1:$2
Exporter service:
annotations:
  prometheus.io/path: /metrics
  prometheus.io/scrape: "true"
Setting up a service monitor
The Prometheus operator supports an automated way of scraping data from the exporters by setting up a service monitor Kubernetes object. A sample service monitor for MySQL can be found here.
These are the necessary steps:
Step 1
Add/update the Prometheus operator's selectors. By default, the Prometheus operator comes with empty selectors, which will select every service monitor available in the cluster for scraping the data.
To check your Prometheus configuration:
kubectl get prometheus -n <namespace> -o yaml
A sample output will look like this.
ruleNamespaceSelector: {}
ruleSelector:
  matchLabels:
    app: kube-prometheus-stack
    release: kps
scrapeInterval: 1m
scrapeTimeout: 10s
securityContext:
  fsGroup: 2000
  runAsGroup: 2000
  runAsNonRoot: true
  runAsUser: 1000
serviceAccountName: kps-kube-prometheus-stack-prometheus
serviceMonitorNamespaceSelector: {}
serviceMonitorSelector:
  matchLabels:
    release: kps
Here you can see that this Prometheus configuration selects all the service monitors with the label release = kps.
So, if you are modifying the default Prometheus operator configuration for service monitor scraping, make sure you use the right labels in your service monitors as well.
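For example, if you are using the kube-prometheus-stack Helm chart, these selectors can be set through its values file. A minimal sketch, assuming the chart's prometheus.prometheusSpec keys (key paths may differ between chart versions):

prometheus:
  prometheusSpec:
    # select ServiceMonitors carrying the release=kps label, in any namespace
    serviceMonitorNamespaceSelector: {}
    serviceMonitorSelector:
      matchLabels:
        release: kps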
Step 2
Add a service monitor and make sure it has a matching label and namespace for the Prometheus service monitor selectors (serviceMonitorNamespaceSelector & serviceMonitorSelector).
To enable the service monitor, run:
helm install <RELEASE_NAME> prometheus-community/prometheus-mysql-exporter --set serviceMonitor.enabled=true
Sample configuration:
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  annotations:
    meta.helm.sh/release-name: mysql-exporter
    meta.helm.sh/release-namespace: monitor
  generation: 1
  labels:
    app: prometheus-mysql-exporter
    app.kubernetes.io/managed-by: Helm
    heritage: Helm
    release: kps
  name: mysql-exporter-prometheus-mysql-exporter
  namespace: monitor
spec:
  endpoints:
    - interval: 15s
      port: mysql-exporter
  selector:
    matchLabels:
      app: prometheus-mysql-exporter
      release: mysql-exporter
Here you can see that we have a matching label on the service monitor, release = kps, which we specified in the Prometheus operator scraping configuration.
With the Bitnami Helm charts, the MySQL exporter can be deployed as a sidecar container in the same pod.
To enable the sidecar:
helm install mysql bitnami/mysql --set metrics.enabled=true
More details can be found here.
After enabling the sidecar, Prometheus metrics are exported by the built-in container on the “/metrics” endpoint, which can be scraped by Prometheus. Once metrics are enabled, Helm will automatically add the annotations to the mysql svc.
Annotation:
annotations:
  prometheus.io/path: /metrics
  prometheus.io/scrape: "true"
Now Prometheus will automatically start scraping the data if the svc discovery is enabled.
Prometheus configuration for svc discovery:
- job_name: "kubernetes-service-endpoints"
kubernetes_sd_configs:
- role: endpoints
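To confirm that the sidecar is exposing metrics, you can port-forward the MySQL service and query the metrics endpoint. A quick check, assuming the Bitnami chart exposes the metrics port as 9104 on the mysql service (verify the port name and number in your release):

# forward the metrics port locally and inspect a few exported series
kubectl port-forward svc/mysql 9104:9104 -n <namespace> &
curl -s http://localhost:9104/metrics | grep mysql_up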
The mysqld_exporter is the Prometheus exporter for MySQL server metrics; the following handpicked collectors and flags will provide insights into MySQL.
Supported versions:
NOTE: Not all collection methods are supported on MySQL/MariaDB < 5.6
Create a dedicated MySQL user for the exporter and grant it the required privileges:
CREATE USER 'exporter'@'localhost' IDENTIFIED BY 'XXXXXXXX' WITH MAX_USER_CONNECTIONS 3;
GRANT PROCESS, REPLICATION CLIENT, SELECT ON *.* TO 'exporter'@'localhost';
NOTE: It is recommended to set a max connection limit for the user to avoid overloading the server with monitoring scrapes under heavy load. This is not supported on all MySQL/MariaDB versions; for example, MariaDB 10.1 (provided with Ubuntu 18.04) does not support this feature.
To build the exporter from source, run:
make
Running using an environment variable:
export DATA_SOURCE_NAME='user:password@(hostname:3306)/'
./mysqld_exporter <flags>
Running using ~/.my.cnf:
./mysqld_exporter <flags>
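When using ~/.my.cnf, the file holds the connection credentials in the standard MySQL client format. A minimal example, assuming the exporter user created above (values are placeholders):

[client]
user=exporter
password=XXXXXXXX
host=localhost
port=3306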
Example format for flags for version > 0.10.0:
--collect.auto_increment.columns
--no-collect.auto_increment.columns
Example format for flags for version <= 0.10.0:
-collect.auto_increment.columns
-collect.auto_increment.columns=[true|false]
Name | MySQL Version | Description |
---|---|---|
collect.auto_increment.columns | 5.1 | Collect auto_increment columns and max values from information_schema. |
collect.binlog_size | 5.1 | Collect the current size of all registered binlog files |
collect.engine_innodb_status | 5.1 | Collect from SHOW ENGINE INNODB STATUS. |
collect.engine_tokudb_status | 5.6 | Collect from SHOW ENGINE TOKUDB STATUS. |
collect.global_status | 5.1 | Collect from SHOW GLOBAL STATUS (Enabled by default) |
collect.global_variables | 5.1 | Collect from SHOW GLOBAL VARIABLES (Enabled by default) |
collect.info_schema.clientstats | 5.5 | If running with userstat=1, set to true to collect client statistics. |
collect.info_schema.innodb_metrics | 5.6 | Collect metrics from information_schema.innodb_metrics. |
collect.info_schema.innodb_tablespaces | 5.7 | Collect metrics from information_schema.innodb_sys_tablespaces. |
collect.info_schema.innodb_cmp | 5.5 | Collect InnoDB compressed tables metrics from information_schema.innodb_cmp. |
collect.info_schema.innodb_cmpmem | 5.5 | Collect InnoDB buffer pool compression metrics from information_schema.innodb_cmpmem. |
collect.info_schema.processlist | 5.1 | Collect thread state counts from information_schema.processlist. |
collect.info_schema.processlist.min_time | 5.1 | Minimum time a thread must be in each state to be counted. (default: 0) |
collect.info_schema.query_response_time | 5.5 | Collect query response time distribution if query_response_time_stats is ON. |
collect.info_schema.replica_host | 5.6 | Collect metrics from information_schema.replica_host_status. |
collect.info_schema.tables | 5.1 | Collect metrics from information_schema.tables. |
collect.info_schema.tables.databases | 5.1 | The list of databases to collect table stats for, or '*' for all. |
collect.info_schema.tablestats | 5.1 | If running with userstat=1, set to true to collect table statistics. |
collect.info_schema.schemastats | 5.1 | If running with userstat=1, set to true to collect schema statistics |
collect.info_schema.userstats | 5.1 | If running with userstat=1, set to true to collect user statistics. |
collect.mysql.user | 5.5 | Collect data from mysql.user table |
collect.perf_schema.eventsstatements | 5.6 | Collect metrics from performance_schema.events_statements_summary_by_digest. |
collect.perf_schema.eventsstatements.digest_text_limit | 5.6 | Maximum length of the normalized statement text. (default: 120) |
collect.perf_schema.eventsstatements.limit | 5.6 | Limit the number of events statements digests by response time. (default: 250) |
collect.perf_schema.eventsstatements.timelimit | 5.6 | Limit how old the 'last_seen' events statements can be, in seconds. (default: 86400) |
collect.perf_schema.eventsstatementssum | 5.7 | Collect metrics from performance_schema.events_statements_summary_by_digest summed. |
collect.perf_schema.eventswaits | 5.5 | Collect metrics from performance_schema.events_waits_summary_global_by_event_name. |
collect.perf_schema.file_events | 5.6 | Collect metrics from performance_schema.file_summary_by_event_name. |
collect.perf_schema.file_instances | 5.5 | Collect metrics from performance_schema.file_summary_by_instance. |
collect.perf_schema.file_instances.remove_prefix | 5.5 | Remove path prefix in performance_schema.file_summary_by_instance. |
collect.perf_schema.indexiowaits | 5.6 | Collect metrics from performance_schema.table_io_waits_summary_by_index_usage. |
collect.perf_schema.memory_events | 5.7 | Collect metrics from performance_schema.memory_summary_global_by_event_name. |
collect.perf_schema.memory_events.remove_prefix | 5.7 | Remove instrument prefix in performance_schema.memory_summary_global_by_event_name. |
collect.perf_schema.tableiowaits | 5.6 | Collect metrics from performance_schema.table_io_waits_summary_by_table. |
collect.perf_schema.tablelocks | 5.6 | Collect metrics from performance_schema.table_lock_waits_summary_by_table. |
collect.perf_schema.replication_group_members | 5.7 | Collect metrics from performance_schema.replication_group_members. |
collect.perf_schema.replication_group_member_stats | 5.7 | Collect metrics from performance_schema.replication_group_member_stats. |
collect.perf_schema.replication_applier_status_by_worker | 5.7 | Collect metrics from performance_schema.replication_applier_status_by_worker. |
collect.slave_status | 5.1 | Collect from SHOW SLAVE STATUS (Enabled by default) |
collect.slave_hosts | 5.1 | Collect from SHOW SLAVE HOSTS |
collect.heartbeat | 5.1 | Collect from heartbeat. |
collect.heartbeat.database | 5.1 | Database from where to collect heartbeat data. (default: heartbeat) |
collect.heartbeat.table | 5.1 | Table from where to collect heartbeat data. (default: heartbeat) |
collect.heartbeat.utc | 5.1 | Use UTC for timestamps of the current server (pt-heartbeat is called with --utc ). (default: false) |
Name | Description |
---|---|
config.my-cnf | Path to .my.cnf file to read MySQL credentials from. (default: ~/.my.cnf ) |
log.level | Logging verbosity (default: info) |
exporter.lock_wait_timeout | Set a lock_wait_timeout (in seconds) on the connection to avoid long metadata locking. (default: 2) |
exporter.log_slow_filter | Add a log_slow_filter to avoid slow query logging of scrapes. NOTE: Not supported by Oracle MySQL. |
tls.insecure-skip-verify | Ignore tls verification errors. |
web.config.file | Path to a web configuration file |
web.listen-address | Address to listen on for web interface and telemetry. |
web.telemetry-path | Path under which to expose metrics. |
version | Print the version information. |
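Putting the collector and general flags together, a typical invocation (using the > 0.10.0 flag format) might look like the sketch below; the collectors enabled or disabled here are only illustrative:

# DATA_SOURCE_NAME or ~/.my.cnf must already be configured
./mysqld_exporter \
  --web.listen-address=:9104 \
  --collect.info_schema.processlist \
  --no-collect.slave_hosts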
The mysqld_exporter supports TLS and basic authentication. To use TLS and/or basic authentication, you need to pass a configuration file using the --web.config.file parameter. The format of the file is described in the exporter-toolkit repository.
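For illustration, a web configuration file in the exporter-toolkit format might look like the sketch below; the paths and user are placeholders, and the password must be a bcrypt hash (see the exporter-toolkit documentation for the full schema):

tls_server_config:
  cert_file: /path/to/server.crt
  key_file: /path/to/server.key
basic_auth_users:
  # user: bcrypt-hashed password
  prometheus: <bcrypt-hash>

The file is then passed to the exporter with --web.config.file=web-config.yml.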
The MySQL server's data source name
must be set via the DATA_SOURCE_NAME
environment variable.
The format of this variable is described at https://github.com/go-sql-driver/mysql#dsn-data-source-name.
If the MySQL server supports SSL, you may need to specify a CA truststore to verify the server's chain of trust. You may also need to specify an SSL key pair for the client side of the SSL connection. To configure the mysqld exporter to use a custom CA certificate, add the following to the MySQL cnf file:
ssl-ca=/path/to/ca/file
To specify the client SSL key pair, add the following to the cnf file:
ssl-key=/path/to/ssl/client/key
ssl-cert=/path/to/ssl/client/cert
Customizing the SSL configuration is only supported in the mysql cnf file and is not supported if you set the mysql server's data source name in the environment variable DATA_SOURCE_NAME.
You can deploy this exporter using the prom/mysqld-exporter Docker image.
For example:
docker network create my-mysql-network
docker pull prom/mysqld-exporter
docker run -d \
-p 9104:9104 \
--network my-mysql-network \
-e DATA_SOURCE_NAME="user:password@(hostname:3306)/" \
prom/mysqld-exporter
With collect.heartbeat
enabled, mysqld_exporter will scrape replication delay
measured by heartbeat mechanisms. Pt-heartbeat is the
reference heartbeat implementation supported.
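As an illustration, the heartbeat collector is typically paired with a pt-heartbeat daemon writing timestamps into a heartbeat table; the exporter flags below come from the table above, while the pt-heartbeat options are assumptions based on its standard usage and need real connection details:

# pt-heartbeat keeps updating a timestamp row (here in the 'heartbeat' database)
pt-heartbeat --update --database heartbeat --utc --daemonize
# mysqld_exporter then reads it to compute replication delay
./mysqld_exporter --collect.heartbeat --collect.heartbeat.database=heartbeat --collect.heartbeat.utc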
The mysqld_exporter
will expose all metrics from enabled collectors by default. This is the recommended way to collect metrics to avoid errors when comparing metrics of different families.
For advanced use, the mysqld_exporter can be passed an optional list of collectors to filter metrics. The collect[] parameter may be used multiple times. In the Prometheus configuration, you can use this syntax under the scrape config:
params:
  collect[]:
    - foo
    - bar
This can be useful for having different Prometheus servers collect specific metrics from targets.
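For example, a scrape config that limits one Prometheus server to just the global status, global variables, and InnoDB metrics collectors could look like the sketch below; the collector names follow the collect.* flags listed above, with the collect. prefix dropped:

- job_name: mysql-global
  params:
    collect[]:
      - global_status
      - global_variables
      - info_schema.innodb_metrics
  static_configs:
    - targets:
        - <mysql exporter endpoint>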
There is a set of sample rules, alerts, and dashboards available in the mysqld-mixin.
To deploy MySQL itself, you can use the Bitnami Helm chart:
$ helm repo add bitnami https://charts.bitnami.com/bitnami
$ helm install my-release bitnami/mysql
The MySQL exporter can be deployed in Kubernetes using the Helm chart. The Helm chart used for deployment is from the Prometheus community, which can be found here. To deploy this Helm chart, users can either follow the steps in the above link or refer to the ones outlined below:
$ helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
$ helm repo update
$ helm install [RELEASE_NAME] prometheus-community/prometheus-mysql-exporter
Some of the common parameters that must be changed in the values file include:
mysql.host: defines the MySQL URL you want to monitor.
mysql.user: the MySQL connection user.
mysql.pass: the MySQL password.
Additional parameters can be changed based on individual needs, such as enabling or disabling specific collectors. All these parameters can be tuned via the values.yaml file here.
mysql:
  db: ""
  host: "localhost"
  param: ""
  pass: "password"
  port: 3306
  protocol: ""
  user: "exporter"
  # secret with full DATA_SOURCE_NAME env var as stringdata
  existingSecret: ""
  # secret only containing the password
  existingPasswordSecret:
    name: ""
    key: ""
In addition to the native way of setting up Prometheus monitoring, a service monitor can be deployed (if the Prometheus operator is being used) to scrape the data from MySQL; Prometheus then scrapes the data from the service monitor. With this approach, multiple MySQL instances can be scraped without altering the Prometheus configuration. Every MySQL exporter comes with its own service monitor.
In the above-mentioned chart, a service monitor can be deployed by turning it on from the values.yaml file here.
serviceMonitor:
  # enabled should be set to true to enable prometheus-operator discovery of this service
  enabled: false
  # interval is the interval at which metrics should be scraped
  # interval: 30s
  # scrapeTimeout is the timeout after which the scrape is ended
  # scrapeTimeout: 10s
  # additionalLabels is the set of additional labels to add to the ServiceMonitor
  # namespace: monitoring
  additionalLabels: {}
  jobLabel: ""
  targetLabels: []
  podTargetLabels: []
  metricRelabelings: []
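To enable the service monitor and make its labels match the Prometheus operator selector discussed earlier (release: kps in this example), the same values can be set from the command line; additionalLabels is the field shown above:

helm upgrade --install mysql-exporter prometheus-community/prometheus-mysql-exporter \
  --set serviceMonitor.enabled=true \
  --set serviceMonitor.additionalLabels.release=kps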
Another way of scraping metrics while having the service discovery enabled in Prometheus is by updating the annotation section here with the following:
annotations:
  prometheus.io/path: /metrics
  prometheus.io/scrape: "true"
After digging into all the valuable metrics, this section explains in detail how we can get critical alerts.
PromQL is a query language for the Prometheus monitoring system. It is designed for building powerful yet simple queries for graphs, alerts, or derived time series (aka recording rules). PromQL is designed from scratch and has zero common grounds with other query languages used in time series databases, such as SQL in TimescaleDB, InfluxQL, or Flux. More details can be found here.
Prometheus comes with a built-in Alert Manager that is responsible for sending alerts (could be email, Slack, or any other supported channel) when any of the trigger conditions is met. Alerting rules allow users to define alerts based on Prometheus query expressions. They are defined based on the available metrics scraped by the exporter. Click here for a good source for community-defined alerts.
A general alert looks as follows:
- alert: (Alert Name)
  expr: (metric exported from the exporter) >, <, ==, <=, or >= (value)
  for: (wait for a certain duration between first encountering a new expression output vector element and counting an alert as firing for this element)
  labels: (allows specifying a set of additional labels to be attached to the alert)
  annotations: (specifies a set of informational labels that can be used to store longer additional information)
Some of the recommended MySQL alerts are:
- alert: MysqlDown
  expr: mysql_up == 0
  for: 0m
  labels:
    severity: critical
  annotations:
    summary: MySQL down (instance {{ $labels.instance }})
    description: "MySQL instance is down on {{ $labels.instance }}\n VALUE = {{ $value }}\n LABELS = {{ $labels }}"
- alert: MysqlTooManyConnections(>80%)
  expr: avg by (instance) (rate(mysql_global_status_threads_connected[1m])) / avg by (instance) (mysql_global_variables_max_connections) * 100 > 80
  for: 2m
  labels:
    severity: warning
  annotations:
    summary: MySQL too many connections (> 80%) (instance {{ $labels.instance }})
    description: "More than 80% of MySQL connections are in use on {{ $labels.instance }}\n VALUE = {{ $value }}\n LABELS = {{ $labels }}"
- alert: MysqlSlowQueries
  expr: increase(mysql_global_status_slow_queries[1m]) > 0
  for: 2m
  labels:
    severity: warning
  annotations:
    summary: MySQL slow queries (instance {{ $labels.instance }})
    description: "MySQL server mysql has some new slow query.\n VALUE = {{ $value }}\n LABELS = {{ $labels }}"
- alert: BufferPoolHits
  expr: rate(mysql_global_status_innodb_buffer_pool_reads[5m]) / rate(mysql_global_status_innodb_buffer_pool_read_requests[5m]) * 100 > 2
  for: 0m
  labels:
    severity: critical
  annotations:
    summary: High Read requests rate hitting disk (instance {{ $labels.instance }})
    description: "High number of logical reads that InnoDB could not satisfy from the buffer pool, and had to read directly from disk\n VALUE = {{ $value }}\n LABELS = {{ $labels }}"
- alert: MysqlInnodbLogWaits
  expr: rate(mysql_global_status_innodb_log_waits[15m]) > 10
  for: 0m
  labels:
    severity: warning
  annotations:
    summary: MySQL InnoDB log waits (instance {{ $labels.instance }})
    description: "MySQL innodb log writes stalling\n VALUE = {{ $value }}\n LABELS = {{ $labels }}"
Graphs are easier to understand and more user-friendly than a row of numbers. For this purpose, users can plot their time series data in visualized format using Grafana.
Grafana is an open-source dashboarding tool used for visualizing metrics with the help of customizable and illustrative charts and graphs. It connects very well with Prometheus and makes monitoring easy and informative. Dashboards in Grafana are made up of panels, with each panel running a PromQL query to fetch metrics from Prometheus.
Grafana supports community-driven dashboards for most of the widely used software, which can be imported directly from the Grafana community.
NexClipper uses the MySQL dashboard by nasskach, which is widely accepted and has a lot of useful panels.
What is a Panel?
Panels are the most basic component of a dashboard and can display information in various ways, such as gauge, text, bar chart, graph, and so on. They provide information in a very interactive way. Users can view every panel separately and check the value of metrics within a specific time range.
The values on the panel are queried using PromQL, which is Prometheus Query Language. PromQL is a simple query language used to query metrics within Prometheus. It enables users to query data, aggregate and apply arithmetic functions to the metrics, and then further visualize them on panels.
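For example, a panel showing connection usage as a percentage of the configured maximum could use a query like the following, built from the same metrics used in the alerts above:

mysql_global_status_threads_connected / mysql_global_variables_max_connections * 100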
Here are some examples of panels:
1. Database overview, connection, table locks
2. Aborted connection, network, memory
3. Command & handlers