How do you monitor MySQL with Prometheus? This article gives a detailed walkthrough of the setup, in the hope that it helps anyone looking for a simple, workable approach.
Introduction
mysqld_exporter is a service that exposes metrics from a MySQL instance.
Installation
Grant MySQL privileges
```sql
CREATE USER 'exporter'@'localhost' IDENTIFIED BY 'mima';
GRANT PROCESS, REPLICATION CLIENT, SELECT ON *.* TO 'exporter'@'localhost' WITH MAX_USER_CONNECTIONS 3;
FLUSH PRIVILEGES;
```
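To double-check that the account and grants were created as intended, you can run a quick query from the mysql client (shown here with the placeholder account from above):

```bash
# verify the exporter account's privileges (placeholder account from the GRANT above)
mysql -u root -p -e "SHOW GRANTS FOR 'exporter'@'localhost';"
```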
Download it directly with the following commands:
```bash
wget -t 1000 -c https://github.com/prometheus/mysqld_exporter/releases/download/v0.12.1/mysqld_exporter-0.12.1.linux-amd64.tar.gz
tar zxvf mysqld_exporter-0.12.1.linux-amd64.tar.gz -C /opt/
cd /opt/mysqld_exporter-0.12.1.linux-amd64
export DATA_SOURCE_NAME='exporter:mima@(localhost:3306)/'
./mysqld_exporter
```
Here the username and password are passed through the DATA_SOURCE_NAME environment variable. A configuration file can be used instead: by default the exporter looks for ~/.my.cnf, and the path can be changed with --config.my-cnf=/opt/mysqld_exporter/.my.cnf.
```bash
./mysqld_exporter --config.my-cnf=/opt/mysqld_exporter/.my.cnf
```
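For reference, a minimal credentials file could be created as sketched below. The path matches the --config.my-cnf example above and 'mima' is the same placeholder password, so adjust both to your setup:

```bash
# create a minimal .my.cnf with the exporter credentials (placeholder values)
cat > /opt/mysqld_exporter/.my.cnf <<EOF
[client]
user=exporter
password=mima
EOF
chmod 600 /opt/mysqld_exporter/.my.cnf
```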
Other flags
```
usage: mysqld_exporter [<flags>]

Flags:
  -h, --help  Show context-sensitive help (also try --help-long and --help-man).
  --exporter.lock_wait_timeout=2  Set a lock_wait_timeout on the connection to avoid long metadata locking.
  --exporter.log_slow_filter  Add a log_slow_filter to avoid slow query logging of scrapes. NOTE: Not supported by Oracle MySQL.
  --collect.heartbeat.database="heartbeat"  Database from where to collect heartbeat data
  --collect.heartbeat.table="heartbeat"  Table from where to collect heartbeat data
  --collect.info_schema.processlist.min_time=0  Minimum time a thread must be in each state to be counted
  --collect.info_schema.processlist.processes_by_user  Enable collecting the number of processes by user
  --collect.info_schema.processlist.processes_by_host  Enable collecting the number of processes by host
  --collect.info_schema.tables.databases="*"  The list of databases to collect table stats for, or '*' for all
  --collect.mysql.user.privileges  Enable collecting user privileges from mysql.user
  --collect.perf_schema.eventsstatements.limit=250  Limit the number of events statements digests by response time
  --collect.perf_schema.eventsstatements.timelimit=86400  Limit how old the 'last_seen' events statements can be, in seconds
  --collect.perf_schema.eventsstatements.digest_text_limit=120  Maximum length of the normalized statement text
  --collect.perf_schema.file_instances.filter=".*"  RegEx file_name filter for performance_schema.file_summary_by_instance
  --collect.perf_schema.file_instances.remove_prefix="/var/lib/mysql/"  Remove path prefix in performance_schema.file_summary_by_instance
  --web.listen-address=":9104"  Address to listen on for web interface and telemetry.
  --web.telemetry-path="/metrics"  Path under which to expose metrics.
  --timeout-offset=0.25  Offset to subtract from timeout in seconds.
  --config.my-cnf="/home/jalright/.my.cnf"  Path to .my.cnf file to read MySQL credentials from.
  --collect.mysql.user  Collect data from mysql.user
  --collect.info_schema.tables  Collect metrics from information_schema.tables
  --collect.info_schema.innodb_tablespaces  Collect metrics from information_schema.innodb_sys_tablespaces
  --collect.info_schema.innodb_metrics  Collect metrics from information_schema.innodb_metrics
  --collect.global_status  Collect from SHOW GLOBAL STATUS
  --collect.global_variables  Collect from SHOW GLOBAL VARIABLES
  --collect.slave_status  Collect from SHOW SLAVE STATUS
  --collect.info_schema.processlist  Collect current thread state counts from the information_schema.processlist
  --collect.perf_schema.tablelocks  Collect metrics from performance_schema.table_lock_waits_summary_by_table
  --collect.perf_schema.eventsstatements  Collect metrics from performance_schema.events_statements_summary_by_digest
  --collect.perf_schema.eventsstatementssum  Collect metrics of grand sums from performance_schema.events_statements_summary_by_digest
  --collect.perf_schema.eventswaits  Collect metrics from performance_schema.events_waits_summary_global_by_event_name
  --collect.auto_increment.columns  Collect auto_increment columns and max values from information_schema
  --collect.binlog_size  Collect the current size of all registered binlog files
  --collect.perf_schema.tableiowaits  Collect metrics from performance_schema.table_io_waits_summary_by_table
  --collect.perf_schema.indexiowaits  Collect metrics from performance_schema.table_io_waits_summary_by_index_usage
  --collect.info_schema.userstats  If running with userstat=1, set to true to collect user statistics
  --collect.info_schema.clientstats  If running with userstat=1, set to true to collect client statistics
  --collect.info_schema.tablestats  If running with userstat=1, set to true to collect table statistics
  --collect.info_schema.schemastats  If running with userstat=1, set to true to collect schema statistics
  --collect.perf_schema.file_events  Collect metrics from performance_schema.file_summary_by_event_name
  --collect.perf_schema.file_instances  Collect metrics from performance_schema.file_summary_by_instance
  --collect.perf_schema.replication_group_member_stats  Collect metrics from performance_schema.replication_group_member_stats
  --collect.perf_schema.replication_applier_status_by_worker  Collect metrics from performance_schema.replication_applier_status_by_worker
  --collect.engine_innodb_status  Collect from SHOW ENGINE INNODB STATUS
  --collect.heartbeat  Collect from heartbeat
  --collect.slave_hosts  Scrape information from 'SHOW SLAVE HOSTS'
  --collect.info_schema.innodb_cmp  Collect metrics from information_schema.innodb_cmp
  --collect.info_schema.innodb_cmpmem  Collect metrics from information_schema.innodb_cmpmem
  --collect.info_schema.query_response_time  Collect query response time distribution if query_response_time_stats is ON.
  --collect.engine_tokudb_status  Collect from SHOW ENGINE TOKUDB STATUS
  --log.level="info"  Only log messages with the given severity or above. Valid levels: [debug, info, warn, error, fatal]
  --log.format="logger:stderr"  Set the log target and format. Example: logger:syslog?appname=bob&local=7 or logger:stdout?json=true
  --version  Show application version.
```
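Most of these flags simply enable or tune individual collectors. As a rough illustration using flags taken from the listing above (not a recommended baseline), you might start the exporter with a couple of extra collectors switched on:

```bash
# illustration only: also collect binlog sizes and auto_increment usage
./mysqld_exporter \
  --config.my-cnf=/opt/mysqld_exporter/.my.cnf \
  --collect.binlog_size \
  --collect.auto_increment.columns \
  --web.listen-address=:9104
```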
Deploying as a service
Register it directly as a systemd service:
```bash
#!/bin/bash
VERSION=0.12.1
wget -t 100 -c https://github.com/prometheus/mysqld_exporter/releases/download/v${VERSION}/mysqld_exporter-${VERSION}.linux-amd64.tar.gz
if [ ! -e mysqld_exporter-${VERSION}.linux-amd64.tar.gz ]
then
    echo "Failed to download the package"
    exit 1
fi
tar xvfz mysqld_exporter-${VERSION}.linux-amd64.tar.gz -C /opt/
cd /opt
ln -s mysqld_exporter-${VERSION}.linux-amd64 mysqld_exporter
cat > /etc/systemd/system/mysqld_exporter.service <<EOF
[Unit]
Description=mysqld_exporter
After=network.target

[Service]
Type=simple
WorkingDirectory=/opt/mysqld_exporter
ExecStart=/opt/mysqld_exporter/mysqld_exporter --config.my-cnf=/opt/mysqld_exporter/.my.cnf
LimitNOFILE=65536
PrivateTmp=true
RestartSec=2
StartLimitInterval=0
Restart=always

[Install]
WantedBy=multi-user.target
EOF

systemctl daemon-reload
systemctl enable mysqld_exporter
systemctl start mysqld_exporter
```
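Once the service is started, it is worth a quick check that the unit is healthy and that metrics are actually being served on the default port:

```bash
# check the unit and fetch a few metrics (default listen address :9104 assumed)
systemctl status mysqld_exporter
curl -s http://localhost:9104/metrics | head
```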
Configuring the Prometheus job
The default port is 9104.
```yaml
- job_name: mysqld_exporter
  # metrics_path defaults to '/metrics'
  # scheme defaults to 'http'.
  static_configs:
    - targets: ['localhost:9104']
```
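Before reloading Prometheus, the edited configuration can optionally be validated with promtool, which ships alongside Prometheus; the config path below is just a common default and may differ on your system:

```bash
# validate the scrape configuration (adjust the path to your prometheus.yml)
promtool check config /etc/prometheus/prometheus.yml
```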
Viewing the metrics
Metrics can be viewed in the Prometheus web UI:
http://[prometheus server ip]:9090
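As a quick sanity check, a couple of queries can be run either in the web UI or against the HTTP API. The metric names below come from the exporter's default collectors, and the host assumes Prometheus is running locally:

```bash
# 1 means the exporter can reach MySQL, 0 means it cannot
curl -sG 'http://localhost:9090/api/v1/query' --data-urlencode 'query=mysql_up'

# per-second statement rate derived from SHOW GLOBAL STATUS
curl -sG 'http://localhost:9090/api/v1/query' --data-urlencode 'query=rate(mysql_global_status_queries[5m])'
```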
If you want dashboards, use Grafana: ready-made dashboards already exist. Search the official Grafana dashboard site for mysqld_exporter, configure Prometheus as a data source, and import the dashboard to get the graphs straight away.
That wraps up the answer to how to monitor MySQL with Prometheus. Hopefully the content above is of some help; if you still have questions, follow the 丸趣 TV industry news channel to learn more.