Monitoring MySQL Database Performance
MySQL is an open-source relational database system that began as a personal side project, similar to Linux. MySQL has grown in popularity over the last 25 years. Today, it is used by a wide range of firms, including Sony and Uber, to run their multi-petabyte mission-critical databases.
MySQL has been a victim of its success in several ways. It’s simple enough to operate that developers may take it for granted and presume it doesn’t require monitoring. In truth, MySQL is a sophisticated system with a delicate balance that needs to be carefully monitored. It reveals a wealth of essential indicators that you can’t afford to overlook, such as where bottlenecks are located, when it’s time to update, and which queries should be optimised.
Like most relational databases, MySQL uses databases (also known as schemas) and tables to organise data. Unlike most, however, MySQL offers a flexible architecture that lets you select the optimal low-level storage engine for the job. At all of these levels, MySQL discloses its inner workings through many metrics. MySQL is also quite adaptable: thanks to replication, you can run it as a single instance, as a primary-secondary cluster, or in multi-master mode. You can also build horizontally scalable clusters using third-party technologies like Vitess.
MySQL Metrics to Keep an Eye On
Let’s start by going through some of the most crucial indicators to watch in any MySQL server. Check out our MySQL integration docs (we’ll walk you through setting up below) to see all available metrics.
Uptime
It may seem self-evident, but many organisations set alarms for server downtime while forgetting to monitor the MySQL process itself. You want to be notified right away if the database is down.
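A quick way to confirm the server is up and see how long it has been running is to query its uptime counter directly (a minimal sketch; the status variable below is standard MySQL):

```sql
-- Seconds since the MySQL server started
SHOW GLOBAL STATUS LIKE 'Uptime';
```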
Connections
MySQL has a strict limit on the number of connections that can be open simultaneously. When you reach it, new connections are refused until an existing client disconnects.
By default, MySQL permits 151 connections. It’s easy to change the limit (the value here is only an example):
SET GLOBAL max_connections = 500;
Remember that each connection requires at least 3MB of memory, so keep the number of connections to a minimum.
Monitor three indicators to determine the optimal number of connections to set:
net.maxUsedConnections: The highest number of simultaneous connections recorded since the server started. Use this value as a starting point for max_connections.
net.threadsConnected: The number of currently open connections.
net.connectionErrorsMaxConnectionsPerSecond: The number of connections per second that failed because the max_connections limit was reached. If the value is greater than zero, you should either increase the limit, create connection pools for your apps, or install a load balancer such as ProxySQL.
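To see where you stand against the limit, you can query the relevant server variables directly (a minimal sketch using standard MySQL status counters):

```sql
-- Current limit, peak usage since startup, and open connections right now
SHOW VARIABLES LIKE 'max_connections';
SHOW GLOBAL STATUS LIKE 'Max_used_connections';
SHOW GLOBAL STATUS LIKE 'Threads_connected';
```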
Network difficulties and defective clients can also hamper MySQL connections. Keep an eye on net.abortedClientsPerSecond for programs that aren’t correctly terminating connections. A high rate usually signals a network problem.
Memory
RAM is critical to database performance, so first monitor memory at the host level:
memoryFreeBytes
memoryUsedBytes
memoryTotalBytes
Then, on the MySQL side, keep an eye on the following metrics:
db.innodb.bufferPoolPagesTotal: The total number of data-holding pages in memory. To reduce disk activity and improve speed, you want this value to be as high as feasible.
db.innodb.bufferPoolReadsPerSecond: The number of pages per second that had to be read from disk because they were not found in memory. If the value exceeds 5-15 percent of db.innodb.bufferPoolPagesTotal, the database requires extra memory. In that scenario, increase MySQL’s innodb_buffer_pool_size setting and verify that the server has enough RAM.
db.innodb.logWaitsPerSecond: If this value is consistently high, the log buffer is set too low. If this is the case, increase innodb_log_buffer_size until the issue is resolved.
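As a sketch, assuming MySQL 5.7 or later (where the buffer pool can be resized online), you can inspect and adjust these settings directly; the 4 GB figure below is only an example:

```sql
-- Current buffer pool size and read activity
SHOW VARIABLES LIKE 'innodb_buffer_pool_size';
SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_reads';
SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_read_requests';

-- Resize the buffer pool online; 4 GB (4294967296 bytes) is an example value
SET GLOBAL innodb_buffer_pool_size = 4294967296;
```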
Speed of storage
After memory, the most critical component for database performance is disk I/O speed. Even if the system has enough RAM to hold the entire database, disk I/O is still required to maintain transaction consistency.
The primary database workload shapes disk activity. The following metrics should be smooth and steady for Online Transaction Processing (OLTP) systems (e.g., methods used for online purchase processing). Peaks in your graph suggest potential bottlenecks and delays for your users. On the other hand, inconsistent activity is far more typical and expected in Online Analytical Processing (OLAP) systems (e.g., methods used for budgeting and forecasting).
Monitor the following to keep an eye on workload:
db.innodb.dataReadsPerSecond: The number of reads per second.
db.innodb.dataWritesPerSecond: The number of writes per second.
db.innodb.osLogWrittenBytesPerSecond: The transaction log throughput. This value is proportional to the rate at which the data changes over time. For OLTP workloads, it should stay smooth and steady; if you notice spikes, or that your queries are taking too long to complete, optimise them.
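These rates are derived from the raw InnoDB status counters, which you can sample yourself (a minimal sketch; the counters are cumulative, so a per-second rate is the difference between two samples divided by the interval):

```sql
-- Cumulative counters behind the per-second rates above
SHOW GLOBAL STATUS LIKE 'Innodb_data_reads';
SHOW GLOBAL STATUS LIKE 'Innodb_data_writes';
SHOW GLOBAL STATUS LIKE 'Innodb_os_log_written';
```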
Metrics for query optimization
Nothing has a more substantial impact on MySQL performance than query optimisation.
Activate the MySQL slow query log to identify long-running queries:
SET GLOBAL slow_query_log = 1;
SET GLOBAL slow_query_log_file = '/var/log/mysql/mysql-slow.log';
Then, on a first pass, set a reasonable query time to capture the worst-behaving queries:
-- 3-second threshold
SET GLOBAL long_query_time = 3;
You can reduce the query time as your performance increases and repeat the process.
Once the feature is turned on, keep an eye on query.slowQueriesPerSecond, analyse the log, and use the EXPLAIN command to see why specific queries perform poorly. Depending on the situation, you may need to rewrite them, add or adjust indexes, or restructure your tables.
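For example, once a slow query shows up in the log, you can ask MySQL for its execution plan. The table and column names below are hypothetical:

```sql
-- Hypothetical slow query: does it use an index or scan the whole table?
EXPLAIN
SELECT o.id, c.name
FROM orders o
JOIN customers c ON c.id = o.customer_id
WHERE o.created_at > '2024-01-01';
```

In the output, a type column showing ALL indicates a full table scan, which is usually a sign that an index is missing.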
Here are a few more indicators to watch while optimising queries.
Temporary files and tables
For operations like GROUP BY, ORDER BY, and UNION that don’t fit in memory, MySQL produces temporary files and tables on disk. Keep an eye on the following metrics, because these operations can lead to excessive disk I/O:
db.createdTmpDiskTablesPerSecond measures temporary tables created on disk.
db.createdTmpFilesPerSecond measures temporary files created on disk.
While it isn’t always possible to lower these values to zero, modifying sort_buffer_size and join_buffer_size and rewriting queries can help reduce temporary disk activity.
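The underlying counters are available from the server directly (a minimal sketch using standard MySQL status variables):

```sql
-- Cumulative counts of temporary tables and files created on disk
SHOW GLOBAL STATUS LIKE 'Created_tmp_disk_tables';
SHOW GLOBAL STATUS LIKE 'Created_tmp_files';
```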
Locks
To ensure data consistency, MySQL employs table- and row-level locks. Long-running locks that block other clients can be caused by poorly written queries and by administrative processes. The most critical indicators of lock activity are:
For tables, use db.tablesLocksWaitedPerSecond, and for rows, use db.innodb.rowLockTimeAvg and db.innodb.rowLockWaitsPerSecond.
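You can sample the counters behind these metrics directly (a minimal sketch; the status variables below are standard MySQL):

```sql
-- Table-level lock waits and InnoDB row-lock statistics
SHOW GLOBAL STATUS LIKE 'Table_locks_waited';
SHOW GLOBAL STATUS LIKE 'Innodb_row_lock%';
```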
Indexes
MySQL uses indexes to filter, sort, and join tables. When queries and table structures do not match, MySQL must scan the entire table instead. This consumes a lot of additional memory and results in heavy disk I/O.
Keep an eye on the following metrics for more information:
db.selectFullJoinPerSecond and db.selectFullJoinRangePerSecond indicate that your tables may require more indexes.
db.innodb.bufferPoolReadAheadRndPerSecond aids in the detection of wasteful reads at the table level.
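When these metrics point at a missing index, the fix is usually a targeted CREATE INDEX. The table and column names below are hypothetical:

```sql
-- Hypothetical example: index the join column to avoid full table scans
CREATE INDEX idx_orders_customer_id ON orders (customer_id);
```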
Monitoring MySQL with an integration
For the integration to work, MySQL version 5.6 or higher is required.
In the example below, we’ll show you how to set up Infrastructure Monitoring for an Ubuntu server running MySQL. Check the agent documentation for alternative instructions if you’re using a different host OS.
Note that MySQL can also be monitored as a Kubernetes or ECS service.
From integration to observability, there’s a lot to consider.
If data is the lifeblood of a program, the database is its beating heart. In any organisation, dependable database performance is critical. By tracking a few important indicators, you can gain a better understanding of how your MySQL servers are performing.
The MySQL integration is free and open source. That means you can look at its source code and make changes, or you can fork it and build it yourself.
Make our MySQL integration a key component of your observability strategy.
For additional information, see our complete list of on-host integrations.
Sign up for 100GB of ingest per month and one Full-Stack Observability user license for free forever if you’re ready to take control of your databases.
About Enteros
Enteros offers a patented database performance management SaaS platform. It proactively identifies root causes of complex business-impacting database scalability and performance issues across a growing number of RDBMS, NoSQL, and machine learning database platforms.
The views expressed on this blog are those of the author and do not necessarily reflect the opinions of Enteros Inc. This blog may contain links to the content of third-party sites. By providing such links, Enteros Inc. does not adopt, guarantee, approve, or endorse the information, views, or products available on such sites.
Are you interested in writing for Enteros’ Blog? Please send us a pitch!