MySQL benchmark on SSD: tools and strategies

I am currently switching my servers from MyISAM on hard drives to InnoDB on SSDs.

As a test table, I have a table with 3,800,000 rows (16 GB).

My server is configured:

  • Ubuntu 64 + Nginx + MySQL 5.5 + ...

There are two effects I want to measure in particular:

  • how the move from hard drives to SSDs affects concurrency
  • how the switch from MyISAM to InnoDB affects concurrency

I have questions regarding tools and strategies:

  • Since I am most interested in concurrency, which tools should I use to run the tests? I have played with Siege and found it very easy to use, but I suspect there are more powerful Linux tools that better suit my needs.
  • What should the testing strategy look like? I understand that the choice of strategy is closely tied to the tool. For example, with Siege I would write a PHP script that performs some heavy MySQL operations, upload it to the server, pass the script's URL as a parameter to Siege (installed on my local laptop), and let Siege simulate parallel traffic for me.
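For reference, a Siege run against such a script might look like the following (the URL and parameter values are illustrative, not from the original question):

```shell
# Simulate 50 concurrent clients hitting the PHP benchmark script
# for 2 minutes; -b (benchmark mode) disables the delay between
# requests so the server is kept under constant load
siege -c 50 -b -t 2M http://your-server/mysql-bench.php
```

Siege prints a summary with transaction rate, concurrency, and response times at the end of the run, which gives a first rough number to compare between the MyISAM/HDD and InnoDB/SSD setups.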
3 answers

Generic benchmarks are fine, but only the real load will show you the difference between software and hardware configurations. Perhaps try:

  • Dump the database from the production server.
  • Capture all queries on the production server (use the slow query log for this, with long_query_time = 0).
  • Load the dump into the test setup and replay the captured queries against it (use pt-log-player).
  • Capture all queries on the test server, again with long_query_time = 0.
  • Analyze the results with pt-query-digest.

I am referring to tools from the Percona Toolkit for MySQL (although some of them may require Percona Server, I am not sure).
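The capture-and-replay steps above can be sketched roughly as follows. Hostnames, credentials, and file names are placeholders, and the exact pt-log-player options may differ between Percona Toolkit versions (the tool was removed from later releases), so treat this as an outline rather than a recipe:

```shell
# On the production server: log every query to the slow log
mysql -e "SET GLOBAL slow_query_log = 1; SET GLOBAL long_query_time = 0;"

# ... let production traffic run for a representative period,
# then copy the slow log (prod-slow.log) to the test server ...

# On the test server: split the captured log into per-connection
# session files, then replay them against the restored database
pt-log-player --split Thread_id --base-dir ./sessions prod-slow.log
pt-log-player --play ./sessions h=test-host,u=bench,p=secret

# Summarize the slow log captured during the replay
pt-query-digest test-slow.log > digest-report.txt
```

pt-query-digest's report ranks queries by total time, which makes it easy to diff the production run against the test run query by query.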


It is important to remember that when analyzing MySQL storage performance on Linux, caches get in the way. I was very interested in the same case. It is always funny when a user complains about a slow query, calls you over, runs it again, and their 50-minute query finishes in 30 seconds thanks to the query cache. Always run

mysql> RESET QUERY CACHE;

in MySQL when trying to benchmark queries. There is one more step when comparing SSDs with traditional spindles: the OS disk cache. It is hard to compare access times or IOPS when the OS is caching disk blocks in memory on its own. To clear the disk cache, run the following from a shell (as root):

 $ sync && sysctl -w vm.drop_caches=3 

Running these commands before each of your test queries helps you see the real potential of your SSD compared with a 7200 rpm SATA slowpoke. Verify that the caches matter by running the same query twice without clearing them in between and observing the difference in query time.

At this point it is worth trying some queries with and without indexes, and also, if possible, some joins. Use EXPLAIN on each query to verify whether an index is being used; random read access alternating between index and data files will reveal the bottlenecks of slower disks. Make sure your my.cnf is identical between the SSD test and the spinning-disk test, so you are comparing hardware rather than configuration.

I tested a few things on a plain desktop OCZ SSD and saw query performance about 10 times faster than on my 7200 rpm SATA drive. On an SSD-based transactional database, I would be careful with OPTIMIZE TABLE, since frequent table compaction combined with SSD TRIM might affect drive life. That is theoretical, though, and I have not yet seen evidence to support it.
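A simple cold-cache timing run along those lines might look like this (the database name, table, and query are placeholders; run as root so drop_caches works):

```shell
#!/bin/sh
# Flush MySQL's query cache and the OS page cache, then time a
# test query cold; repeat this for each query under test
QUERY="SELECT COUNT(*) FROM big_table WHERE some_col > 100"

mysql -e "RESET QUERY CACHE;" bench_db
sync && sysctl -w vm.drop_caches=3

# time(1) reports how long the cold-cache run takes
time mysql -e "$QUERY" bench_db

# Check the execution plan to see whether an index is used
mysql -e "EXPLAIN $QUERY" bench_db
```

Running the same script once on the SSD box and once on the spinning-disk box, with identical my.cnf files, isolates the storage difference.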

Hope this helps! I can't wait for the day when magnetic disks replace tape as backup-only media and are completely displaced by SSDs in most hardware.


The type and quality of the SSD matters a great deal. Do not use a desktop SATA SSD for MySQL if you have a busy server; you will not get the performance boost you expect.

There are some great articles here: http://www.mysqlperformanceblog.com/search/innodb+log+file+ssd/

