Using disk space makes me think that you have very large query result sets being sorted in temporary tables on disk. To check, watch for an increase in the Created_tmp_disk_tables status variable when the spikes occur.
mysql> show global status like 'Created%';
+-------------------------+-------+
| Variable_name           | Value |
+-------------------------+-------+
| Created_tmp_disk_tables | 56    |   <-- this is probably the culprit
| Created_tmp_files       | 23    |
| Created_tmp_tables      | 3177  |
+-------------------------+-------+
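Since these counters only accumulate since server startup, the absolute value tells you little; what matters is how much it grows during a spike. A simple sketch, sampling the counter before and after a spike window (the values shown are illustrative):

```sql
-- Sample the counter, wait through (or reproduce) a disk-usage spike,
-- then sample again. The difference is the number of temporary tables
-- that spilled to disk during that interval.
SHOW GLOBAL STATUS LIKE 'Created_tmp_disk_tables';  -- e.g. Value: 56
-- ... wait for the spike ...
SHOW GLOBAL STATUS LIKE 'Created_tmp_disk_tables';  -- e.g. Value: 311, i.e. 255 spills
```

A large jump here that coincides with your disk-space spikes confirms on-disk temporary tables as the cause.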
If so, you may have queries whose temporary tables are too large to fit in memory and must be materialized on disk. Unfortunately, you cannot find out exactly how large these temporary result sets are, but judging by your disk usage I would suggest they are around 15 GiB.
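For comparison, the in-memory ceiling is controlled by two variables, and a temporary table spills to disk once it exceeds the smaller of the two. A sketch for experimenting in a session (the 256 MiB value is illustrative, and a 15 GiB result set will spill no matter what you set here):

```sql
-- Illustrative only: raise the in-memory temp-table limit for this
-- session to see whether a given query stops spilling to disk.
-- The effective limit is the smaller of these two values.
SET SESSION tmp_table_size      = 256 * 1024 * 1024;  -- 256 MiB
SET SESSION max_heap_table_size = 256 * 1024 * 1024;
```

On Amazon RDS, the persistent equivalents are set through the DB parameter group rather than in a config file.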
You have to figure out which queries generate huge temporary tables and try to optimize those queries. Unfortunately, MySQL does not have good instrumentation to track this, and Amazon RDS does not allow you to replace stock MySQL with an extended distribution such as Percona Server, which would report this information in the slow query log.
So you need to go into your development environment, review your SQL queries, run them through EXPLAIN one at a time, and determine which one is the bottleneck.
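When you do, look for "Using temporary" (and usually "Using filesort") in the Extra column of the EXPLAIN output; those are the queries that can materialize temporary tables. A sketch with hypothetical table and column names:

```sql
-- Hypothetical query: GROUP BY on one column with ORDER BY on an
-- aggregate is a classic pattern that produces "Using temporary;
-- Using filesort" in the Extra column of EXPLAIN output.
EXPLAIN
SELECT customer_id, COUNT(*) AS order_count
FROM orders
GROUP BY customer_id
ORDER BY order_count DESC;
```

Queries flagged this way that also scan many rows are the first candidates for rewriting or for added indexes.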