Loading a single table in MySQL is ridiculously slow

Every other table in the database performs as expected and loads ~2 million rows in a split second. One table of ~600 rows total takes 10+ minutes to load in Navicat.

I can't think of any possible reason. There are only 4 columns. One of them is a large text field, but I've worked with large text fields before and they've never been this slow.

Running explain select * from parser_queue I get:

  id | select_type | table        | type | possible_keys | key | key_len | ref | rows | Extra
  1  | SIMPLE      | parser_queue | ALL  | -             | -   | -       | -   | 658  | -

Profiling tells me that 453 seconds were spent on "Sending data". I also grabbed this from the Status tab; I don't understand most of it, but these numbers are much higher than for my other tables:

  Bytes_received            31
  Bytes_sent                32265951
  Com_select                1
  Created_tmp_files         16
  Handler_read_rnd_next     659
  Key_read_requests         9018487
  Key_reads                 3928
  Key_write_requests        310431
  Key_writes                4290
  Qcache_hits               135077
  Qcache_inserts            14289
  Qcache_lowmem_prunes      4133
  Qcache_queries_in_cache   983
  Questions                 1
  Select_scan               1
  Table_locks_immediate     31514
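(For reference, a per-stage breakdown like the one above comes from MySQL's session profiler, roughly via the workflow below; this is a sketch, and the query id reported by SHOW PROFILES varies by session.)

  SET profiling = 1;
  SELECT * FROM parser_queue;
  SHOW PROFILES;               -- lists the profiled statements with their ids
  SHOW PROFILE FOR QUERY 1;    -- per-stage timings, including "Sending data"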

The data stored in the text field averages about 12,000 characters. There is an auto-increment int id field as the primary key, a tinyint status field, the text field, and a timestamp field with ON UPDATE CURRENT_TIMESTAMP.


OK, I will try both answers, but first let me answer the questions quickly:

The primary key on the id field is the only key. The table is used for queuing, with ~50 rows added/deleted per hour, but I only created it yesterday. Could it get corrupted in such a short time?
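(For what it's worth, the corruption theory can be tested directly with MyISAM's built-in integrity check; EXTENDED is the slowest but most thorough mode:)

  CHECK TABLE parser_queue EXTENDED;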

It is MyISAM.


More work trying to isolate the problem:

REPAIR TABLE did nothing. OPTIMIZE TABLE did nothing. I created a temporary table; queries were about 50% slower against the temp table.

I dropped the table and rebuilt it. SELECT * takes 18 seconds with just 4 rows.

Here is the SQL I used to create the table:

 CREATE TABLE IF NOT EXISTS `parser_queue` (
   `id` int(10) unsigned NOT NULL AUTO_INCREMENT,
   `status` tinyint(4) NOT NULL DEFAULT '1',
   `data` text NOT NULL,
   `last_updated` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
   PRIMARY KEY (`id`)
 ) ENGINE=MyISAM;

Stranger still, everything runs perfectly on my local box. The slowness only occurs on the dev site.

For clarity: there are 100+ tables on the dev site, and this is the only one acting up.


OK, I disabled all cron jobs that use this table. SHOW PROCESSLIST shows no locks on the table.

Changing the engine to InnoDB did not bring a significant improvement (86 seconds versus 94 for MyISAM).
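(For reference, the engine switch is the standard one-liner; note that it rebuilds the entire table:)

  ALTER TABLE parser_queue ENGINE=InnoDB;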

Any other ideas?

Running SHOW PROCESSLIST during the query shows that it spends most of its time in the "Writing to net" state.
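(That state can be watched from a second connection while the slow SELECT is running:)

  SHOW FULL PROCESSLIST;  -- the State column reads "Writing to net" for the slow query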

3 answers

If you suspect corruption somewhere, you can try either (or both):

 CREATE TABLE temp SELECT * FROM parser_queue; 

This will create a new table with the same data as the old one, but rebuilt from scratch. Alternatively (or perhaps after you've made a copy), you can try:

 REPAIR TABLE parser_queue; 

You can also try optimizing the table; it may be fragmented, since you're using it as a queue.

 OPTIMIZE TABLE parser_queue; 

You can tell whether the table is fragmented by running SHOW TABLE STATUS LIKE 'parser_queue' and checking whether the Data_free column shows a high number.
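(A sketch of that check; the information_schema form returns just the one number:)

  SHOW TABLE STATUS LIKE 'parser_queue';  -- inspect the Data_free column

  SELECT DATA_FREE
  FROM information_schema.TABLES
  WHERE TABLE_SCHEMA = DATABASE()
    AND TABLE_NAME = 'parser_queue';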

Update

You say you're storing gzcompress'ed data in the TEXT column. Try making the column a BLOB instead, which is designed to hold binary data such as compressed text.
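(A minimal sketch of that change; plain BLOB has the same 64 KB cap as TEXT, and MEDIUMBLOB would be the next step up if the compressed payloads outgrow it:)

  ALTER TABLE parser_queue MODIFY `data` BLOB NOT NULL;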


The name gives away that you're using the table as a queue (lots of inserts and deletes, maybe?). If so, the table may be small but heavily fragmented. If my assumption is correct, try OPTIMIZE TABLE parser_queue;

You can read more about this in the manual: http://dev.mysql.com/doc/refman/5.1/en/optimize-table.html


Right, it turned out the problem was exactly that: the text fields are too large.

Running

  SELECT id, status, last_updated FROM parser_queue 

takes less time than

  SELECT data FROM parser_queue WHERE id = 6 

Since all the queries I will actually run return only one row, the slowdown won't affect me much. I already gzcompress the stored data, so I don't think there's much more I could do anyway.
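(To confirm that the column size really is the culprit, something like this lists the heaviest rows:)

  SELECT id, LENGTH(`data`) AS bytes
  FROM parser_queue
  ORDER BY bytes DESC
  LIMIT 5;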

