Problems with MySQL INSERTs into a large table

We have a dedicated server with 8 GB of RAM, running PHP 5.3 and MySQL 5.1.

The maximum number of concurrent connections is about 500, and each connection performs 1-2 SELECT queries against smaller tables with user data and then an INSERT into a large transactions table. The SELECT queries never took much time; we added monitoring between each query to record its response time, and they never showed any problems.

We added tracking to our code, and it shows that some simple INSERT queries occasionally take 14-15 seconds. The query listed below sometimes takes 14 seconds, sometimes 6 seconds, sometimes 0.2 seconds or less. What could be the problem?

The PHP code that sometimes shows these huge delays:

    $startT = microtime(true);
    echo '&timestampTS_02=' . (microtime(true) - $startT);

    // The INSERT below is the statement that sometimes takes 14-15 seconds
    mysqli_query($GLOBALS['con'], "INSERT INTO `transactions` (`id`,`data`) VALUES('id','some_data')")
        or die(mysqli_error($GLOBALS['con']));

    echo '&timestampTS_03=' . (microtime(true) - $startT);

There are about 2 million records in the transactions table.

    CREATE TABLE IF NOT EXISTS `transactions` (
      `id` int(11) NOT NULL,
      `data` varchar(1000) NOT NULL,
      PRIMARY KEY (`id`)
    ) ENGINE=MyISAM DEFAULT CHARSET=latin1 AUTO_INCREMENT=1;
+5
6 answers

Your table may be locked by another process. This may be a hardware problem (a slow or failing hard drive), a software problem, or an environment problem (a slow server on shared hosting).

You can see what is blocking the insert with SHOW PROCESSLIST, as in the sketch below.
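
A minimal diagnostic sketch, assuming the $GLOBALS['con'] mysqli connection from the question, that dumps the currently running queries right after a slow INSERT is detected:

    // Diagnostic sketch (assumption: $GLOBALS['con'] is the mysqli connection
    // from the question). Run this right after a slow INSERT is detected.
    $result = mysqli_query($GLOBALS['con'], "SHOW FULL PROCESSLIST")
        or die(mysqli_error($GLOBALS['con']));

    while ($row = mysqli_fetch_assoc($result)) {
        // Columns: Id, User, Host, db, Command, Time, State, Info
        error_log(sprintf("pid=%s time=%ss state=%s query=%s",
            $row['Id'], $row['Time'], $row['State'], $row['Info']));
    }
    mysqli_free_result($result);

Look for long-running statements whose State mentions a lock, and for other queries touching the transactions table at the same time.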

+4

Switch to InnoDB. (Some of the suggestions below require it.)

Batch your inserts, either as INSERT ... VALUES (1,2,3), (4,5,6), ... or with LOAD DATA. (Inserting 100 rows at a time gives roughly a 10x speedup; a minimal sketch follows this list of tips.) If you cannot batch, see the "staging table" suggestion below.

Set innodb_flush_log_at_trx_commit = 2 (rather than the default 1). (The default is probably why your tests did not show good performance.)

Set innodb_buffer_pool_size to about 70% of available RAM (roughly 5000M on your 8 GB machine).

Upgrade from 5.1. (But this alone will not solve the problem.)

If possible, reduce the number of indexes on transactions.

For really fast ingestion, ping-pong between two "staging tables", as described in High-Speed Ingestion.
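
To illustrate the batching tip above, here is a minimal sketch, assuming the transactions table and the $GLOBALS['con'] mysqli connection from the question, and a hypothetical $rows array of [id, data] pairs:

    // Sketch of a multi-row INSERT (assumptions: $GLOBALS['con'] is the mysqli
    // connection from the question; $rows is a hypothetical array of
    // [id, data] pairs; ~100 rows per statement is a typical batch size).
    $values = array();
    foreach ($rows as $r) {
        $id   = (int) $r[0];
        $data = mysqli_real_escape_string($GLOBALS['con'], $r[1]);
        $values[] = "($id, '$data')";
    }

    if ($values) {
        $sql = "INSERT INTO `transactions` (`id`, `data`) VALUES " . implode(', ', $values);
        mysqli_query($GLOBALS['con'], $sql) or die(mysqli_error($GLOBALS['con']));
    }

The innodb_flush_log_at_trx_commit and innodb_buffer_pool_size settings mentioned above go in the [mysqld] section of the server configuration, not in PHP.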

+5

In any database (regardless of the engine), the most expensive operation is always writing data, because it adds the overhead of a mandatory disk write (with enough RAM, the MySQL server can cache tables and queries well enough to avoid heavy I/O for SELECTs). MyISAM (with table-level locking) probably isn't helping compared to InnoDB (with row-level locking), but you don't have to switch storage engines.

One of the main tips for multiple inserts is to use a prepared statement. You set up the query once, then change the parameters and execute it once per row. As the MySQL documentation on the speed of INSERT statements points out:

The time required for inserting a row is determined by the following factors, where the numbers indicate approximate proportions:
Connecting: (3)
Sending query to server: (2)
Parsing query: (2)
Inserting row: (1 × size of row)
Inserting indexes: (1 × number of indexes)
Closing: (1)

Since a prepared statement is sent and parsed only once (the sending and parsing steps above), it saves a lot of that overhead, and it also protects the data you insert against SQL injection.
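
A minimal sketch of that approach with mysqli, assuming the $GLOBALS['con'] connection from the question and a hypothetical $rows array of [id, data] pairs:

    // Sketch of the prepared-statement approach (assumptions: $GLOBALS['con'] is
    // the mysqli connection from the question; $rows is a hypothetical array of
    // [id, data] pairs). The statement is parsed once, then executed per row.
    $stmt = mysqli_prepare($GLOBALS['con'],
        "INSERT INTO `transactions` (`id`, `data`) VALUES (?, ?)")
        or die(mysqli_error($GLOBALS['con']));

    $id = 0;
    $data = '';
    mysqli_stmt_bind_param($stmt, 'is', $id, $data);   // bound by reference

    foreach ($rows as $r) {
        list($id, $data) = $r;                         // update the bound variables
        mysqli_stmt_execute($stmt) or die(mysqli_stmt_error($stmt));
    }
    mysqli_stmt_close($stmt);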

Another option is to enable concurrent inserts. This is only an option if you are not deleting data from this table.
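
For MyISAM this is controlled by the concurrent_insert server variable; a minimal sketch, assuming the account has the privilege to change global variables:

    // Sketch: enable concurrent inserts for MyISAM (assumption: the account has
    // the SUPER privilege needed to change global variables; a value of 2 allows
    // concurrent inserts even for tables with deleted rows in the middle).
    mysqli_query($GLOBALS['con'], "SET GLOBAL concurrent_insert = 2")
        or die(mysqli_error($GLOBALS['con']));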

This last suggestion may be impractical for you, but it is something to consider if this is a major pain point for your business. Modern cloud computing has significantly reduced the cost of higher I/O (RAID, SSD, etc.). You might want to switch to a cloud service (AWS, Azure, etc.) that offers a high-I/O option. You can throw as much RAM as you want at your database, but if your storage is slow, it will drag down the rest of your stack.

+3

First, check whether the transactions table has any foreign keys, or is referenced by foreign keys from other tables.

Then run SHOW TRIGGERS and check whether any trigger fires when something changes in transactions or in any of the referenced tables.

Then review any triggers you find.

If that does not help, go back to the tracking you added to your code and extend it: before running the query, record which queries are already running at that moment. You might find an already-running slow query that is locking your table.
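
A minimal diagnostic sketch for the trigger and foreign-key checks, assuming the $GLOBALS['con'] connection from the question:

    // Diagnostic sketch (assumption: $GLOBALS['con'] is the mysqli connection
    // from the question).
    // 1) List the triggers defined in the current database.
    $res = mysqli_query($GLOBALS['con'], "SHOW TRIGGERS")
        or die(mysqli_error($GLOBALS['con']));
    while ($row = mysqli_fetch_assoc($res)) {
        error_log("trigger {$row['Trigger']}: {$row['Timing']} {$row['Event']} on {$row['Table']}");
    }

    // 2) Find foreign keys that reference the transactions table.
    $res = mysqli_query($GLOBALS['con'],
        "SELECT CONSTRAINT_NAME, TABLE_NAME, COLUMN_NAME
           FROM information_schema.KEY_COLUMN_USAGE
          WHERE REFERENCED_TABLE_NAME = 'transactions'
            AND CONSTRAINT_SCHEMA = DATABASE()")
        or die(mysqli_error($GLOBALS['con']));
    while ($row = mysqli_fetch_assoc($res)) {
        error_log("FK {$row['CONSTRAINT_NAME']} on {$row['TABLE_NAME']}.{$row['COLUMN_NAME']}");
    }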

+1

Instead of MyISAM you should use InnoDB. You can generate a MySQL configuration through this site:
www.percona.com

Note: innodb_buffer_pool_size is a very important setting in your configuration.
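
A minimal sketch of the engine switch, assuming it is run during a maintenance window (the ALTER rebuilds and locks the 2-million-row table while it runs); innodb_buffer_pool_size itself is set in the [mysqld] section of the server configuration, not from PHP:

    // Sketch: convert the table to InnoDB (assumption: run during a maintenance
    // window, since the 2-million-row table is rebuilt and locked while the
    // ALTER runs).
    mysqli_query($GLOBALS['con'], "ALTER TABLE `transactions` ENGINE=InnoDB")
        or die(mysqli_error($GLOBALS['con']));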

0

Use mysqli for large volumes of data; it has functions for inserting multiple rows at once.

And preferably run it on a dedicated server.

See this link - http://www.w3schools.com/php/php_mysql_insert_multiple.asp
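
The pattern on that page sends several INSERT statements in one call with mysqli_multi_query; a minimal sketch with illustrative values:

    // Sketch: several INSERTs in one round trip via mysqli_multi_query
    // (assumption: same connection as in the question; values are illustrative).
    $sql  = "INSERT INTO `transactions` (`id`, `data`) VALUES (1, 'a');";
    $sql .= "INSERT INTO `transactions` (`id`, `data`) VALUES (2, 'b');";
    $sql .= "INSERT INTO `transactions` (`id`, `data`) VALUES (3, 'c')";

    mysqli_multi_query($GLOBALS['con'], $sql) or die(mysqli_error($GLOBALS['con']));

    // Flush the per-statement results so the connection can be reused.
    while (mysqli_more_results($GLOBALS['con']) && mysqli_next_result($GLOBALS['con'])) {
        // nothing to fetch for INSERT statements
    }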


0
