@Omesh has some great tips, and they were part of how I got my last loads of 4.5 GB and 5.0 GB of SQL (36 million rows) through. I wanted to collect the other tips that helped in one place.
Follow @Omesh's MySQL configuration guidelines.
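For reference, these are the kinds of server settings such guidelines adjust. The values below are illustrative only, not @Omesh's exact numbers; size them to your hardware and restore the defaults after the import:

SET GLOBAL max_allowed_packet = 268435456;     -- let very large INSERT statements through (256 MB)
SET GLOBAL innodb_flush_log_at_trx_commit = 2; -- relax log flushing while bulk-loading
-- innodb_buffer_pool_size is the biggest lever for InnoDB, but it usually has to be
-- set in my.cnf and needs a restart (runtime resizing requires MySQL 5.7.5+)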
Then look at the actual SQL file you are trying to import. The tool we used to dump the data added this line right before the INSERT:
/*!40000 ALTER TABLE `table_name` DISABLE KEYS */;
What I did was uncomment this line. It tells MySQL to stop updating the table's non-unique indexes during the insert, which makes it run much faster. If your dump does not have this line, you can add it manually:
ALTER TABLE `table_name` DISABLE KEYS;
Use caution if the database is in active use.
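As a quick sanity check, you can look at the index state with SHOW INDEX (a sketch; note that this trick only affects MyISAM tables, since InnoDB ignores DISABLE KEYS):

SHOW INDEX FROM `table_name`;
-- on MyISAM, a disabled index shows 'disabled' in the Comment column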
Immediately after the INSERT, the tool also added this line. Uncomment it to enable the keys again:
/*!40000 ALTER TABLE `table_name` ENABLE KEYS */;
Again, after uncommenting it should look like this:
ALTER TABLE `table_name` ENABLE KEYS;
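After those edits, each table's section of the dump should read roughly like this (the values shown are placeholders):

ALTER TABLE `table_name` DISABLE KEYS;
INSERT INTO `table_name` VALUES (1,'a'),(2,'b');
ALTER TABLE `table_name` ENABLE KEYS;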
Before running the import, disable autocommit, unique_checks and foreign_key_checks with the query:
set autocommit=0; set unique_checks=0; set foreign_key_checks=0;
Now run your import:
source /path/to/your/file.sql
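An equivalent from the shell, if you are not already at the mysql prompt, is below (username and database name are placeholders). Note that the SET statements above are session-scoped, so with this approach you would need to prepend them to the dump file itself:

mysql -u username -p database_name < /path/to/your/file.sql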
Once the import completes successfully, commit and then re-enable unique_checks and foreign_key_checks:
commit; set unique_checks=1; set foreign_key_checks=1;
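Putting the whole sequence together, one convenient pattern is a small driver file that you source instead of the dump directly (the path is a placeholder):

-- driver.sql: wraps the import with the speed-ups above
SET autocommit = 0;
SET unique_checks = 0;
SET foreign_key_checks = 0;
SOURCE /path/to/your/file.sql
COMMIT;
SET unique_checks = 1;
SET foreign_key_checks = 1;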
Some other things to consider for the import:
If you are using a virtual machine, increase its available resources (RAM, CPU cores) until the import is complete.
If you are using a physical machine, perhaps borrow some RAM from another machine long enough to complete the import.
If possible, use two different disks: one for reading (where the source SQL file lives) and one for writing (where the database or VM lives).
Hope this helps someone else!