Import a large SQL file into MySQL via the command line

I am trying to import an SQL file, about 300 MB in size, into MySQL via the command line on Ubuntu. I used

source /var/www/myfile.sql; 

Now it displays a seemingly endless stream of lines like:

 Query OK, 1 row affected (0.03 sec) 

It has been running for a while now, however. I have not imported a file like this before, so I just want to know whether this is normal: if the process stalls or hits an error, will that show up on the command line, or will this process go on indefinitely?

Thanks

+50
mysql ubuntu
Oct 20 '13 at 21:29
4 answers

You can import the .sql file using standard input as follows:

mysql -u <user> -p<password> <dbname> < file.sql

Note: There should be no space between <-p> and <password>

Link: http://dev.mysql.com/doc/refman/5.0/en/mysql-batch-commands.html

Note for suggested edits: This answer was slightly modified by suggested edits to use the inline password parameter. I can recommend it for scripts, but you should be aware that when you write the password directly in the parameter ( -p<password> ), it can be captured by the shell history, revealing your password to anyone who can read the history file. Using -p alone asks for the password on standard input instead.
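For example, a minimal invocation that lets the client prompt for the password instead of embedding it (myuser and mydb are placeholders; the path is the one from the question):

mysql -u myuser -p mydb < /var/www/myfile.sql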

+96
Oct 20 '13 at

Regarding the time taken to import huge files: the most important factor is that by default MySQL runs with autocommit = true. Turn it off before importing your file and then see how the import works like a gem.

Open MySQL first:

mysql -u root -p

Then you need to do the following:

mysql> use your_db;

mysql> SET autocommit=0; source the_sql_file.sql; COMMIT;
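If you would rather do the same thing non-interactively from the shell, a rough equivalent is to wrap the dump between the two statements yourself and pipe everything into the client (a sketch only; the file and database names are just the examples above, and the final COMMIT is needed because autocommit is off):

{ echo "SET autocommit=0;"; cat the_sql_file.sql; echo "COMMIT;"; } | mysql -u root -p your_db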

+42
Apr 04 '14 at 6:40

+1 to @MartinNuc: you can run the mysql client in batch mode, and then you won't see the long stream of "OK" lines.

The time it takes to import that SQL file depends on many things: not only the size of the file, but also the kind of statements in it, how powerful your server is, and how many other things are running at the same time.

@MartinNuc says he can load 4 GB of SQL in 4-5 minutes, but I have run 0.5 GB SQL files that took 45 minutes on a smaller server.

We cannot guess how long it will take to run your SQL script on your server.




Re: your comment

@MartinNuc is correct that you can choose to make the mysql client print every statement. Or you could open a second session and run mysql> SHOW PROCESSLIST to see what is running. But you are probably more interested in a "percentage done" figure, or an estimate of how long it will take to complete the remaining statements.

Sorry, there is no such feature. The mysql client does not know how long the later statements will take to run, or even how many there are, so it cannot give a meaningful estimate of how much time is left.
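If you just want to see what the import is executing right now, the second-session approach mentioned above looks like this (run it as a user that is allowed to see the importing connection's threads):

mysql> SHOW FULL PROCESSLIST;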

+8
Oct 20 '13 at

The solution I use for large SQL restores is the mysqldumpsplitter script. I split my sql.gz into individual tables, then load up something like MySQL Workbench and process each one as a restore into the desired schema.

Here is the script https://github.com/kedarvj/mysqldumpsplitter

This works for large SQL restores; my average on one site I work with is a 2.5 GB sql.gz file, 20 GB uncompressed, and ~100 GB once fully restored.
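After splitting, each per-table dump can also be loaded on its own with the regular client; a rough example with an illustrative file name (the actual output layout depends on how you ran the splitter):

zcat mydb.mytable.sql.gz | mysql -u myuser -p mydb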

0
Jul 22 '16 at 15:25