Convert massive MySQL dump file to CSV

I tried something like this:

awk -F " " '{if($1=="INSERT"){print $5}}' input.sql | \
    sed -e "s/^(//g" -e "s/),(/\n/g" -e "s/['\"]//g" \
        -e "s/);$//g" -e "s/,/;/g" > output.txt

But I find it slow and inefficient.

The MySQL dump file is as follows:

CREATE TABLE MyTable (
    data_1,
    data_2
);

INSERT INTO MyTable VALUES ('data_1','data_2'),...,('data_1','data_2');
INSERT INTO MyTable VALUES ('data_1','data_2'),...,('data_1','data_2');
...
INSERT INTO MyTable VALUES ('data_1','data_2'),...,('data_1','data_2');

My goal is to get a file with the following result (and without the ' or " quote characters around the fields):

data_1,data_2
data_1,data_2
...
data_1,data_2

Thanks in advance!

2 answers

You can try:

gawk '/^INSERT/ {
    # capture the contents of the first (...) group on the line
    match($0, /[^(]*\(([^)]*)\)/, a)
    print a[1]
}' input.sql
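
Note that match() with a third array argument is gawk-specific. Against the sample dump this grabs only the first value tuple of each INSERT line, quotes included, so the three sample statements produce:

'data_1','data_2'
'data_1','data_2'
'data_1','data_2'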

* Update *

After reading the question again, maybe this is closer to what you want:

/^INSERT/ {
    line = $0
    # walk through every (...) group on the line
    while (match(line, /[^(]*\(([^)]*)\)/, a)) {
        cur = a[1]
        sub(/^['"]/, "", cur)   # strip the opening quote
        sub(/['"]$/, "", cur)   # strip the closing quote
        print cur
        line = substr(line, RSTART + RLENGTH)
    }
}
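
Saved to a file (say extract.awk, a name picked here only for illustration) and run as gawk -f extract.awk input.sql, this prints one value tuple per line. Only the outermost quotes are stripped, though, so the inner ones survive:

data_1','data_2
data_1','data_2
data_1','data_2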

* Update 2 *

Based on the latest update to the question, here is a new version:

/^INSERT/ {
    line = $0
    while (match(line, /[^(]*\(([^)]*)\)/, a)) {
        line = substr(line, RSTART + RLENGTH)
        # pull the two quoted fields out of the current tuple
        match(a[1], /'([^']*)','([^']*)'/, b)
        print b[1] "," b[2]
    }
}
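
Run the same way (assuming the script is saved as, say, extract2.awk, again an arbitrary name), this produces exactly the target format:

data_1,data_2
data_1,data_2
...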
A sed alternative, based on your (new) sample:

sed -n "/.*INSERT INTO MyTable VALUES (\([^)]*\)).*/ {
   s/.*INSERT INTO MyTable VALUES \(.*\);/\1/
   s/(\([^)]*\)),*/\\1\\
/g
   s/'//g
   s/\\n$//
   p
   }" input.sql > output.csv

