PHP script is getting slower (file reader)

I have a script that, when run against a timer, keeps getting slower. It's pretty simple: all it does is read a line, check it, add it to the database, then move on to the next line.

Here is its output, gradually deteriorating:

Record: #1,001 Memory: 1,355,360kb taking 1.84s
Record: #2,002 Memory: 1,355,192kb taking 2.12s
Record: #3,003 Memory: 1,355,192kb taking 2.39s
Record: #4,004 Memory: 1,355,192kb taking 2.65s
Record: #5,005 Memory: 1,355,200kb taking 2.94s
Record: #6,006 Memory: 1,355,376kb taking 3.28s
Record: #7,007 Memory: 1,355,176kb taking 3.56s
Record: #8,008 Memory: 1,355,408kb taking 3.81s
Record: #9,009 Memory: 1,355,464kb taking 4.07s
Record: #10,010 Memory: 1,355,392kb taking 4.32s
Record: #11,011 Memory: 1,355,352kb taking 4.63s
Record: #12,012 Memory: 1,355,376kb taking 4.90s
Record: #13,013 Memory: 1,355,200kb taking 5.14s
Record: #14,014 Memory: 1,355,184kb taking 5.43s
Record: #15,015 Memory: 1,355,344kb taking 5.72s

The file, unfortunately, is about ~20 GB, so at the rate the loop keeps slowing down I'll probably be dead before it's all read. The code is (mostly) below; I suspect it has something to do with fgets(), but I'm not sure what.

    $handle = fopen($import_file, 'r');

    while ($line = fgets($handle))
    {
        $data = json_decode($line);

        save_record($data, $line);
    }
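
The progress lines above are produced by logging along these lines (a simplified sketch, not the exact code; the $start/$count variables, the 1,001-record batch size and the formatting are reconstructed from the output):

    // Sketch of the progress logging behind lines like
    // "Record: #1,001 Memory: 1,355,360kb taking 1.84s".
    $start  = microtime(true);
    $count  = 0;

    $handle = fopen($import_file, 'r');

    while ($line = fgets($handle))
    {
        $data = json_decode($line);
        save_record($data, $line);

        if (++$count % 1001 == 0)
        {
            printf("Record: #%s Memory: %skb taking %.2fs\n",
                number_format($count),
                number_format(memory_get_usage()),
                microtime(true) - $start);
            $start = microtime(true);   // per-batch time, not cumulative
        }
    }

    fclose($handle);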

Thanks in advance!

EDIT:

Commenting out save_record($data, $line); seems to make no difference.
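
So the next step is presumably to time fgets() on its own; a stripped-down sketch like this should show whether the read alone slows down:

    // Time fgets() by itself over the same file, reporting every 10,000 lines.
    $handle = fopen($import_file, 'r');
    $start  = microtime(true);
    $count  = 0;

    while (fgets($handle) !== false)
    {
        if (++$count % 10000 == 0)
        {
            printf("Line %d: %.2fs\n", $count, microtime(true) - $start);
            $start = microtime(true);
        }
    }

    fclose($handle);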

+5
4 answers

Sometimes it is better to use OS commands for reading files like this. I ran into something similar, and here is a little trick I used:

// "wc -l < file" prints only the line count, without the file name.
$lines = (int) exec('wc -l < ' . escapeshellarg($filename));

for ($i = 1; $i <= $lines; $i++) {
    // sed 'N!d' deletes every line except line N, i.e. prints just that line.
    $line = exec('sed \'' . $i . '!d\' ' . escapeshellarg($filename));

    // do what you want with the record here
}

I would not recommend this with files that cannot be trusted, but it runs fast, since it pulls a single record at a time through the system.

+1


I had to read a 96G file myself. 15 hours in, the script had gotten through only about 0.1% of it...

I tried stream_get_line, fgets and exec with sed, and none of them got around the slowdown, so I ended up taking a different approach. :-)

FreeBSD (and Linux too) ships with a utility called "split".

usage: split [-l line_count] [-a suffix_length] [file [prefix]]
       split -b byte_count[K|k|M|m|G|g] [-a suffix_length] [file [prefix]]
       split -n chunk_count [-a suffix_length] [file [prefix]]
       split -p pattern [-a suffix_length] [file [prefix]]

So I ran:

split -l 25000 -a 3 /data/var/myfile.log /data/var/myfile-log/

Then I ended up with 5608 files in the directory /data/var/myfile-log/, which can then be processed one by one using a command, for example:

php -f do-some-work.php /data/var/myfile-log/*
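
do-some-work.php itself isn't shown; a minimal sketch of that kind of worker, reusing save_record() from the question and assuming one JSON record per line, would be something like:

// do-some-work.php (sketch): process each chunk file passed on the command line.
// save_record() is assumed to be defined/required elsewhere, as in the question.
array_shift($argv);   // drop the script name

foreach ($argv as $chunk_file)
{
    $handle = fopen($chunk_file, 'r');

    while ($line = fgets($handle))
    {
        $data = json_decode($line);
        save_record($data, $line);
    }

    fclose($handle);
    echo "Finished $chunk_file\n";
}
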
0
