Perl: disable input buffering

There is a file:

 :~$ cat fff
 qwerty
 asdf
 qwerty
 zxcvb

There is a script:

 :~$ cat 1.pl
 #!/usr/bin/perl
 print <STDIN>

The command works as expected:

 :~$ cat fff | perl -e 'system("./1.pl")'
 qwerty
 asdf
 qwerty
 zxcvb

But the following command does not work as expected: the first <STDIN> reads all of the data, not just one line. How can buffering be disabled for <STDIN>?

 :~$ cat fff | perl -e '$_ = <STDIN>; system("./1.pl")'
 :~$
+6
2 answers

There are two Perl processes here: the first, which assigns $_ = <STDIN> and calls system, and the second, which executes print <STDIN>.

Although only the first line of the stream is read into $_ in the first process, behind the scenes Perl has filled its buffer with data and left the stream empty.
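This is easy to see if you bypass the pipe and redirect the file directly (a minimal sketch; sysseek needs a seekable handle, so it will not work on a pipe). tell reports where Perl thinks it is in the stream, while sysseek with a zero offset reports the operating system's real file offset:

 :~$ perl -e '$_ = <STDIN>; print "perl: ", tell(STDIN), " OS: ", sysseek(STDIN, 0, 1), "\n"' < fff

Perl's position is just past the first line, but the OS offset is already at the end of the small file, because the buffered read pulled everything in at once.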

What is the purpose of this? The only way I can think of to do what you are asking is to read the entire file into an array in the first process, then remove the first line and pipe the rest to the second script.
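For what it's worth, a minimal sketch of that workaround using the fff and 1.pl from the question (the "|-" open pipes the remaining lines into the second script's STDIN):

 :~$ cat fff | perl -e '@rest = <STDIN>; $_ = shift @rest; open(my $out, "|-", "./1.pl") or die $!; print $out @rest;'

Here 1.pl prints everything except the first line, which stays in $_ in the first process.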

All of this seems unnecessary, and I'm sure there is a better method if you describe the underlying problem.

Update

Since you say you are aware of the buffering problem, the way to do this is with sysread, which reads from the pipe at a lower level and avoids buffering.

Something like this will work

 cat fff | perl -e 'while (sysread(STDIN, $c, 1)) {$_ .= $c; last if $c eq "\n"} system("./1.pl")' 
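The same byte-at-a-time loop written out as a standalone script, if that is easier to read (a sketch only; use strict and use warnings are my additions):

 #!/usr/bin/perl
 use strict;
 use warnings;

 # Read one line from STDIN a byte at a time with sysread, so the rest of
 # the stream stays unread and is still available to the child process.
 my $line = '';
 my $c;
 while (sysread(STDIN, $c, 1)) {
     $line .= $c;
     last if $c eq "\n";
 }

 # The child now sees everything after the first line on its STDIN.
 system("./1.pl");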

But I don't like recommending it, because what you are doing seems very wrong, and I would rather you explained your real goal.

+6

Recently I had to parse several log files about 6 gigabytes in size. Buffering was a problem, because Perl would happily try to read those 6 gigabytes into memory when I assigned STDIN to an array... but I simply didn't have the system resources available for that. I came up with the following workaround, which just reads the file line by line and thus avoids the massive buffering black hole that would otherwise consume all my system resources.

Note: this script splits the 6-gigabyte file into several smaller ones (their size determined by the number of lines each output file should contain). The interesting bit is the while loop and the assignment of a single line from the log file to a variable. The loop iterates over the entire file, reading one line, doing something with it, and then repeating. The result: no massive buffering. I kept the entire script intact to show a working example.

 #!/usr/bin/perl -w
 BEGIN { $ENV{'POSIXLY_CORRECT'} = 1; }
 use v5.14;
 use Getopt::Long qw(:config no_ignore_case);

 my $input  = '';
 my $output = '';
 my $lines  = 0;
 GetOptions('i=s' => \$input, 'o=s' => \$output, 'l=i' => \$lines);

 open FI, '<', $input;

 my $count      = 0;
 my $count_file = 1;

 while ($count < $lines) {
     my $line = <FI>;    # assign a single line of input to a variable
     last unless defined($line);
     open FO, '>>', "$output\_$count_file\.log";
     print FO $line;
     $count++;
     if ($count == $lines) {
         $count = 0;
         $count_file++;
     }
 }
 print " done\n";

The script is invoked on the command line, for example:

(script name) -i (input file) -o (output file) -l (output file size, i.e. number of lines per file)
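For instance, a hypothetical run (the file names here are made up) that splits big.log into chunks of 1,000,000 lines each, written to chunk_1.log, chunk_2.log, and so on:

 :~$ ./split.pl -i big.log -o chunk -l 1000000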

Even if this is not quite what you are looking for, I hope this gives you some ideas. :)

0
