If you just want to slurp a file and print it straight to output, this should do the trick.
    use Carp ();

    {   # lexical scope for the filehandle and $/
        open my $fh, '<', '/path/to/file.txt' or Carp::croak("File Open Failed");
        local $/ = undef;   # slurp mode: read the whole file in one go
        print scalar <$fh>;
        close $fh or Carp::carp("File Close Failed");
    }
I guess, in response to the question "Does Perl have a PHP readfile equivalent?", my answer would be: "it doesn't really need one."
I've used PHP's file I/O functions to manage files and it hurt; Perl's are just so much easier to use by comparison that needing a one-size-fits-all function like readfile seems excessive.
Alternatively, you can look at X-SendFile, where you basically send a header to your web server telling it which file to serve: http://john.guen.in/past/2007/4/17/send_files_faster_with_xsendfile/ (this assumes the server has permission to access the file, but the file is simply not reachable via a regular URI).
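For illustration, here is a minimal sketch of the X-SendFile approach as a plain CGI script. This is my own example, not from the linked article: it assumes a web server configured to honour the header (e.g. Apache with mod_xsendfile; other servers use their own variants), and the file path is just a placeholder.

    #!/usr/bin/perl
    # Sketch only: requires a server module such as mod_xsendfile
    # that intercepts the X-Sendfile header and streams the file.
    use strict;
    use warnings;

    # Placeholder path; the *server* resolves and reads it, not Perl.
    print "X-Sendfile: /path/to/file.txt\r\n";
    print "Content-Type: application/octet-stream\r\n";
    print "\r\n";
    # No body needed; the server replaces it with the file contents.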
Edit: Noted, it is better to do this in a loop. I tested the above code against a hard drive, and it will implicitly try to store the whole thing in an invisible temporary variable and eat all your RAM.
Alternative: use blocks
The following improved code reads the given file in 8192-byte chunks, which is much friendlier on memory and gives throughput comparable to my disk's read speed. (I also pointed it at /dev/full for giggles and got a healthy 500 MB/s of throughput, and it didn't eat all my RAM, so that should be fine.)
    {
        open my $fh, '<', '/dev/sda' or Carp::croak("File Open Failed");
        local $/ = \8192;   # fixed-size records: each <$fh> reads 8192 bytes
        print $_ while defined( $_ = scalar <$fh> );
        close $fh or Carp::carp("File Close Failed");
    }
Applying jrockway's suggestion:
    {
        open my $fh, '<', '/dev/sda5' or Carp::croak("File Open Failed");
        print $_ while ( sysread $fh, $_, 8192 );
        close $fh or Carp::carp("File Close Failed");
    }
This literally doubles the throughput ... and in some cases gets me better bandwidth than dd does O_o.
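As a side note (my own sketch, not part of the benchmarks above): print pushes the data back through PerlIO's output buffering; pairing sysread with syswrite keeps the whole loop unbuffered and handles short writes explicitly, which is closer to what PHP's readfile does internally. The path is a placeholder.

    use strict;
    use warnings;
    use Carp ();

    # Hypothetical path, for illustration only.
    open my $fh, '<:raw', '/path/to/file.bin'
        or Carp::croak("File Open Failed: $!");

    my $buf;
    while ( my $read = sysread $fh, $buf, 8192 ) {
        my $offset = 0;
        while ( $offset < $read ) {
            # syswrite may write fewer bytes than requested, so track
            # the offset until the whole chunk has been flushed.
            my $wrote = syswrite STDOUT, $buf, $read - $offset, $offset;
            Carp::croak("Write Failed: $!") unless defined $wrote;
            $offset += $wrote;
        }
    }
    close $fh or Carp::carp("File Close Failed");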