There are several ways you could do this, but since you are currently using file_get_contents(), here is what I would do:
- Open the remote file with fopen()
- Read it in chunks with fread() and write each chunk to the local file - this lets you track the running size and abort with an error as soon as it passes the threshold.
Something like this:
```php
$url   = 'http://www.spacetelescope.org/static/archives/images/large/heic0601a.jpg';
$file  = '../temp/test.jpg';
$limit = 5 * 1024 * 1024; // 5MB

if (!$rfp = fopen($url, 'r')) {
    // error, could not open remote file
}
if (!$lfp = fopen($file, 'w')) {
    // error, could not open local file
}

// Check the Content-Length header for exceeding the limit
// (case-insensitive match: servers typically send "Content-Length")
foreach ($http_response_header as $header) {
    if (preg_match('/^\s*content-length\s*:\s*(\d+)\s*$/i', $header, $matches)) {
        if ((int)$matches[1] > $limit) {
            // error, file too large
        }
    }
}

$downloaded = 0;
while (!feof($rfp) && $downloaded < $limit) {
    $chunk = fread($rfp, 8192);
    fwrite($lfp, $chunk);
    $downloaded += strlen($chunk);
}

fclose($rfp);
fclose($lfp);

if ($downloaded > $limit) {
    // error, file too large
    unlink($file); // delete local data
} else {
    // success
}
```
Note: it's tempting to check the Content-Length: header before downloading anything to see if the file is too large. You can still do this if you want, but don't trust this value! The header is essentially an arbitrary value supplied by the server, and although it would be a protocol violation to send a Content-Length that does not match the actual body size, a malicious server can use it to trick your system into downloading a file that violates the rules. You will need to count the bytes yourself even if you check this value.
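If you do want that advisory pre-check, it can be factored into a small helper. This is a minimal sketch; the function name contentLengthExceeds is my own, and the input is an array of raw header lines in the shape PHP's $http_response_header gives you:

```php
<?php
// Advisory pre-check only: scan raw response headers for a Content-Length
// value and compare it against a byte limit. The server can lie, so this
// never replaces counting the bytes as they arrive.
function contentLengthExceeds(array $headers, int $limit): bool
{
    foreach ($headers as $header) {
        if (preg_match('/^\s*content-length\s*:\s*(\d+)\s*$/i', $header, $m)) {
            return (int)$m[1] > $limit;
        }
    }
    return false; // no Content-Length header: we cannot pre-reject
}
```

A missing header deliberately returns false here: absence of Content-Length is not proof the body is small, so the download loop's byte counting remains the real enforcement.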
You can also do this using curl via a write callback (CURLOPT_WRITEFUNCTION). Note that modern PHP accepts any callable there, including a closure, so you can capture the running byte count by reference rather than resorting to a global or static variable. Even so, I find spreading the limit check across callback invocations less satisfactory than the straightforward stream loop above.
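For completeness, here is a sketch of that curl variant. The helper name makeLimitWriter is my own; the key detail is that returning fewer bytes than curl handed you makes curl abort the transfer:

```php
<?php
// Build a CURLOPT_WRITEFUNCTION callback that writes chunks to $lfp while
// tracking the total downloaded size in $downloaded (captured by reference,
// so no global or static state is needed).
function makeLimitWriter($lfp, int $limit, int &$downloaded): callable
{
    return function ($ch, string $data) use ($lfp, $limit, &$downloaded) {
        $downloaded += strlen($data);
        if ($downloaded > $limit) {
            return 0; // writing fewer bytes than received makes curl abort
        }
        return fwrite($lfp, $data);
    };
}

// Usage against a real URL (network code, illustration only):
// $downloaded = 0;
// $lfp = fopen('../temp/test.jpg', 'w');
// $ch  = curl_init($url);
// curl_setopt($ch, CURLOPT_WRITEFUNCTION, makeLimitWriter($lfp, $limit, $downloaded));
// $ok = curl_exec($ch); // false when the callback aborted the transfer
// curl_close($ch);
// fclose($lfp);
// if (!$ok && $downloaded > $limit) { unlink('../temp/test.jpg'); }
```

When curl_exec() returns false after the callback bailed out, you can distinguish "over the limit" from other transfer errors by checking the counter, just as the stream version does.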