Why does the $_SERVER superglobal array take 13x the memory?

When creating a new array with a single element using a plain PHP array, the following code uses 360 bytes in PHP 5.3, both with and without APC. Even adding an item to $_GET uses only 304 bytes. However, adding an extra element to $_SERVER makes the same code use 4,896 bytes!

```php
$mem = memory_get_usage();
// Uncomment one line at a time to compare the three cases:
//$array = array('HTTP_X_REQUESTED_WITH' => NULL);  // plain array: ~360 bytes
$_SERVER['HTTP_X_REQUESTED_WITH'] = NULL;           // $_SERVER: ~4,896 bytes
//$_GET['HTTP_X_REQUESTED_WITH'] = NULL;            // $_GET: ~304 bytes
print (memory_get_usage() - $mem).' bytes<br>';
print memory_get_usage().' bytes (process)<br>';
print memory_get_peak_usage(TRUE).' bytes (process peak)<br>';
print (memory_get_usage() - $mem).' bytes<br>';
```

What on earth makes the $_SERVER array use so much extra memory?

2 answers

Mike explains how PHP dynamically allocates the internal hash table behind every array. Doubling the capacity on each resize is what keeps insertions cheap (amortized constant time) for dynamically grown arrays.
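
To see what that doubling looks like from userland, here is a minimal sketch (my illustration, not part of the answer) that prints the memory delta for each insert into a plain array; most inserts cost one bucket, and the occasional larger jump marks a reallocation of the table:

```php
<?php
// Minimal sketch: per-insert memory deltas on a plain array (PHP 5.x).
// Small, steady increments are individual bucket allocations; the larger
// jumps are the hash table being reallocated at roughly doubled capacity.
$array = array();
$mem = memory_get_usage();
for ($i = 0; $i < 64; $i++) {
    $array[$i] = NULL;
    $now = memory_get_usage();
    echo 'element ' . $i . ': +' . ($now - $mem) . " bytes\n";
    $mem = $now;
}
```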

However, the superglobals $_SERVER, $_REQUEST, $_POST, $_GET and $_ENV have a fixed size once the script starts. They are also usually not modified (and I would advise against it).

Most likely they are created with hash tables exactly large enough to hold their initial contents. Any addition then triggers the dynamic expansion algorithm, which reallocates the table and rehashes and copies every entry into it.
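
If that is right, the cost should be a one-off: the first insertion pays for the resize and rehash, and a second insertion lands in the spare capacity of the doubled table. A hedged experiment (the key names are made up; run it in a fresh request for clean numbers):

```php
<?php
// Hypothetical check: only the first insert into $_SERVER should trigger
// the expensive resize-and-rehash; the second should fit in the new table.
$mem = memory_get_usage();
$_SERVER['X_FIRST_EXTRA'] = NULL;     // may force the table to double
echo 'first insert:  ' . (memory_get_usage() - $mem) . " bytes\n";

$mem = memory_get_usage();
$_SERVER['X_SECOND_EXTRA'] = NULL;    // should reuse the spare capacity
echo 'second insert: ' . (memory_get_usage() - $mem) . " bytes\n";
```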


I wouldn't worry about low-level details like this when developing in PHP. Most likely you are hitting $_SERVER's capacity limit, and PHP has to create a new hash table at double the size of the current one. Since PHP arrays are ordered hash tables backed by linked lists, there is quite a lot of overhead per element, and even the unfilled slots cost memory.
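
As a rough sanity check on that theory (my addition, and it leans on the assumption that the table's capacity is the smallest power of two that holds the current entries), you can compare count($_SERVER) against the next power of two; a count at or just below that boundary means one more insert forces a doubling:

```php
<?php
// Assumption: capacity = smallest power of two >= number of entries, so an
// insert that crosses that boundary triggers the resize described above.
$n = count($_SERVER);
$capacity = (int) pow(2, ceil(log($n, 2)));
echo $n . ' entries, assumed capacity ' . $capacity . "\n";
```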

If you are interested in the mechanics of this process, see zend_hash.c, line 418.

To test this, take a var_dump of your $_SERVER and paste it into a script as a literal array. Just remember that a dummy hash table is not a perfect stand-in, for a couple of reasons: (1) PHP actually has different code paths for its "dynamic arrays" and its hash tables (it converts between them for you), and (2) part of the cost may be copying many strings into the new hash table to avoid thread-safety or pointer problems.
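
A sketch of that test (the stub entries below are placeholders; in a real run, paste the output of var_export($_SERVER), which is valid PHP source, in place of the stub array):

```php
<?php
// Placeholder stand-in for $_SERVER; paste your var_export($_SERVER) output
// here so the dummy table matches yours in size, keys, and string lengths.
$fake_server = array(
    'HTTP_HOST'      => 'example.com',
    'REQUEST_METHOD' => 'GET',
    // ... rest of your var_export($_SERVER) output ...
);

$mem = memory_get_usage();
$fake_server['HTTP_X_REQUESTED_WITH'] = NULL;
echo (memory_get_usage() - $mem) . " bytes for the same insert on a plain array\n";
```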

