I wouldn't worry about details this low-level when developing in PHP. Most likely you have hit the capacity limit of the hash table backing $_SERVER, so PHP has to allocate a new hash table at double the current size. Since PHP arrays are ordered hash tables, each element carries quite a lot of overhead, even before counting the unfilled slots.
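You can observe the doubling behavior directly: memory consumption of a growing array jumps in steps rather than per insertion, because PHP reallocates the backing hash table at power-of-two capacities. A minimal sketch (the loop bound and output format are illustrative):

```php
<?php
// Watch a PHP array grow: memory jumps in large steps when the
// backing hash table is reallocated, not on every insertion.
$a = [];
$prev = memory_get_usage();
for ($i = 0; $i < 100000; $i++) {
    $a[$i] = $i;
    $now = memory_get_usage();
    if ($now !== $prev) {
        printf("count=%d, memory grew by %d bytes\n", $i + 1, $now - $prev);
        $prev = $now;
    }
}
```

Run this from the CLI and you should see only a handful of growth events for 100,000 insertions, each roughly doubling the table's footprint.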
If you are interested in the mechanics of this process, they are in zend_hash.c, line 418.
To test this, take a var_dump() of your $_SERVER and paste it into a script. Don't just benchmark a dummy hash table, for two reasons: (1) PHP actually uses different code paths for its "dynamic arrays" and its hash tables (it converts between them for you), and (2) the real cost may be copying many entries into the new hash table, for thread-safety reasons, rather than just moving pointers.
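One way to set that up is to export a real $_SERVER to a file once, then replay it in a benchmark so you exercise the same keys and values. A sketch, assuming a CLI run (the file name, loop count, and timing format are illustrative, not from the original answer):

```php
<?php
// Step 1: capture a real $_SERVER as a reusable array literal.
file_put_contents(
    'server_dump.php',
    "<?php\nreturn " . var_export($_SERVER, true) . ";\n"
);

// Step 2 (in a later benchmark script): replay the captured data.
$server = require 'server_dump.php';
$start = microtime(true);
for ($i = 0; $i < 10000; $i++) {
    $copy = $server;      // cheap: copy-on-write bookkeeping only
    $copy['extra'] = $i;  // forces a real copy of the hash table
}
printf("%.4f seconds\n", microtime(true) - $start);
```

The write to `$copy` matters: without it, PHP's copy-on-write would never duplicate the hash table and the loop would measure almost nothing.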