If you look at the notes in the Python dict source code, I think the relevant points are the following:
These operations (iteration and key listing) loop over every potential entry, i.e. every slot in the hash table, not just the occupied ones.
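For intuition, here is a deliberately simplified open-addressing table showing why iteration has to visit every slot. This is a toy model, not CPython's actual implementation (CPython uses a perturbed probe sequence, not plain linear probing, and the `ToyDict` name and its methods are made up for illustration):

```python
EMPTY = object()  # sentinel marking an unused slot

class ToyDict:
    """A toy open-addressing hash table; not CPython's real dict."""

    def __init__(self, n_slots=8):
        self.slots = [EMPTY] * n_slots

    def insert(self, key, value):
        # Linear probing: start at the key's hash and walk forward
        # until we find a free slot or the key itself.
        i = hash(key) % len(self.slots)
        while self.slots[i] is not EMPTY and self.slots[i][0] != key:
            i = (i + 1) % len(self.slots)
        self.slots[i] = (key, value)

    def __iter__(self):
        # The occupied slots are scattered across the table, so
        # iteration must scan all of them, empty ones included.
        for slot in self.slots:
            if slot is not EMPTY:
                yield slot[0]

d = ToyDict()
d.insert("a", 1)
d.insert("b", 2)
print(list(d))  # scans all 8 slots to yield the 2 keys
```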
How many potential entries will there be, as a function of the largest size (the largest number of keys ever stored in that dict)? Look at these two sections in the same document:
Maximum dictionary load in PyDict_SetItem. Currently set to 2/3.
Growth rate upon hitting maximum load. Currently set to *2.
This would suggest that the fill factor of the dict stays somewhere around 1/3~2/3: the table is resized once it becomes 2/3 full, and doubling its size drops the fill back to 1/3 (unless the growth rate is set to *4, in which case the range is 1/6~2/3). So basically you will be checking up to 3 (or 6, if *4) potential entries for every key.
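As a quick sanity check on that range, here is a small simulation of the policy as described in those notes (resize when the load would exceed 2/3, grow by ×2). The exact thresholds and growth factors in CPython have varied across versions, so this is a sketch of the described policy rather than of any particular release:

```python
MAX_LOAD = 2 / 3
GROWTH = 2              # set to 4 to see the 1/6~2/3 range instead

size, used = 8, 0
fills = []
for _ in range(100_000):
    if (used + 1) / size > MAX_LOAD:
        size *= GROWTH  # resize: the fill factor roughly halves
    used += 1
    fills.append(used / size)

fills = fills[100:]     # skip the warm-up while the table is tiny
print(f"fill factor stayed within [{min(fills):.3f}, {max(fills):.3f}]")
# prints roughly [1/3, 2/3] under this policy
```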
Of course, whether it's 3 entries per key or 1000, iteration is still O(n), but 3 seems like a pretty acceptable constant factor.
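If you want to see that constant factor in practice, a rough timing experiment along these lines works; the numbers are machine- and version-dependent, so treat them as illustrative only:

```python
import timeit

for n in (10_000, 100_000, 1_000_000):
    d = dict.fromkeys(range(n))
    # Average time to materialize all keys, normalized per key.
    # If iteration is O(n), this per-key cost stays roughly flat.
    t = timeit.timeit(lambda: list(d), number=10) / 10
    print(f"n={n:>9,}: {t / n * 1e9:6.1f} ns per key")
```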
By the way, here is the rest of the source and documentation, including the dict object notes:
http://svn.python.org/projects/python/trunk/Objects/