You may not like this answer because they are probably looking for a clever solution, but sometimes it pays to stick to your guns... A hash table already satisfies the requirements - probably better than anything else will (albeit only in amortized constant time, and with somewhat different compromises to alternative solutions).
The requirement that's awkward is selecting a "random element": in a hash table, you would need to scan or probe for such an element.
For closed hashing / open addressing, the chance that any given bucket is occupied is size() / capacity(), but crucially this is typically kept in a constant multiplicative range by the hash-table implementation (for example, the table may be kept larger than its current contents by anywhere from 1.2x to ~10x, depending on performance/memory tuning). This means that on average we can expect to search 1.2 to 10 buckets - totally independent of the total size of the container; amortized O(1).
I can imagine two simple approaches (and a great many more fiddly ones):

- search linearly from a random bucket until you hit an occupied one
- try random buckets repeatedly until you find a populated one
Not a great solution, but it may still be a better overall compromise than the memory and performance overhead of maintaining a second array of indices at all times.
Tony Delroy Apr 18 '11 at 12:41