Given the specification of a data structure, such as a purely functional map with known complexity bounds, you need to choose between several implementations. There is some folklore about how to choose the right one; for example, red-black trees are considered generally faster, while AVL trees perform better on lookup-heavy workloads.
Is there a systematic exposition (a published article) of this knowledge as it applies to sets and maps? Ideally, I would like the statistical analysis to be based on real software. For example, one might conclude that there are N typical usage patterns for maps, and give the distribution of input probabilities for each.
Are there systematic benchmarks that exercise a map implementation and measure its performance under different input distributions? (A rough sketch of what I have in mind is below.)
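To make this concrete, here is a minimal sketch of the kind of benchmark I mean, written in Haskell against Data.Map with the criterion library. The two input distributions here are invented purely for illustration, not taken from any real survey:

```haskell
import Criterion.Main
import qualified Data.Map.Strict as Map
import System.Random (mkStdGen, randomRs)

-- Uniform keys vs. a crude "skewed" workload: the same number of
-- operations, but restricted to a much smaller key range, so lookups
-- repeatedly hit few distinct keys.
uniformKeys, skewedKeys :: [Int]
uniformKeys = take 10000 (randomRs (1, 10000) (mkStdGen 42))
skewedKeys  = map (`mod` 100) uniformKeys

-- Build a map from the keys, then look every key up again,
-- counting the hits so the work is actually forced.
buildThenLookup :: [Int] -> Int
buildThenLookup ks =
  let m = Map.fromList [(k, k) | k <- ks]
  in  length [v | k <- ks, Just v <- [Map.lookup k m]]

main :: IO ()
main = defaultMain
  [ bench "uniform" $ whnf buildThenLookup uniformKeys
  , bench "skewed"  $ whnf buildThenLookup skewedKeys
  ]
```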
Are there implementations that use adaptive algorithms, changing the representation depending on actual usage?
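To illustrate what I mean by "adaptive", here is a toy sketch: a map that starts life as an association list (cheap for a handful of keys) and promotes itself to a balanced tree once it crosses a size threshold. The type, the function names, and the threshold are all invented for the example:

```haskell
import qualified Data.Map.Strict as Map

data Adaptive k v
  = Small [(k, v)]       -- association list while the map is tiny
  | Big (Map.Map k v)    -- balanced tree once it has grown

-- Arbitrary crossover point, chosen only for illustration.
threshold :: Int
threshold = 8

insert :: Ord k => k -> v -> Adaptive k v -> Adaptive k v
insert k v (Small kvs)
  | length kvs >= threshold = Big (Map.insert k v (Map.fromList kvs))
  | otherwise               = Small ((k, v) : filter ((/= k) . fst) kvs)
insert k v (Big m)          = Big (Map.insert k v m)

lookup' :: Ord k => k -> Adaptive k v -> Maybe v
lookup' k (Small kvs) = Prelude.lookup k kvs
lookup' k (Big m)     = Map.lookup k m
```

A real adaptive structure would presumably base the switch on observed operation mix rather than size alone, which is exactly what I am asking about.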
data-structures statistics functional-programming avl-tree red-black-tree
t0yv0 Apr 05 '13 at 16:44