Statistical performance of purely functional maps and sets

Given the specification of a data structure, such as a purely functional map with known complexity bounds, you need to choose between several implementations. There is some folklore about how to choose the right one: for example, red-black trees are considered generally faster, while AVL trees are said to perform better under lookup-heavy workloads.

  • Is there a systematic exposition (a published article) of this knowledge as it relates to sets and maps? Ideally, I would like the statistical analysis to be based on real software. For example, it might conclude that there are N typical uses of maps and give the distribution of input probabilities for each.

  • Are there systematic benchmarks that exercise a map implementation and measure its performance under different input distributions?

  • Are there implementations that use adaptive algorithms to change the representation depending on actual use?
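Regarding the second bullet, one way such a benchmark could be structured is as a harness parameterized by an operation-mix distribution. The sketch below is purely illustrative: the callback names (`insert`, `lookup`, `delete`) and the copy-on-write dict used as the system under test are my own placeholders, not any real benchmarking library.

```python
import random
import time

def run_workload(empty_map, insert, lookup, delete, n_ops, mix, seed=0):
    """Drive a map implementation through a randomized workload.

    `mix` is a (p_insert, p_lookup, p_delete) probability triple; the
    three operation callbacks are whatever the implementation under
    test provides (hypothetical interface, not a real library API).
    Returns elapsed wall-clock time in seconds.
    """
    rng = random.Random(seed)
    m = empty_map
    keys = []
    start = time.perf_counter()
    for _ in range(n_ops):
        r = rng.random()
        if r < mix[0] or not keys:
            # Insert a fresh random key.
            k = rng.randrange(1 << 30)
            m = insert(m, k, k)
            keys.append(k)
        elif r < mix[0] + mix[1]:
            # Look up a key known to be present.
            lookup(m, rng.choice(keys))
        else:
            # Delete a random existing key.
            m = delete(m, keys.pop(rng.randrange(len(keys))))
    return time.perf_counter() - start

# Example: exercise plain dicts (copied on every update, to mimic a
# persistent structure) under a lookup-heavy mix -- the regime in which
# AVL trees are said to beat red-black trees.
elapsed = run_workload(
    {},
    insert=lambda m, k, v: {**m, k: v},
    lookup=lambda m, k: m.get(k),
    delete=lambda m, k: {x: y for x, y in m.items() if x != k},
    n_ops=1000,
    mix=(0.1, 0.85, 0.05),
)
```

Sweeping `mix` over a grid (insert-heavy, lookup-heavy, delete-heavy) and plugging in each candidate map would give exactly the per-distribution comparison the question asks about.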

data-structures statistics functional-programming avl-tree red-black-tree
Apr 05 '13 at 16:44
1 answer

These are mainly research topics, and the results are usually presented as conclusions, with the underlying statistics left unpublished. However, you can carry out a statistical analysis on your own data.

For benchmarks, you are better off reading the implementation details of each library.

The third part of the question is highly subjective, since the actual usage pattern can never be known at implementation time. However, languages such as Perl do their best to ship highly optimized implementations for each operation.
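To make the adaptive-representation idea concrete, here is a minimal sketch of a map that stores its entries in a flat association list while small and promotes itself to a hash table once it grows. This is loosely modeled on the small-map optimization found in some runtimes (e.g. Clojure's array-map to hash-map promotion); the class, its `THRESHOLD`, and its method names are illustrative, not any real language's internals.

```python
class AdaptiveMap:
    """Toy adaptive map: association list while small, hash table
    once the entry count exceeds THRESHOLD. Illustrative only."""

    THRESHOLD = 8

    def __init__(self):
        self._small = []   # list of (key, value) pairs
        self._big = None   # dict, once promoted

    def insert(self, key, value):
        if self._big is not None:
            self._big[key] = value
            return
        # Linear scan is cheap (and cache-friendly) at small sizes.
        for i, (k, _) in enumerate(self._small):
            if k == key:
                self._small[i] = (key, value)
                return
        self._small.append((key, value))
        if len(self._small) > self.THRESHOLD:
            # Promote: switch representation based on actual use.
            self._big = dict(self._small)
            self._small = []

    def lookup(self, key):
        if self._big is not None:
            return self._big.get(key)
        return next((v for k, v in self._small if k == key), None)

m = AdaptiveMap()
for i in range(20):
    m.insert(i, i * i)
```

A production version would also demote on shrinkage and pick the threshold empirically, which is precisely the kind of workload-dependent tuning the question is asking after.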

Perhaps this will help you: Chris Okasaki's Purely Functional Data Structures, http://www.cs.cmu.edu/~rwh/theses/okasaki.pdf

Nov 27 '13 at 17:22
