The effect of memory usage on algorithm complexity

I am reading Nicolai Josuttis's book on the C++ STL algorithms. For several algorithms, such as stable_sort(), he mentions that the complexity is n * log(n) if enough extra memory is available, and n * log(n) * log(n) otherwise. My questions: how does memory availability affect the complexity? And how does the STL detect such a situation?

1 answer

Looking at the GCC STL, you will find inplace_merge in stl_algo.h. It is a traditional merge-sort merge step, which is O(N) but uses a buffer the same size as the input. That buffer is allocated through _Temporary_buffer, from stl_tempbuf.h, which calls get_temporary_buffer and ultimately operator new. If the allocation throws, the exception is caught and the buffer pointer is null — that is the "insufficient memory" case. The merge then falls back to __merge_without_buffer, which is O(N log N). Since the merge-sort recursion depth is O(log N), you get O(N log N) for the "traditional" (buffered) merge and O(N log N log N) for the unbuffered version.

