As for which method has the lowest time complexity, it depends heavily on what kinds of queries you plan to make.
If you plan to query high indices (for example, the 36th largest item in a list of 38 items), your nth_largest(li, n) function will have close to O(n^2) time complexity, since it has to call max(), which is O(n), many times. It ends up resembling selection sort, except that it uses max() instead of min().
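Here is a minimal sketch of that repeated-max approach, assuming nth_largest(li, n) is implemented roughly as described (the exact body of the original function isn't shown in the question, so this is an illustrative reconstruction):

```python
def nth_largest(li, n):
    """Return the nth largest element (1-indexed) by repeatedly
    removing the current max: O(len(li)) per removal, so roughly
    O(n * len(li)) overall -- near O(n^2) for high indices."""
    items = list(li)                 # copy so the caller's list is untouched
    for _ in range(n - 1):
        items.remove(max(items))     # max() is O(n); remove() is also O(n)
    return max(items)

print(nth_largest([5, 1, 9, 3, 7], 2))  # 7
```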
On the other hand, if you only ever query small indices, your function can be efficient, because it calls the O(n) max() only a few times, so the total time stays close to O(n). However, a max-heap can be built in linear O(n) time, and you would be better off just using that. Once you have paid the one-time cost of building the heap, peeking at the max is O(1) (and extracting it is O(log n)), which may be the best long-term solution for you.
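A sketch of the heap approach using the standard library: Python's heapq module only provides a min-heap, so one common trick (an assumption here, not something from the question) is to store negated values to simulate a max-heap:

```python
import heapq

li = [5, 1, 9, 3, 7]
heap = [-x for x in li]      # negate values to simulate a max-heap
heapq.heapify(heap)          # building the heap is O(n)

print(-heap[0])              # peeking the current max is O(1) -> 9

# Extracting the nth largest still costs n-1 pops at O(log n) each:
n = 2
for _ in range(n - 1):
    heapq.heappop(heap)
print(-heap[0])              # 2nd largest -> 7
```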
I believe the most scalable approach (in terms of being able to query the nth largest element for any n) is to sort the list once in O(n log n) time with the built-in sort function, and then answer each query in O(1) from the sorted list. This is not the most memory-efficient method, of course, but in terms of time complexity it is very effective.
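A short sketch of that sort-once strategy (the helper name nth_largest is reused here purely for illustration):

```python
li = [5, 1, 9, 3, 7]
sorted_desc = sorted(li, reverse=True)   # O(n log n) once; O(n) extra memory

def nth_largest(n):
    return sorted_desc[n - 1]            # each query is a single O(1) index

print(nth_largest(1), nth_largest(3))    # 9 5
```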
Shashank