I have an interesting problem. I came across a function that takes a long time to calculate a value based on some index. Call it takes_a_long_time(index). The values returned by this function are guaranteed to have a global minimum, but there is no guarantee that the index at which it occurs will be anywhere near zero.
Since takes_a_long_time accepts arbitrarily large positive integers as its index, there are unusual constraints on how to begin a binary search. I need a way to establish a finite interval to search for the exact minimum. My first thought was to check exponentially larger intervals, starting from zero. Something like:
def find_interval_with_minimum():
    start = 0
    end = 1
    interval_size = 1
    minimum_in_interval = check_minimum_in(start, end)
    while not minimum_in_interval:
        interval_size = interval_size * 2
        start = end
        end = start + interval_size
        minimum_in_interval = check_minimum_in(start, end)
    return start, end
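check_minimum_in is not shown above; a minimal sketch of what I have in mind is to sample the interval and see whether the smallest value falls strictly inside it. The name, the samples parameter, and the sampling strategy are just placeholders, and it assumes takes_a_long_time is already in scope:

def check_minimum_in(start, end, samples=8):
    # Hypothetical helper: sample a handful of indices in [start, end] and
    # report whether the smallest sampled value lies strictly inside the
    # interval, i.e. whether the minimum appears to be bracketed.
    step = max((end - start) // samples, 1)
    indices = list(range(start, end + 1, step))
    values = [takes_a_long_time(i) for i in indices]
    lowest = values.index(min(values))
    return 0 < lowest < len(values) - 1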
This would seem to work fine, except there is an additional detail that really throws things off: takes_a_long_time takes exponentially longer to compute a value as the index approaches zero. Since check_minimum_in requires several calls to takes_a_long_time, I would rather the search not start at zero.
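One mitigation, regardless of where the search starts, would be to memoize the expensive calls so that overlapping or repeated interval checks never recompute the same index. A minimal sketch, assuming takes_a_long_time is deterministic:

import functools

@functools.lru_cache(maxsize=None)
def takes_a_long_time_cached(index):
    # Cache results so repeated interval checks reuse earlier computations
    # instead of calling the expensive function again for the same index.
    return takes_a_long_time(index)

That helps with repeated work, but it does nothing about the cost of ever touching the low indices in the first place.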
So my question is: given that the minimum can be anywhere in [0, +infinity), is there a reasonable way to run this search pattern "backwards"? Or is there a good heuristic for avoiding checks at low indices when they are not necessary?
I would prefer a language-agnostic solution. However, I am writing this in Python, so if there is a Python-specific approach, I would accept that as well.
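To make "backwards" concrete, here is a rough sketch of what I am imagining: start from some large guess where calls are cheap and only move toward zero when forced to. The name upper_guess and the halving schedule are placeholders; choosing that starting bound is exactly the part I do not know how to do:

def find_interval_from_above(upper_guess):
    # Hypothetical "backwards" scan: begin high, where takes_a_long_time is
    # cheap, and halve toward zero only while the minimum is not bracketed.
    # Assumes the minimum actually lies below upper_guess, which is the
    # assumption I cannot justify.
    start, end = upper_guess // 2, upper_guess
    while not check_minimum_in(start, end):
        if start == 0:
            return 0, end  # nothing left below; minimum must be in [0, end]
        start, end = start // 2, start
    return start, end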