How to calculate binary search complexity

I heard someone say that since binary search halves the input at each step, it is an O(log n) algorithm. Since I am not from a mathematics background, I cannot relate to it. Can someone explain this in a bit more detail? Does it have something to do with logarithmic series?

+128
algorithm time-complexity search binary-search
Nov 18 '11 at 15:50
13 answers

Here's a more mathematical way to see it, though not really complicated. IMO it is much clearer than the informal explanations:

The question is: how many times can you divide N by 2 until you reach 1? Essentially this is what binary search does (halve the candidate elements) until it finds the item. As a formula:

1 = N / 2^x

Multiply both sides by 2^x:

2^x = N

Now take log2:

log2(2^x) = log2(N)
x * log2(2) = log2(N)
x * 1 = log2(N)

This means you can divide N by 2 a total of log2(N) times before everything is divided away, which means you must perform at most log2(N) binary-search steps ("halve the elements") until you find your item.

+351
Nov 18 '11 at 16:10

For binary search, T(N) = T(N/2) + O(1)  // recurrence relation

Apply the Master theorem for computing the running time of recurrence relations of the form T(N) = a·T(N/b) + f(N).

Here a = 1, b = 2 => log (base b) of a = log (base 2) of 1 = 0

and f(N) = O(1) = N^c · log^k(N)  // with k = 0 and c = 0 = log (base b) of a

Since c = log (base b) of a, this is case 2 of the Master theorem, so T(N) = O(N^c · log^(k+1)(N)) = O(log N)

Source: http://en.wikipedia.org/wiki/Master_theorem

+19
Feb 26 '13 at 23:40

T(n) = T(n/2) + 1

T(n/2) = T(n/4) + 1

Substitute the value of T(n/2) into the first line, so that T(n) = T(n/4) + 1 + 1. Repeating this k times:

T(n) = T(n/2^k) + 1 + 1 + ... + 1  (k ones)

= T(2^k / 2^k) + k  (taking 2^k = n)

= T(1) + k

Since we took 2^k = n,

k = log2(n)

So the time complexity is O(log n)

+15
Sep 20 '16 at 21:31

It doesn't just halve the search time; that alone would not make it log(n). It reduces the search logarithmically. Think about it for a moment. If you had 128 entries in a table and had to search linearly for your value, it would take about 64 probes on average to find it. That is n/2, i.e. linear time. With binary search, you eliminate half of the possible entries on each iteration, so in the worst case it takes only about 7 comparisons to find your value (log base 2 of 128 is 7, since 2 to the 7th power is 128). That is the power of binary search.

+10
Nov 18 '11 at 15:56

The time complexity of the binary search algorithm belongs to the class O(log n). This is Big O notation. The way you should interpret it is that the asymptotic growth of the function's running time, for an input of size n, will not exceed log n.

This is just formal mathematical jargon used to be able to prove statements, etc.; it has a very simple explanation. When n grows very large, the function log n describes how the running time grows. The size of the "input set", n, is simply the length of the list.

Simply put, the reason binary search is O(log n) is that it halves the input set on each iteration. It is easier to think about it in the reverse direction: in x iterations, how long a list can binary search examine at most? The answer is 2^x. Conversely, this shows that binary search requires at most log2 n iterations for a list of length n.

If you wonder why it is O(log n) rather than O(log2 n): simply put again, constant factors are dropped in Big O notation.

+5
Nov 18 '11

Log2(n) is the maximum number of probes needed to find something with binary search. The average case takes about log2(n) − 1 probes. Here is additional information:

http://en.wikipedia.org/wiki/Binary_search#Performance

+3
Nov 18 '11 at 3:55 a.m.

Here is the wikipedia entry

If you look at the simple iterative approach, you just eliminate half of the elements to be searched until you find the element you want.

Here is an explanation of how we come up with the formula.

Say that at first you have N elements; your first probe is at index ⌊N/2⌋, where N is the sum of the lower bound and the upper bound. Initially N = (L + H), where L is the first index (0) and H is the last index of the list you are searching. If you are lucky, the element you are trying to find is right in the middle. [e.g. you search for 18 in the list {16, 17, 18, 19, 20}; you calculate ⌊(0 + 4)/2⌋ = 2, where 0 is the lower bound (L, the index of the first element of the array) and 4 is the upper bound (H, the index of the last element of the array). In this case L = 0 and H = 4. Index 2 holds the element 18. Bingo! You have found it.]

If instead the array were {15,16,17,18,19} but you were still looking for 18, you would be out of luck. You would again probe ⌊N/2⌋ first (that is ⌊(0 + 4)/2⌋ = 2), and find the element 17 at index 2, which is not the number you are looking for. But now you know that you do not need to look at at least half of the array on your next attempt. Essentially, each time you fail to find the item, you discard the half of the list that cannot contain it and search only the other half.

In the worst case you would do

⌊N/2⌋ + ⌊(N/2)/2⌋ + ⌊((N/2)/2)/2⌋ + ...
i.e.:
N/2^1 + N/2^2 + N/2^3 + ... + N/2^x ...

until ... you have finished the search, when the element you are trying to find is at the very end of the list.

That shows the worst case, when you reach N/2^x where x is such that 2^x = N.

In other cases you stop at N/2^x where x is such that 2^x < N. The minimum value of x can be 1, which is the best case.

Now, since mathematically the worst case is when

2^x = N
=> log2(2^x) = log2(N)
=> x * log2(2) = log2(N)
=> x * 1 = log2(N)
=> more formally, ⌊log2(N) + 1⌋

+3
Apr 14 '15 at 18:22

Binary search works by repeatedly dividing the problem in half, something like this (details omitted):

Example: search for 3 in [4,1,3,8,5]

  • Sort the list: [1,3,4,5,8]
  • Look at the middle element (4):
    • If it is what you are looking for, stop
    • If it is greater than the target, search the first half
    • If it is less, search the second half
  • Repeat step 2 with the new list [1, 3], find 3, and stop

It is a bisection search: you divide the problem in 2 at each step.

Only log2(n) steps are needed to find the correct value.

I would recommend Introduction to Algorithms if you want to learn more about algorithmic complexity.

+1
Nov 18 '11 at 16:03

Since we cut the list in half each time, we only need to know in how many steps we reach 1 as we keep dividing the list by two. In the calculation below, x denotes the number of times we divide the list until we get one element (in the worst case).

1 = N / 2^x

2^x = N

Taking log2:

log2(2^x) = log2(N)

x * log2(2) = log2(N)

x = log2(N)

+1
Jan 31 '17 at 11:26

T(N) = T(N/2) + 1

T(N) = T(N/2) + 1 = (T(N/4) + 1) + 1

...

T(N) = T(N/N) + (1 + 1 + ... + 1) = T(1) + log N (base-2 log) = 1 + log N

Thus, the time complexity of binary search is O(log N)

+1
Sep 28 '18 at 1:14
ok, see this:

 for (i = 0; i < n; n = n / 2) {
     i++;
 }

1. Suppose the loop terminates at i = k, i.e. the loop executes k times.
2. At each iteration n is divided in half:
   a. n = n/2 ... at i = 1
   b. n = (n/2)/2 = n/2^2 ... at i = 2
   c. n = ((n/2)/2)/2 = n/2^3 ... at i = 3
   d. n = (((n/2)/2)/2)/2 = n/2^4 ... at i = 4

So at i = k we have divided n by 2 a total of k times and n = 1, i.e. 1 = n/2^k, so n = 2^k and k = log(n) // base 2
0
Sep 10 '18 at 17:25

Let me simplify the example for all of you.

For simplicity, suppose the array has 32 elements in sorted order, in which we search for an element using binary search.

1 2 3 4 5 6 ... 32

Suppose we are looking for 32. After the first iteration we are left with

17 18 19 20 .... 32

after the second iteration we are left with

25 26 27 28 .... 32

after the third iteration we are left with

29 30 31 32

after the fourth iteration we are left with

31 32

and in the fifth iteration we find the value 32.

So, converting this to a mathematical equation, we get

32 × (1/2^5) = 1

=> n × 2^(-k) = 1

=> 2^k = n

=> k · log2(2) = log2(n)

=> k = log2(n)

Hence the proof.

0
May 11 '19 at 12:59

Suppose the binary search terminates after k iterations. At each iteration the array is divided in half. So, let the length of the array at any iteration be n. At iteration 1,

 Length of array = n 

At iteration 2,

 Length of array = n/2 

At iteration 3,

 Length of array = (n/2)/2 = n/2^2 

Therefore, after iteration k,

 Length of array = n/2^k 

We also know that after k divisions the length of the array becomes 1, therefore

 Length of array = n/2^k = 1 => n = 2^k 

Applying the log function to both sides:

 log2(n) = log2(2^k) => log2(n) = k * log2(2) 

Since log_a(a) = 1,

 k = log2(n) 

Therefore, the time complexity of binary search is

 O(log2(n)) 
0
Jun 19 '19 at 3:57


