Time complexity for iterating through a list of arrays

I have a list of arrays that I iterate over. In each iteration, I call get() to get one element, and if this element passes some condition, it is added to the new list of arrays using add()

    List<Item> items = new ArrayList<Item>();
    List<Item> lessItems = new ArrayList<Item>();
    for (int index = 0; index < items.size(); index++) {
        Item toCheck = items.get(index);
        if (toCheck meets some condition) {
            lessItems.add(toCheck);
        }
    }

I'm not sure what the time complexity is. I call get() for every element, so that is O(n). I also call add() for potentially every element, so that is another O(n). Not too sure beyond that.

3 answers

All the other answers are incorrect.

  • Your loop iterating over the list of items: O(n)
  • Inserting each element at the end of the lessItems list: in a plain fixed-size array this would be O(n) per insert, as others have said. But Java implements ArrayList with a dynamically resized (growable) array. This means that inserting at the end costs only amortized O(1), which you can loosely call O(1).

So the complexity of your code is O(n) * amortized O(1); in short, O(n).
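To see why appends are cheap on average, note that resizes are rare: with 1.5x growth, the number of resize events is only logarithmic in n. A minimal sketch (the class and method names are mine; the initial capacity of 10 and the 1.5x growth factor mirror ArrayList's implementation):

```java
public class GrowthCount {

    // Simulates appending n elements one at a time and counts
    // how many resize (array copy) events occur along the way.
    static int countResizes(int n) {
        int capacity = 10; // ArrayList's default initial capacity
        int size = 0;
        int resizes = 0;
        for (int i = 0; i < n; i++) {
            if (size == capacity) {
                capacity = capacity + (capacity >> 1); // grow by ~1.5x
                resizes++;
            }
            size++;
        }
        return resizes;
    }

    public static void main(String[] args) {
        // A million appends trigger only a few dozen resizes.
        System.out.println(GrowthCount.countResizes(1_000_000));
    }
}
```

Because the capacity grows geometrically, the resize count grows like log base 1.5 of n, which is why the copying work does not dominate.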

Further reading:

dynamic array

Additional note 1:

If inserting at the end of the array were O(n), the overall complexity would be O(n^2), not O(2*n) as the other answers say, since the total cost of the insertions would be 1 + 2 + 3 + ... + n = n*(n+1)/2.
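A quick way to check this arithmetic is to count the element copies performed under each growth policy. The sketch below (helper names are my own) simulates a grow-by-one array versus a grow-by-half array; for n = 10,000 the first performs n*(n-1)/2 (roughly 50 million) copies, while the second stays in the tens of thousands:

```java
public class CopyCost {

    // Total element copies if the backing array grows by exactly 1
    // per append: every append copies all existing elements.
    static long copiesGrowByOne(int n) {
        long copies = 0;
        for (int size = 0; size < n; size++) {
            copies += size; // copy the current contents into the new array
        }
        return copies; // = n*(n-1)/2, i.e. quadratic
    }

    // Total element copies if the backing array grows by ~1.5x,
    // as ArrayList does: copies happen only at resize events.
    static long copiesGrowByHalf(int n) {
        long copies = 0;
        int capacity = 1;
        int size = 0;
        for (int i = 0; i < n; i++) {
            if (size == capacity) {
                copies += size; // copy contents into the larger array
                capacity += Math.max(1, capacity >> 1);
            }
            size++;
        }
        return copies; // proportional to n, not n^2
    }

    public static void main(String[] args) {
        System.out.println(CopyCost.copiesGrowByOne(10_000));
        System.out.println(CopyCost.copiesGrowByHalf(10_000));
    }
}
```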

Additional note 2:

From the official documentation:

The size, isEmpty, get, set, iterator, and listIterator operations run in constant time. The add operation runs in amortized constant time, that is, adding n elements requires O(n) time. All of the other operations run in linear time (roughly speaking). The constant factor is low compared to that for the LinkedList implementation.

Additional note 3:

Here is the logic of the grow method, taken from the official Java source code (java.util.ArrayList):

    private void ensureExplicitCapacity(int minCapacity) {
        modCount++;
        // overflow-conscious code
        if (minCapacity - elementData.length > 0)
            grow(minCapacity);
    }

    private void grow(int minCapacity) {
        // overflow-conscious code
        int oldCapacity = elementData.length;
        int newCapacity = oldCapacity + (oldCapacity >> 1);
        if (newCapacity - minCapacity < 0)
            newCapacity = minCapacity;
        if (newCapacity - MAX_ARRAY_SIZE > 0)
            newCapacity = hugeCapacity(minCapacity);
        // minCapacity is usually close to size, so this is a win:
        elementData = Arrays.copyOf(elementData, newCapacity);
    }

As the source code shows, when an element is added and the backing array is already at capacity, the array is grown. The new capacity of the grown array is:

 int newCapacity = oldCapacity + (oldCapacity >> 1); 

This 1.5x growth is the trick that makes insertion amortized O(1).
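You can confirm the amortized bound by dividing the total number of element copies by the number of appends: with a 1.5x growth factor the ratio stays below a small constant (under 3) no matter how large n gets. A sketch, with names of my own choosing:

```java
public class Amortized {

    // Average number of element copies charged to each append when the
    // backing array starts at capacity 10 and grows by ~1.5x (as ArrayList does).
    static double amortizedCopies(int n) {
        long copies = 0;
        int capacity = 10;
        int size = 0;
        for (int i = 0; i < n; i++) {
            if (size == capacity) {
                copies += size;            // resize: copy all current elements
                capacity += capacity >> 1; // grow by ~1.5x
            }
            size++;
        }
        return (double) copies / n; // bounded by a constant as n grows
    }

    public static void main(String[] args) {
        // Stays under 3 copies per append even for a million appends.
        System.out.println(Amortized.amortizedCopies(1_000_000));
    }
}
```

The copy totals at each resize form a geometric series, so their sum is proportional to n; spreading that cost over the n appends gives the "amortized O(1)" per operation.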


You are doing one iteration, which is O(n).

You also add elements to an ArrayList, which is O(1) (amortized).

Getting an element by index is also O(1).

So you do O(n) iterations of O(1) operations, which comes out to O(n).


Big-O and similar notations are asymptotic estimates of time complexity. They discard constant factors and describe how the running time grows with the size of the input.

So 2*n, 3*n, etc. are all represented as O(n), and 2*n*log(n), 3*n*log(n), etc. are all represented as O(n*log(n)).

Since the add() operation inserts only one element at a time here, its cost is roughly some small constant k, which gives a total running time of roughly j*(n + k) for some constants j and k; in other words j*n, or O(n).

In this case, and in all such cases, any constant k multiplied by n is represented as O(n), which means the running time varies linearly with the size of the input ArrayList.
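To make this concrete for the original filter loop: count the elementary get() and add() calls it performs. The total lands between n and 2n, i.e. a constant times n. A sketch using a hypothetical even-number condition in place of the asker's unspecified one:

```java
import java.util.ArrayList;
import java.util.List;

public class FilterOps {

    // Builds a list of n integers, filters it as in the question,
    // and counts the get()/add() calls performed along the way.
    static long countOps(int n) {
        List<Integer> items = new ArrayList<>();
        for (int i = 0; i < n; i++) {
            items.add(i);
        }
        List<Integer> lessItems = new ArrayList<>();
        long ops = 0;
        for (int index = 0; index < items.size(); index++) {
            Integer toCheck = items.get(index);
            ops++; // exactly one get() per element
            if (toCheck % 2 == 0) { // stand-in for "meets some condition"
                lessItems.add(toCheck);
                ops++; // at most one add() per element
            }
        }
        return ops; // between n and 2n: a constant times n, i.e. O(n)
    }

    public static void main(String[] args) {
        System.out.println(FilterOps.countOps(1000));
    }
}
```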

