Java: ArrayList add() and remove() performance, implementation?

I read somewhere that ArrayList's add() and remove() operations run in "amortized constant" time. What does that mean?

In the implementation of add(item), I see that ArrayList uses an array buffer that is at most 3/2 the size of the list, and when it is full, System.arraycopy() is called, which should run in O(n) time, not O(1). Does System.arraycopy do something smarter than copying the elements one by one into the newly created array, so that it is actually O(1)?
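
To make the question concrete, the growth step I am describing looks roughly like this (my own simplified sketch, not the actual OpenJDK source; names and details are approximate):

    class GrowthSketch {
        private Object[] elementData = new Object[10];  // default initial capacity
        private int size = 0;

        void add(Object item) {
            if (size == elementData.length) {
                // New capacity is roughly 3/2 of the old one.
                int newCapacity = elementData.length + (elementData.length >> 1);
                Object[] bigger = new Object[newCapacity];
                // Bulk copy of the old buffer: fast in practice, but still O(n).
                System.arraycopy(elementData, 0, bigger, 0, size);
                elementData = bigger;
            }
            elementData[size++] = item;  // the common, O(1) case
        }

        int capacity() {
            return elementData.length;
        }

        public static void main(String[] args) {
            GrowthSketch list = new GrowthSketch();
            for (int i = 0; i < 100; i++) {
                list.add(i);
            }
            System.out.println("size 100, capacity " + list.capacity());
        }
    }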


Conclusion: add(item) runs in amortized constant time, but add(item, index) and remove(index) do not; they run in linear time (as explained in the answers).

+7
6 answers

I read somewhere that ArrayList's add() and remove() operations run in "amortized constant" time.

I do not think this is true for remove(), except under unusual conditions.

  • A remove(Object) call on a random element will, on average, call equals on half of the entries in the list and then copy the references for the other half.

  • A remove(int) call on a random element will, on average, copy the references for half of the elements.

The only case where remove(...) is O(1) on average (i.e., amortized) is when you use remove(int) to remove elements at some constant offset from the end of the list.
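
A quick way to see this (a rough, unscientific timing sketch, not a proper benchmark; results will vary with JIT warm-up and GC):

    import java.util.ArrayList;

    public class RemoveCost {
        public static void main(String[] args) {
            int n = 50_000;

            // Removing from the end: each remove(size - 1) shifts nothing.
            ArrayList<Integer> fromEnd = fill(n);
            long t0 = System.nanoTime();
            while (!fromEnd.isEmpty()) {
                fromEnd.remove(fromEnd.size() - 1);
            }
            long endNanos = System.nanoTime() - t0;

            // Removing from the front: each remove(0) shifts all remaining elements.
            ArrayList<Integer> fromFront = fill(n);
            long t1 = System.nanoTime();
            while (!fromFront.isEmpty()) {
                fromFront.remove(0);
            }
            long frontNanos = System.nanoTime() - t1;

            System.out.println("remove from end:   " + endNanos / 1_000_000 + " ms");
            System.out.println("remove from front: " + frontNanos / 1_000_000 + " ms");
        }

        private static ArrayList<Integer> fill(int n) {
            ArrayList<Integer> list = new ArrayList<>(n);
            for (int i = 0; i < n; i++) {
                list.add(i);
            }
            return list;
        }
    }

The remove(0) loop should be dramatically slower, because every call shifts all remaining references, whereas removing from the end shifts nothing.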

+6

"Amortized" means "averaged over the entire length of time." Yes, the copy array will be O (n). But this only happens when the list is full, which happens 1 time in n times.

+3

I think amortized constant time simply means that the time per operation is roughly constant when you perform tons of operations. So in one test, add a million items to the list, and in another test, add two million items. The latter should be roughly 2 times slower than the first, which is what amortized constant time per operation implies.
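
Something along these lines illustrates it (a rough check, sensitive to JIT warm-up and garbage collection, so not a real benchmark):

    import java.util.ArrayList;

    public class AddScaling {
        public static void main(String[] args) {
            System.out.println("1M adds: " + timeAdds(1_000_000) + " ms");
            System.out.println("2M adds: " + timeAdds(2_000_000) + " ms");
        }

        // Returns the wall-clock time in milliseconds to add n elements.
        private static long timeAdds(int n) {
            long start = System.nanoTime();
            ArrayList<Integer> list = new ArrayList<>();
            for (int i = 0; i < n; i++) {
                list.add(i);
            }
            return (System.nanoTime() - start) / 1_000_000;
        }
    }

If add() were not amortized O(1), the second figure would grow faster than linearly with the number of elements.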

0

Amortized constant time is different from constant time.

Basically, amortized O(1) means that over n operations, the average execution time per operation is O(1).

For array lists, it works out something like this:

(O(1) insert + O(1) insert + ... + O(n) array copy) / n operations = O(1)
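
As a concrete instance, assume a doubling policy starting from capacity 1 (not ArrayList's actual 3/2 factor, but it keeps the arithmetic simple):

    copy work over n inserts  = 1 + 2 + 4 + ... + n/2 < n
    total work                = n * O(1) plain inserts + O(n) total copying = O(n)
    amortized cost per insert = O(n) / n = O(1)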

0

A plain-language description of what "amortized constant" means is given in the thread Constant Amortized Time.

0

Amortized time explained in simple terms:

If you perform an operation, say, a million times, you don't really care about the worst case or the best case of that single operation; what you care about is how much time is taken in total when you repeat the operation a million times.

So it doesn't matter if the operation is very slow once in a while, as long as "once in a while" is rare enough for the slowness to be diluted away. Essentially, amortized time means "the average time taken per operation, if you do many operations." Amortized time does not have to be constant; you can have linear or logarithmic amortized time, or whatever else.

Let's take the example of a dynamic array, to which you repeatedly add new elements. Normally adding an element takes constant time (i.e. O(1)). But each time the array is full, you allocate twice as much space, copy your data into the new region, and free the old space. Assuming allocations and frees run in constant time, this expansion process takes O(n) time, where n is the current size of the array.

So each expansion takes about twice as long as the previous one, but you also waited twice as long before having to do it! The cost of each expansion can thus be "spread out" over the inserts in between. This means that, in the long run, the total time spent adding m elements to the array is O(m), and so the amortized time per insertion is O(1).
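
A minimal sketch of such a doubling array in Java (purely illustrative; java.util.ArrayList is written differently and grows by a factor of about 3/2 rather than 2):

    import java.util.Arrays;

    public class DynArray {
        private Object[] data = new Object[1];
        private int size = 0;

        public void add(Object item) {
            if (size == data.length) {
                // Full: allocate twice the space and copy everything over, O(size).
                data = Arrays.copyOf(data, data.length * 2);
            }
            data[size++] = item;  // the common case: O(1)
        }

        public Object get(int index) {
            if (index < 0 || index >= size) {
                throw new IndexOutOfBoundsException("index " + index + ", size " + size);
            }
            return data[index];
        }

        public int size() {
            return size;
        }

        public static void main(String[] args) {
            DynArray a = new DynArray();
            for (int i = 0; i < 1_000; i++) {
                a.add(i);
            }
            System.out.println(a.size() + " elements, last = " + a.get(a.size() - 1));
        }
    }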

0
