Amortized time, explained in simple words:
If you perform an operation, say, a million times, you don't care about the worst case or the best case of that single operation; what you care about is the total time taken when you repeat the operation a million times.
So it doesn't matter if the operation is very slow once in a while, as long as "once in a while" is rare enough for the slowness to be diluted. Essentially, amortized time means "the average time spent per operation if you perform many operations." Amortized time does not have to be constant; you can have linear or logarithmic amortized time, or something else.
Let's take the example of a dynamic array to which you repeatedly add new elements. Normally, adding an element takes constant time (i.e., O(1)). But each time the array is full, you allocate twice as much space, copy your data into the new region, and free the old space. Assuming allocations and frees are performed in constant time, this expansion process takes O(n) time, where n is the current size of the array.
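To make the mechanics concrete, here is a minimal sketch of such a dynamic array in Python. The class name `DynamicArray` and its internals are illustrative, not from any library (Python's built-in `list` already does something like this under the hood):

```python
class DynamicArray:
    def __init__(self):
        self._capacity = 1
        self._size = 0
        self._data = [None] * self._capacity  # fixed-size backing storage

    def append(self, value):
        if self._size == self._capacity:
            # Array is full: allocate twice the space and copy every
            # element into the new region -- the occasional O(n) step.
            self._capacity *= 2
            new_data = [None] * self._capacity
            for i in range(self._size):
                new_data[i] = self._data[i]
            self._data = new_data
        # The common case: write into a free slot in O(1).
        self._data[self._size] = value
        self._size += 1
```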
So each time you grow the array, it takes about twice as long as the last time. But you also waited twice as long before having to do it! The cost of each expansion can thus be "spread out" over the inserts that came before it: across m inserts, the copies add up to 1 + 2 + 4 + ... < 2m. This means that, in the long run, the total time spent adding m elements to the array is O(m), and therefore the amortized time per insertion is O(1).
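You can check this arithmetic directly. The sketch below (the helper name `total_copies` is made up for this example) tallies how many element copies the resizes cost over m appends; the copies-per-insert ratio stays bounded by a constant (below 2), which is exactly what O(1) amortized means:

```python
def total_copies(m):
    """Count element copies caused by doubling resizes over m appends."""
    capacity, size, copies = 1, 0, 0
    for _ in range(m):
        if size == capacity:
            copies += size   # copying every element into the new region
            capacity *= 2
        size += 1
    return copies

for m in (10, 1_000, 1_000_000):
    print(m, total_copies(m), total_copies(m) / m)
# The ratio in the last column never reaches 2, so the total work for
# m appends is O(m) and each append costs O(1) amortized.
```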