Introduction to Amortized Analysis

Asymptotic analysis is the most common way to study the complexity of algorithms: it describes the limiting behavior of an algorithm as the size of the input $n$ approaches $\infty$. Worst-case asymptotic analysis, $O(\cdot)$, one of the most popular ways of studying the complexity of an algorithm, bounds the complexity by the operation that takes the maximum amount of time.

However, in some cases the worst-case analysis doesn't give a pragmatic view of the complexity of an algorithm. Let us take an example that illustrates this. Consider a binary counter which counts from 0 to $n$ and has $\log_2(n)$ bits. If we look at the worst-case asymptotic complexity of each increment, it comes out to be $O(\log n)$, since incrementing $0111\ldots111$ by 1 to $1000\ldots000$ changes all $\log_2(n)$ bits of the counter. But if we look at the sequence of operations in which the counter is incremented from 0 to $n$, does it really take $n \times \log(n)$ bit flips?
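To make this concrete, here is a minimal sketch (in Python, with illustrative names that are not from the article) of an increment routine that counts how many bits it flips; the worst case flips every bit, as in the $0111\ldots111$ to $1000\ldots000$ transition.

```python
def increment(bits):
    """Increment a binary counter stored as a list of bits (least significant
    bit first) and return the number of bits flipped by this increment."""
    flips = 0
    i = 0
    # Clear the trailing run of 1s; each cleared bit is one flip.
    while i < len(bits) and bits[i] == 1:
        bits[i] = 0
        flips += 1
        i += 1
    # Set the first 0 bit (if the counter has not overflowed).
    if i < len(bits):
        bits[i] = 1
        flips += 1
    return flips

bits = [1, 1, 1, 0]        # 0111 = 7, least significant bit first
print(increment(bits))     # 4: every bit of the 4-bit counter flips (7 -> 8)
```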


It turns out that the total is not even close to $n \log(n)$. To see this, let us take a 4-bit counter as an example:

0000 (0) -> 0001 (1) -> 0010 (2) -> 0011 (1) -> 0100 (3) -> 0101 (1) -> 0110 (2) -> 0111 (1) -> 1000 (4) -> 1001 (1) -> 1010 (2) -> 1011 (1) -> 1100 (3) -> 1101 (1) -> 1110 (2) -> 1111 (1) -> 0000 (4)

Here, the number in parentheses denotes the number of bits flipped to reach that state. Hence, for a total of 16 increment operations, the total number of bits flipped is 30, approximately twice the number of operations. Can we come up with a general expression for any number of increment operations?

Careful analysis reveals a pattern in the above example. Bit 0, the least significant bit, gets flipped on every increment, bit 1 gets flipped on every other increment, bit 2 gets flipped once every 4 increments, bit 3 once every 8 increments, and so on. In general, the $i^{th}$ bit from the least significant bit gets flipped once every $2^i$ increments. Hence, the total number of flips is $n + \frac{n}{2} + \frac{n}{4} + \dots + \frac{n}{n}$. This is a geometric series whose sum is bounded by $2n$. Hence, averaged over a sequence of $n$ increment operations, each operation involves only about 2 bit flips.
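As a quick sanity check of the $2n$ bound, the sketch below (a direct count written for this article's example, not code from the original) tallies the bit flips for every increment from 0 up to $n$ and prints the ratio of flips to increments.

```python
def total_flips(n):
    """Total bits flipped while incrementing a counter from 0 up to n.
    k ^ (k + 1) is the block of bits that change when k becomes k + 1,
    so its bit length is the number of flips for that increment."""
    return sum((k ^ (k + 1)).bit_length() for k in range(n))

for n in (16, 1024, 10**6):
    flips = total_flips(n)
    print(n, flips, flips / n)   # the ratio stays below 2, matching the O(n) aggregate bound
```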

So, given knowledge of the nature of the work being done in an algorithm, we can judge its complexity better. Here, the worst-case asymptotic complexity of $O(n \log n)$ suggests that the counter is a heavier procedure than it actually is, which is just about $O(n)$ in the amortized setting.

The key here is that we average the cost of a sequence of operations over all the operations performed. Had the counter been toggling repeatedly between 7 (0111 in binary) and 8 (1000 in binary), there would have been 4 bit flips in each operation, and thus the average or amortized cost per operation would have been the same as the asymptotic complexity of $\log_2(n)$.

The above method of analysing amortized complexity is called aggregate analysis. Although each operation incurs a different actual cost, which was the number of bit flips in the above example, aggregate analysis assigns every operation the same amortized cost. Next, we discuss another method of analysing amortized complexity, in which operations need not have the same amortized cost.

Consider a stack which supports the following operations:

  1. Push: Adding a new element on the top of the stack, an $O(1)$ operation.

  2. Pop: Removing the element at the top of the stack, an $O(1)$ operation.

  3. MultiPop$(k)$: Removing the top $k$ elements from the stack; $k$ being positive.


The asymptotic complexity of a MultiPop operation seems to be $O(k)$, since it is equivalent to popping $k$ elements one after another. In the worst case $k$ could equal $n$, making the worst-case cost of a single stack operation $O(n)$, thereby suggesting a worst-case complexity of $O(n^2)$ for any sequence of $n$ operations. But given the knowledge about the problem, i.e. a single stack supporting these 3 operations, what does amortized analysis have to say?
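A minimal sketch of such a stack is shown below (Python, with hypothetical method names; the actual-cost comments follow the cost model described above).

```python
class Stack:
    """A stack supporting Push, Pop, and MultiPop(k)."""

    def __init__(self):
        self._items = []

    def push(self, x):
        self._items.append(x)            # actual cost: 1

    def pop(self):
        return self._items.pop()         # actual cost: 1

    def multipop(self, k):
        # Pops the top k elements (or all of them if fewer remain);
        # the actual cost equals the number of elements popped.
        popped = []
        while k > 0 and self._items:
            popped.append(self._items.pop())
            k -= 1
        return popped
```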

We can sense that this analysis is not 'tight', since we cannot have two consecutive MultiPop$(n)$ operations. If only $n$ elements have been pushed, there can be only one MultiPop operation removing all of them. Hence all the other operations are just $O(1)$ while there is only one MultiPop, an $O(n)$ operation; averaging it out suggests we still get constant time per operation rather than linear.

We prove this using the accounting method of amortized analysis. It involves assigning different costs to different operations, called their amortized costs. Let us make the following assignment:

  1. Push: 2 whereas the actual cost is 1.

  2. Pop: 0 whereas the actual cost is 1.

  3. MultiPop$(k)$: 0, whereas the actual cost is $k$ (or the current stack size, if that is smaller).

Note that the amortized cost assigned to an operation can be more or less than its actual cost. The idea is to show that, for any valid sequence of operations, the total amortized cost under our assignment is an upper bound on the total actual cost incurred.

So in this case, given any valid sequence of stack operations, for example:

Push 'a' -> Push 'b' -> Push 'c' -> Pop() -> Push 'd' -> Push 'e' -> MultiPop(4) -> Push 'f' -> Pop()

We can calculate the amortized cost as: 2 + 2 + 2 + 0 + 2 + 2 + 0 + 2 + 0 = 12. The actual cost in this case is 1 + 1 + 1 + 1 + 1 + 1 + 4 + 1 + 1, also equal to 12.

The cost is 12 for 9 operations, suggesting a constant amortized cost per operation, $O(1)$. Here we have found a perfect match, where our amortized cost gives the exact cost incurred rather than just an upper bound. In general, an upper bound is easier to achieve.

The simple observation that an element cannot be popped without first being pushed led us to assign a cost of 2 to Push and 0 to the two kinds of popping operations. The extra cost charged to each Push pays for the cost of eventually popping that element. This is one of the simplest examples, of course, but given knowledge of the problem we can devise clever assignments to get as tight an upper bound as possible.
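The bookkeeping behind this argument can be sketched as follows (a hypothetical cost tracker, not part of the original text): each operation is charged its amortized cost, the difference between the charge and the actual cost is stored as credit, and the credit never goes negative, so the total amortized cost always bounds the total actual cost. Replaying the example sequence above reproduces the 12 and 12 figures.

```python
# Amortized charges from the accounting-method assignment above.
AMORTIZED = {"push": 2, "pop": 0, "multipop": 0}

def run(ops):
    """Replay a sequence of stack operations, tracking actual cost,
    amortized cost, and the stored credit (amortized minus actual so far)."""
    size = credit = actual = amortized = 0
    for name, arg in ops:
        if name == "push":
            cost, size = 1, size + 1
        elif name == "pop":
            cost, size = 1, size - 1
        else:  # multipop(k): we cannot pop more elements than the stack holds
            cost = min(arg, size)
            size -= cost
        actual += cost
        amortized += AMORTIZED[name]
        credit += AMORTIZED[name] - cost
        assert credit >= 0       # the stored credit never goes negative
    return actual, amortized

ops = [("push", "a"), ("push", "b"), ("push", "c"), ("pop", None),
       ("push", "d"), ("push", "e"), ("multipop", 4), ("push", "f"), ("pop", None)]
print(run(ops))                  # (12, 12): total amortized cost bounds the actual cost
```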

In conclusion, we can say that amortized analysis is a more realistic way of looking at certain algorithms than worst-case asymptotic analysis, especially when the algorithm takes varying amounts of time across operations. For further reading, interested readers can refer to the book "Introduction to Algorithms" by Cormen et al.

Advanced data structures like Fibonacci heaps, Union-Find, etc. are analyzed using amortized analysis, since worst-case asymptotic complexities fail to express their efficiency. These will be taken up in upcoming articles.

This article is co-authored by Aakriti.


The author of this article is Mohit Dhingra. Here is Mohit Dhingra's bio:

I am a Master's by Research student at the Indian Institute of Science, Bangalore. I am basically a computer systems guy and have also worked on applications of theoretical computer science. I have been teaching OS and Algorithms, among other subjects, to undergrad students on weekends.

