Asymptotic notation describes the rate of growth of an algorithm's running time as the algorithm's input size n becomes large.

To find an algorithm's asymptotic notation, we express the number of steps it takes as an algebraic function of n. Then we drop the constant factors and low-order terms. This leaves us with a simplified growth rate such as n log n.
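For instance, here is a minimal sketch of why dropping those terms is safe. The step count 6n² + 100n + 300 is a made-up example (the constants are hypothetical, chosen just for illustration): as n grows, the ratio of the step count to n² settles near the constant 6, so only the n² term matters.

```python
def steps(n):
    # Hypothetical step count: the constants 6, 100, and 300
    # are invented for illustration, not taken from any real algorithm.
    return 6 * n**2 + 100 * n + 300

for n in (10, 100, 1000, 10000):
    # The ratio approaches the constant factor 6, showing that the
    # low-order terms stop mattering: steps(n) grows like n^2.
    print(n, steps(n) / n**2)
```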

We commonly express the rate of growth of an algorithm's running time in one of three forms of "big" notation.

  • Big-Theta (tight bound, Θ) gives a growth rate that matches the algorithm's actual growth rate, up to constant factors.
  • Big-O (upper bound) gives a growth rate that an algorithm's running time will never exceed. An algorithm can have many O bounds of varying precision. That is, if an algorithm's running time is O(1), then it is also O(n), and it is certainly O(2^n).
  • Big-Omega (lower bound) gives a growth rate that an algorithm will always meet or exceed. There can likewise be many Ω bounds of varying precision for an algorithm, because if an algorithm's running time is Ω(2^n), then it is also Ω(n), and it is certainly Ω(1). (See the sketch after this list.)
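To make those chains of bounds concrete, here is a tiny sketch printing the three growth rates named above (1, n, and 2^n) side by side. Anything bounded above by the constant column is clearly also bounded above by the linear and exponential columns, and the reverse holds for lower bounds.

```python
# Compare constant, linear, and exponential growth for a few n.
print(f"{'n':>6} {'1':>6} {'n':>6} {'2^n':>10}")
for n in (1, 5, 10, 20):
    print(f"{n:>6} {1:>6} {n:>6} {2**n:>10}")
```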

We can use Big-Theta, Big-O, and Big-Omega to express an algorithm's worst-case, best-case, or average running time; any of the three bounds can describe any of the three cases.
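As an illustration (linear search is my own choice of example here): scanning a list of n items makes 1 comparison in the best case and n comparisons in the worst case, so its best-case running time is Θ(1) and its worst-case running time is Θ(n).

```python
def linear_search(items, target):
    """Scan items left to right; return (index, comparisons made),
    or (-1, comparisons made) if target is absent."""
    comparisons = 0
    for i, item in enumerate(items):
        comparisons += 1
        if item == target:
            return i, comparisons
    return -1, comparisons

# Best case: target is the first item -> 1 comparison, Theta(1).
print(linear_search([7, 3, 5], 7))   # (0, 1)
# Worst case: target is absent -> n comparisons, Theta(n).
print(linear_search([7, 3, 5], 9))   # (-1, 3)
```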

If this doesn't make sense, check out Khan Academy's notes on Asymptotic Notation. And if something doesn't seem quite right, critique me on Twitter @dicshaunary.