Asymptotic notation describes the rate of growth of an algorithm's running time as the algorithm's input size n becomes large.
To find an algorithm's asymptotic notation, we express the number of steps it takes as an algebraic function of n, then drop the constant factors and low-order terms. This leaves us with a simplified function such as n log n.
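As a concrete illustration (a minimal sketch; the function and its step counts are my own hypothetical example, not one from the text above), consider counting the steps of a simple nested loop and then simplifying:

```python
def count_duplicate_pairs(items):
    """Count ordered pairs (i, j), i != j, where items[i] == items[j].

    Rough step count: the outer loop runs n times, and the inner loop
    runs n times for each outer iteration, plus a constant amount of
    setup, so the total is about n*n + n + c steps. Dropping constant
    factors and the low-order n and c terms leaves n^2, so the running
    time is Theta(n^2).
    """
    n = len(items)
    count = 0
    for i in range(n):
        for j in range(n):
            if i != j and items[i] == items[j]:
                count += 1
    return count
```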
We commonly express the rate of growth of an algorithm's running time using one of three "big" notations (a numeric sketch follows the list).
- Big-Theta Θ (tight bound) gives a growth rate that is very close to the algorithm's actual growth rate: it bounds the running time from above and below.
- Big-O O (upper bound) gives a growth rate that an algorithm will never exceed. An algorithm can have many O bounds of varying precision. That is, if an algorithm's growth rate never exceeds a constant (O(1)), then it also never exceeds n (O(n)), and certainly never exceeds 2^n (O(2^n)).
- Big-Omega Ω (lower bound) gives a growth rate that an algorithm will always meet or exceed. There can be many Ω bounds (of varying precision) for an algorithm, because if an algorithm's growth rate is always at least 2^n (Ω(2^n)), then it is also at least n (Ω(n)), and certainly at least a constant (Ω(1)).
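To make the bounds concrete, here is a minimal sketch (the step-count function f and the witness constants are assumptions chosen for illustration). A function f(n) is Θ(n^2) if there are constants c1, c2, and n0 such that c1·n^2 ≤ f(n) ≤ c2·n^2 for all n ≥ n0; the snippet checks that condition numerically:

```python
def f(n):
    # Hypothetical step count of some algorithm: 3n^2 + 5n + 2 steps.
    return 3 * n**2 + 5 * n + 2

def g(n):
    # Candidate growth rate: n^2.
    return n**2

# Hand-picked witness constants for the Theta(n^2) claim:
# for all n >= n0, c1 * g(n) <= f(n) <= c2 * g(n).
c1, c2, n0 = 3, 4, 7

for n in range(n0, 1000):
    assert c1 * g(n) <= f(n) <= c2 * g(n)
print("f(n) stays between 3*n^2 and 4*n^2 for 7 <= n < 1000")
```

The lower bound 3·n^2 holds for every n because 5n + 2 is never negative, and the upper bound 4·n^2 holds once n^2 outgrows 5n + 2 (from n = 6 onward), which is exactly the "for sufficiently large n" part of the definitions.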
We can use Big-Theta, Big-O, and Big-Omega to describe an algorithm's worst-case, best-case, or average-case running time.
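For example (a sketch assuming a plain linear search, not an algorithm from the text above), the same algorithm can have different tight bounds for its best and worst cases:

```python
def linear_search(items, target):
    """Return the index of target in items, or -1 if absent.

    Best case:  target is items[0]   -> Theta(1) comparisons.
    Worst case: target is not present -> Theta(n) comparisons.
    Over all inputs, the running time is O(n) and Omega(1).
    """
    for i, value in enumerate(items):
        if value == target:
            return i
    return -1
```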
If this does not make sense, check out Khan Academy's notes on Asymptotic Notation. And if something doesn't seem quite right, critique me on Twitter @dicshaunary.