Big O notation
In analytic number theory, big O notation is often used to express a bound on the difference between an arithmetical function and a better understood approximation; a famous example of such a difference is the remainder term in the prime number theorem.
Big O notation characterizes functions according to their growth rates: different functions with the same growth rate may be represented using the same O notation.
The letter O is used because the growth rate of a function is also referred to as the order of the function.
A description of a function in terms of big O notation usually only provides an upper bound on the growth rate of the function.
Associated with big O notation are several related notations, using the symbols o, Ω, ω, and Θ, to describe other kinds of bounds on asymptotic growth rates.
Big O notation is also used in many other fields to provide similar estimates.
In many contexts, the assumption that we are interested in the growth rate as the variable x goes to infinity is left unstated, and one writes more simply that f(x) = O(g(x)).
The notation can also be used to describe the behavior of f near some real number a (often, a = 0): we say f(x) = O(g(x)) as x → a if and only if there exist positive numbers δ and M such that |f(x)| ≤ M·|g(x)| for all x with 0 < |x − a| < δ.
In typical usage the O notation is asymptotic, that is, it refers to very large x.
In this setting, the contribution of the terms that grow "most quickly" will eventually make the other ones irrelevant.
As a result, the following simplification rules can be applied:
- If f(x) is a sum of several terms and one of them has the largest growth rate, that term can be kept and all others omitted.
- If f(x) is a product of several factors, any constants (factors in the product that do not depend on x) can be omitted.
For example, let f(x) = 6x⁴ − 2x³ + 5, and suppose we wish to simplify this function, using O notation, to describe its growth rate as x approaches infinity.
This function is the sum of three terms: 6x⁴, −2x³, and 5.
Of these three terms, the one with the highest growth rate is the one with the largest exponent as a function of x, namely 6x⁴.
Now one may apply the second rule: 6x⁴ is a product of 6 and x⁴ in which the first factor does not depend on x. Omitting this factor results in the simplified form x⁴.
Thus, we say that f(x) is a "big O" of x⁴.
Mathematically, we can write f(x) = O(x⁴).
One may confirm this calculation using the formal definition: let f(x) = 6x⁴ − 2x³ + 5 and g(x) = x⁴.
Applying the formal definition from above, the statement that f(x) = O(x⁴) is equivalent to its expansion, |f(x)| ≤ M·x⁴, for some suitable choice of x₀ and M and for all x > x₀.
To prove this, let x₀ = 1 and M = 13.
Then, for all x > x₀:
|6x⁴ − 2x³ + 5| ≤ 6x⁴ + 2x³ + 5 ≤ 6x⁴ + 2x⁴ + 5x⁴ = 13x⁴,
so |f(x)| ≤ 13·x⁴ = M·g(x).
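As an illustrative sketch (a numeric spot-check, not a proof), the following Python snippet verifies the witness constants x₀ = 1 and M = 13 at a few arbitrary sample points:

```python
# Spot-check the bound |6x^4 - 2x^3 + 5| <= M * x^4 for sampled x > x0.
# This samples a handful of points; the inequality chain above is the proof.

def f(x: float) -> float:
    return 6 * x**4 - 2 * x**3 + 5

def g(x: float) -> float:
    return x**4

M, x0 = 13, 1
for x in [1.001, 2, 10, 1_000, 1_000_000]:   # arbitrary points above x0
    assert abs(f(x)) <= M * g(x), f"bound fails at x={x}"
print("bound |f(x)| <= 13*x^4 held at all sampled x > 1")
```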
Big O notation has two main areas of application:
- In mathematics, it is commonly used to describe how closely a finite series approximates a given function, especially in the case of a truncated Taylor series or asymptotic expansion
- In computer science, it is useful in the analysis of algorithms
In both applications, the function g(x) appearing within the O(...) is typically chosen to be as simple as possible, omitting constant factors and lower order terms.
There are two formally close, but noticeably different, usages of this notation:
- infinite asymptotics
- infinitesimal asymptotics.
This distinction is only in application and not in principle, however—the formal definition for the "big O" is the same for both cases, only with different limits for the function argument.
Big O notation is useful when analyzing algorithms for efficiency.
For example, the time (or the number of steps) it takes to complete a problem of size n might be found to be T(n) = 4n² − 2n + 2.
As n grows large, the n² term will come to dominate, so that all other terms can be neglected—for instance when n = 500, the term 4n² is 1000 times as large as the 2n term.
Ignoring the latter would have negligible effect on the expression's value for most purposes.
Further, the coefficients become irrelevant if we compare to any other order of expression, such as an expression containing a term n³ or n⁴. Even if T(n) = 1,000,000·n², if U(n) = n³, the latter will always exceed the former once n grows larger than 1,000,000 (T(1,000,000) = 1,000,000·(1,000,000)² = (1,000,000)³ = U(1,000,000)).
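A small Python sketch (illustrative only; the names T and U simply mirror the text) makes the crossover concrete:

```python
# Even with a huge constant factor, the lower-order term loses eventually:
# T(n) = 1_000_000 * n^2 and U(n) = n^3 cross exactly at n = 1_000_000.

def T(n: int) -> int:
    return 1_000_000 * n * n

def U(n: int) -> int:
    return n ** 3

for n in [10, 1_000, 999_999, 1_000_000, 1_000_001]:
    rel = "<" if T(n) < U(n) else ("=" if T(n) == U(n) else ">")
    print(f"n={n:>9}: T(n) {rel} U(n)")
# T > U below the crossover, T = U at n = 1_000_000, T < U ever after.
```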
Additionally, the number of steps depends on the details of the machine model on which the algorithm runs, but different types of machines typically vary by only a constant factor in the number of steps needed to execute an algorithm.
So the big O notation captures what remains: we write either
T(n) = O(n²) or T(n) ∈ O(n²)
and say that the algorithm has order of n² time complexity.
The sign "=" is not meant to express "is equal to" in its normal mathematical sense, but rather a more colloquial "is", so the second expression is sometimes considered more accurate (see the "Equals sign" discussion below) while the first is considered by some as an abuse of notation.
Big O can also be used to describe the error term in an approximation to a mathematical function.
The most significant terms are written explicitly, and then the least-significant terms are summarized in a single big O term.
Consider, for example, the exponential series and two expressions of it that are valid when x is small:
eˣ = 1 + x + x²/2! + x³/3! + ⋯ for all x
   = 1 + x + x²/2 + O(x³) as x → 0
   = 1 + x + O(x²) as x → 0
The second expression (the one with O(x³)) means the absolute value of the error eˣ − (1 + x + x²/2) is at most some constant times |x³| when x is close enough to 0.
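The following Python sketch checks this bound numerically at a few small values of x; the constant C = 1 is an assumed, generous choice for illustration, not one derived in the text:

```python
# Check |e^x - (1 + x + x^2/2)| <= C * |x|^3 at sampled x near 0.
import math

C = 1.0   # assumed constant, ample for |x| <= 0.5
for x in [0.5, 0.1, 0.01, -0.01, -0.1]:
    err = abs(math.exp(x) - (1 + x + x**2 / 2))
    assert err <= C * abs(x) ** 3, f"bound fails at x={x}"
    print(f"x={x:+.2f}: error={err:.2e}, C*|x|^3={C * abs(x)**3:.2e}")
# The error behaves like x^3/6, comfortably below C*|x|^3 near 0.
```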
If the function f can be written as a finite sum of other functions, then the fastest growing one determines the order of f(n).
In particular, if a function may be bounded by a polynomial in n, then as n tends to infinity, one may disregard lower-order terms of the polynomial.
The sets O(nᶜ) and O(cⁿ) are very different.
If c is greater than one, then the latter grows much faster.
A function that grows faster than nᶜ for any c is called superpolynomial.
One that grows more slowly than any exponential function of the form cⁿ is called subexponential.
An algorithm can require time that is both superpolynomial and subexponential; examples of this include the fastest known algorithms for integer factorization and the function n^(log n).
We may ignore any powers of n inside the logarithms.
The set O(log n) is exactly the same as O(log(nᶜ)).
The logarithms differ only by a constant factor (since log(nᶜ) = c log n) and thus the big O notation ignores that.
Similarly, logs with different constant bases are equivalent.
On the other hand, exponentials with different bases are not of the same order.
For example, 2ⁿ and 3ⁿ are not of the same order.
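A short Python sketch contrasts the two claims: the ratio of logarithms with different bases is a fixed constant, while the ratio of exponentials with different bases diverges:

```python
# log2(n) / log10(n) equals the constant log(10)/log(2) ~ 3.3219 for all n,
# whereas 3^n / 2^n = 1.5^n grows without bound, so 3^n is not O(2^n).
import math

for n in [10, 100, 1000]:
    print(f"n={n:>5}: log2/log10 = {math.log2(n) / math.log10(n):.4f}, "
          f"3^n/2^n = {1.5 ** n:.3e}")
```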
Changing units may or may not affect the order of the resulting algorithm.
Changing units is equivalent to multiplying the appropriate variable by a constant wherever it appears.
For example, if an algorithm runs in the order of n², replacing n by cn means the algorithm runs in the order of c²n², and the big O notation ignores the constant c². This can be written as c²n² = O(n²).
If, however, an algorithm runs in the order of 2ⁿ, replacing n with cn gives 2^(cn) = (2ᶜ)ⁿ.
This is not equivalent to 2ⁿ in general.
Changing variables may also affect the order of the resulting algorithm.
For example, if an algorithm's run time is O(n) when measured in terms of the number n of digits of an input number x, then its run time is O(log x) when measured as a function of the input number x itself, because n = O(log x).
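As a sketch of this change of variables, the following Python snippet compares the decimal digit count n of an input x with log₁₀ x (the sample inputs are arbitrary):

```python
# The decimal digit count n of x satisfies n <= log10(x) + 1,
# so n = O(log x); measuring size in digits vs. in x only shifts the order.
import math

for x in [7, 1234, 999_983, 12_345_678_901]:
    n = len(str(x))               # input size measured in decimal digits
    bound = math.log10(x) + 1     # the digit count never exceeds this
    print(f"x={x}: n={n} digits, log10(x)+1 = {bound:.2f}")
    assert n <= bound
```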
Multiplication by a constant
Let k be a nonzero constant. Then O(|k|·g) = O(g); in other words, f = O(g) if and only if k·f = O(g).
Multiple variables
Big O can also be used with multiple variables. For example, the statement f(n,m) = n² + m³ + O(n+m) as n,m → ∞ asserts that there exist constants C and M such that |f(n,m) − (n² + m³)| ≤ C·g(n,m) whenever either m ≥ M or n ≥ M, where g(n,m) is defined by g(n,m) = n + m.
This is not the only generalization of big O to multivariate functions, and in practice, there is some inconsistency in the choice of definition.
Matters of notation
The statement "f(x) is O(g(x))" as defined above is usually written as f(x) = O(g(x)).
Some consider this to be an abuse of notation, since the use of the equals sign could be misleading as it suggests a symmetry that this statement does not have.
As de Bruijn says, O(x) = O(x²) is true but O(x²) = O(x) is not.
Knuth describes such statements as "one-way equalities", since if the sides could be reversed, "we could deduce ridiculous things like n = n² from the identities n = O(n²) and n² = O(n²)."
For these reasons, it would be more precise to use set notation and write f(x) ∈ O(g(x)), thinking of O(g(x)) as the class of all functions h(x) such that |h(x)| ≤ C|g(x)| for some constant C. However, the use of the equals sign is customary.
Knuth pointed out that "mathematicians customarily use the = sign as they use the word 'is' in English: Aristotle is a man, but a man isn't necessarily Aristotle."
Other arithmetic operators
Big O notation can also be used in conjunction with other arithmetic operators in more complicated equations.
For example, h(x) + O(f(x)) denotes the collection of functions having the growth of h(x) plus a part whose growth is limited to that of f(x).
Thus, g(x) = h(x) + O(f(x)) expresses the same as g(x) − h(x) = O(f(x)).
Suppose an algorithm is being developed to operate on a set of n elements.
Its developers are interested in finding a function T(n) that will express how long the algorithm will take to run (in some arbitrary measurement of time) in terms of the number of elements in the input set.
The algorithm works by first calling a subroutine to sort the elements in the set and then performing its own operations.
The sort has a known time complexity of O(n²), and after the subroutine runs the algorithm must take an additional 55n³ + 2n + 10 steps before it terminates.
Thus the overall time complexity of the algorithm can be expressed as T(n) = 55n³ + O(n²).
Here the terms 2n + 10 are subsumed within the faster-growing O(n²).
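The following Python sketch models the total cost with an assumed constant factor of 1 on the sort's quadratic term (a placeholder, not derived from the text) and shows the ratio T(n)/n³ settling near 55:

```python
# The O(n^2) sort cost and the 2n + 10 steps are swallowed by 55n^3;
# the n^2 term below stands in for the sort with an assumed constant 1.

def T(n: int) -> int:
    return 55 * n**3 + n**2 + 2 * n + 10

for n in [10, 100, 10_000]:
    print(f"n={n:>6}: T(n)/n^3 = {T(n) / n**3:.4f}")
# Prints values approaching 55, i.e. T(n) = 55n^3 + O(n^2).
```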
Again, this usage disregards some of the formal meaning of the "=" symbol, but it does allow one to use the big O notation as a kind of convenient placeholder.
In more complicated usage, O(...) can appear in different places in an equation, even several times on each side. For example, the following are true for n → ∞: (n + 1)² = n² + O(n); (n + O(n^(1/2)))·(n + O(log n))² = n³ + O(n^(5/2)); n^(O(1)) = O(eⁿ).
The meaning of such statements is as follows: for any functions which satisfy each O(...) on the left side, there are some functions satisfying each O(...) on the right side, such that substituting all these functions into the equation makes the two sides equal.
For example, the third equation above means: "For any function f(n) = O(1), there is some function g(n) = O(eⁿ) such that n^(f(n)) = g(n)."
In terms of the "set notation" above, the meaning is that the class of functions represented by the left side is a subset of the class of functions represented by the right side.
In this use the "=" is a formal symbol that unlike the usual use of "=" is not a symmetric relation.
Thus for example n^(O(1)) = O(eⁿ) does not imply the false statement O(eⁿ) = n^(O(1)).
Orders of common functions
Further information: Time complexity § Table of common time complexities
Here is a list of classes of functions that are commonly encountered when analyzing the running time of an algorithm.
In each case, c is a positive constant and n increases without bound.
The slower-growing functions are generally listed first.
Related asymptotic notations
Big O is the most commonly used asymptotic notation for comparing functions.
Together with some other related notations it forms the family of Bachmann–Landau notations.
"Little o" redirects here.
For the baseball player, see Omar Vizquel.
Intuitively, the assertion "f(x) is o(g(x))" (read "f(x) is little-o of g(x)") means that g(x) grows much faster than f(x).
Let as before f be a real or complex valued function and g a real valued function, both defined on some unbounded subset of the positive real numbers, such that g(x) is strictly positive for all large enough values of x.
One writes f(x) = o(g(x)) as x → ∞ if for every positive constant ε there exists a constant N such that |f(x)| ≤ ε·g(x) for all x ≥ N.
For example, one has 2x = o(x²) and 1/x = o(1), both as x → ∞.
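A minimal Python sketch of these two examples tracks the ratios f(x)/g(x), which little-o requires to tend to 0:

```python
# For f = o(g), the ratio f(x)/g(x) must shrink toward 0 as x grows.
for x in [10.0, 100.0, 10_000.0]:
    print(f"x={x:>8}: 2x/x^2 = {2 * x / x**2:.6f}, (1/x)/1 = {1 / x:.6f}")
# Both ratios head to 0, as little-o requires.
```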
Little-o respects a number of arithmetic operations.
It also satisfies a transitivity relation: if f(x) = o(g(x)) and g(x) = o(h(x)), then f(x) = o(h(x)).
Big Omega notation
There are two widespread and incompatible definitions of the statement f(x) = Ω(g(x)) as x → a, where a is some real number, ∞, or −∞, where f and g are real functions defined in a neighbourhood of a, and where g is positive in this neighbourhood.
The first one (chronologically) is used in analytic number theory, and the other one in computational complexity theory. When the two subjects meet, this situation is bound to generate confusion.
The Hardy–Littlewood definition
In 1914 G. H. Hardy and J. E. Littlewood introduced the symbol Ω, defined by f(x) = Ω(g(x)) as x → ∞ if limsup |f(x)/g(x)| > 0 as x → ∞, so that f(x) = Ω(g(x)) is the negation of f(x) = o(g(x)). More precisely, the same authors later introduced the one-sided variants Ω₊ and Ω₋: f(x) = Ω₊(g(x)) if limsup f(x)/g(x) > 0, and f(x) = Ω₋(g(x)) if liminf f(x)/g(x) < 0, both as x → ∞.
The Knuth definition
In 1976 Donald Knuth proposed the definition now standard in computational complexity theory: f(x) = Ω(g(x)) if and only if g(x) = O(f(x)).
Family of Bachmann–Landau notations
Use in computer science
Further information: Analysis of algorithms
Informally, especially in computer science, the big O notation can often be used somewhat differently to describe an asymptotic tight bound where using big Theta Θ notation might be more factually appropriate in a given context.
For example, when considering a function T(n) = 73n³ + 22n² + 58, all of the following are generally acceptable, but tighter bounds (such as numbers 2 and 3 below) are usually strongly preferred over looser bounds (such as number 1 below).
- T(n) = O(n¹⁰⁰)
- T(n) = O(n³)
- T(n) = Θ(n³)
The equivalent English statements are respectively:
- T(n) grows asymptotically no faster than n¹⁰⁰
- T(n) grows asymptotically no faster than n³
- T(n) grows asymptotically as fast as n³.
So while all three statements are true, progressively more information is contained in each.
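A brief Python sketch shows why statement 3 carries the most information: the ratio of T(n) to n³ levels off at the constant 73 (a tight bound), while the ratio to the loose bound n¹⁰⁰ collapses toward 0:

```python
# T(n)/n^3 approaches the leading constant 73 (tight Theta bound),
# while T(n)/n^100 vanishes (the O(n^100) bound is true but very loose).

def T(n: int) -> int:
    return 73 * n**3 + 22 * n**2 + 58

for n in [10, 100, 1000]:
    print(f"n={n:>4}: T/n^3 = {T(n) / n**3:.3f}, "
          f"T/n^100 = {T(n) / n**100:.2e}")
```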
In some fields, however, the big O notation (number 2 in the lists above) would be used more commonly than the big Theta notation (items numbered 3 in the lists above).
For example, if T(n) represents the running time of a newly developed algorithm for input size n, the inventors and users of the algorithm might be more inclined to put an upper asymptotic bound on how long it will take to run without making an explicit statement about the lower asymptotic bound.
In a correct notation this set can, for instance, be called O(g), where O(g) = {f : there exist positive constants c and n₀ such that 0 ≤ f(n) ≤ c·g(n) for all n ≥ n₀}.
The authors state that the use of equality operator (=) to denote set membership rather than the set membership operator (∈) is an abuse of notation, but that doing so has advantages.
Inside an equation or inequality, the use of asymptotic notation stands for an anonymous function in the set O(g), which eliminates lower-order terms, and helps to reduce inessential clutter in equations, for example: 2n² + 3n + 1 = 2n² + Θ(n).
Extensions to the Bachmann–Landau notations
Another notation sometimes used in computer science is Õ (read soft-O): f(n) = Õ(g(n)) is shorthand for f(n) = O(g(n) logᵏ g(n)) for some k. Essentially, it is big O notation ignoring logarithmic factors, used when some super-logarithmic factor in the growth rate matters more for predicting run-time behavior than the finer effects contributed by the logarithmic factor(s).
This notation is often used to obviate the "nitpicking" within growth rates that are stated as too tightly bounded for the matters at hand (since logᵏ n is always o(n^ε) for any constant k and any ε > 0).
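The following Python sketch, with the arbitrary choices k = 3 and ε = 0.1, illustrates the parenthetical claim: the ratio logᵏ n / n^ε may grow at first, but eventually heads to 0:

```python
# log(n)**k / n**eps -> 0 for any fixed k and eps > 0, though the
# turnaround can require very large n (k = 3 and eps = 0.1 are arbitrary).
import math

k, eps = 3, 0.1
for n in [10**3, 10**9, 10**27, 10**81]:
    ratio = math.log(n) ** k / n ** eps
    print(f"n=1e{round(math.log10(n))}: log^3(n)/n^0.1 = {ratio:.3e}")
# The ratio first rises, then falls toward 0 for very large n.
```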
Also the L notation, defined as Lₙ[α, c] = e^((c + o(1))(ln n)^α (ln ln n)^(1−α)), is convenient for functions that are between polynomial and exponential in terms of ln n.
The generalization to functions taking values in any normed vector space is straightforward (replacing absolute values by norms), where f and g need not take their values in the same space.
A generalization to functions g taking values in any topological group is also possible.
The "limiting process" x → xo can also be generalized by introducing an arbitrary filter base, i.e. to directed nets f and g. The o notation can be used to define derivatives and differentiability in quite general spaces, and also (asymptotical) equivalence of functions,
which is an equivalence relation and a more restrictive notion than the relationship "f is Θ(g)" from above.
(It reduces to lim f / g = 1 if f and g are positive real valued functions.)
For example, 2x is Θ(x), but 2x − x is not o(x).
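As a sketch, consider the hypothetical pair x² + x and x² (chosen here for illustration) alongside the example above: the first ratio tends to 1, so x² + x ~ x², while (2x)/x stays at 2, so 2x is Θ(x) but not ~ x:

```python
# f ~ g requires f(x)/g(x) -> 1; Theta only requires bounded ratios.
for x in [10.0, 1000.0, 100000.0]:
    print(f"x={x:>8}: (x^2+x)/x^2 = {(x*x + x) / (x*x):.5f}, "
          f"(2x)/x = {2 * x / x:.1f}")
# The first ratio tends to 1 (equivalence); the second stays at 2 (Theta only).
```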
History (Bachmann–Landau, Hardy, and Vinogradov notations)
Landau never used the big Theta and small omega symbols.
Hardy's symbols were (in terms of the modern O notation) f ≼ g ⟺ f = O(g) and f ≺ g ⟺ f = o(g).
In analytic number theory, Vinogradov's notation f ≪ g is also used with the same meaning as f = O(g), and frequently both notations are used in the same paper.
The big-O originally stands for "order of" ("Ordnung", Bachmann 1894), and is thus a Latin letter.
Neither Bachmann nor Landau ever called it "Omicron".
The digit zero should not be used.
- Asymptotic expansion: Approximation of functions generalizing Taylor's formula
- Asymptotically optimal algorithm: A phrase frequently used to describe an algorithm that has an upper bound asymptotically within a constant of a lower bound for the problem
- Big O in probability notation: O_p, o_p
- Limit superior and limit inferior: An explanation of some of the limit notation used in this article
- Master theorem (analysis of algorithms): For analyzing divide-and-conquer recursive algorithms using Big O notation
- Nachbin's theorem: A precise method of bounding complex analytic functions so that the domain of convergence of integral transforms can be stated
- Orders of approximation
- Computational complexity of mathematical operations
Credits to the contents of this page go to the authors of the corresponding Wikipedia page: en.wikipedia.org/wiki/Big O notation.