Big O notation


In computer science, big O notation is used to classify algorithms according to how their run time or space requirements grow as the input size grows.

In analytic number theory, big O notation is often used to express a bound on the difference between an arithmetical function and a better understood approximation; a famous example of such a difference is the remainder term in the prime number theorem.

Big O notation characterizes functions according to their growth rates: different functions with the same growth rate may be represented using the same O notation.

The letter O is used because the growth rate of a function is also referred to as the order of the function.

A description of a function in terms of big O notation usually only provides an upper bound on the growth rate of the function.

Associated with big O notation are several related notations, using the symbols o, Ω, ω, and Θ, to describe other kinds of bounds on asymptotic growth rates.

Big O notation is also used in many other fields to provide similar estimates.

Formal definition

In many contexts, the assumption that we are interested in the growth rate as the variable x goes to infinity is left unstated, and one writes more simply that f(x) = O(g(x)).

The notation can also be used to describe the behavior of f near some real number a (often, a = 0): we say f(x) = O(g(x)) as x → a if there exist positive numbers δ and M such that |f(x)| ≤ M |g(x)| whenever 0 < |x − a| < δ.

As g(x) is chosen to be non-zero for values of x sufficiently close to a, both of these definitions can be unified using the limit superior: f(x) = O(g(x)) as x → a if limsup (x → a) |f(x)| / |g(x)| < ∞.
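The case that is usually meant when no limiting process is mentioned, namely growth as x → ∞, can be stated as follows (this is the standard form of the definition, and it is the one applied in the Example section below): f(x) = O(g(x)) as x → ∞ if and only if there exist a positive constant M and a real number x₀ such that |f(x)| ≤ M |g(x)| for all x > x₀.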

Example

In typical usage the O notation is asymptotic, that is, it refers to very large x.

In this setting, the contribution of the terms that grow "most quickly" will eventually make the other ones irrelevant.

As a result, the following simplification rules can be applied:

  • If f(x) is a sum of several terms and one of them has the largest growth rate, that term can be kept and all others omitted.
  • If f(x) is a product of several factors, any constants (factors in the product that do not depend on x) can be omitted.

For example, let f(x) = 6x⁴ − 2x³ + 5, and suppose we wish to simplify this function, using O notation, to describe its growth rate as x approaches infinity.

This function is the sum of three terms: 6x⁴, −2x³, and 5.

Of these three terms, the one with the highest growth rate is the one with the largest exponent as a function of x, namely 6x⁴.

Now one may apply the second rule: 6x⁴ is a product of 6 and x⁴ in which the first factor does not depend on x. Omitting this factor results in the simplified form x⁴.

Thus, we say that f(x) is a "big O" of x⁴.

Mathematically, we can write f(x) = O(x⁴).

One may confirm this calculation using the formal definition: let f(x) = 6x⁴ − 2x³ + 5 and g(x) = x⁴.

Applying the formal definition from above, the statement that f(x) = O(x⁴) is equivalent to its expansion, |f(x)| ≤ M x⁴,

for some suitable choice of x₀ and M and for all x > x₀.

To prove this, let x₀ = 1 and M = 13.

Then, for all x > x₀: |6x⁴ − 2x³ + 5| ≤ 6x⁴ + |−2x³| + 5 ≤ 6x⁴ + 2x⁴ + 5x⁴ = 13x⁴,

so |f(x)| ≤ 13x⁴ for all x > 1, that is, f(x) = O(x⁴).
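As an informal sanity check (a sketch only; the inequality above is the actual proof), one can sample a few values of x > 1 and confirm the bound numerically:

    # Spot-check that |6x^4 - 2x^3 + 5| <= 13x^4 at a few points with x > 1.
    def f(x):
        return 6 * x**4 - 2 * x**3 + 5

    for x in [1.5, 2.0, 10.0, 100.0, 1000.0]:
        assert abs(f(x)) <= 13 * x**4, x
    print("bound holds at all sampled points")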

Usage

Big O notation has two main areas of application:

  • in mathematics, where it is commonly used to describe how closely a finite series approximates a given function, for example the error term of a truncated Taylor series or asymptotic expansion;
  • in computer science, where it is useful in the analysis of algorithms.

In both applications, the function g(x) appearing within the O(...) is typically chosen to be as simple as possible, omitting constant factors and lower order terms.

There are two formally close, but noticeably different, usages of this notation:

  • infinite asymptotics
  • infinitesimal asymptotics.

This distinction is only in application and not in principle, however—the formal definition for the "big O" is the same for both cases, only with different limits for the function argument.

Infinite asymptotics

Big O notation is useful when analyzing algorithms for efficiency.

For example, the time (or the number of steps) it takes to complete a problem of size n might be found to be T(n) = 4n² − 2n + 2.

As n grows large, the n² term will come to dominate, so that all other terms can be neglected—for instance when n = 500, the term 4n² is 1000 times as large as the 2n term.

Ignoring the latter would have negligible effect on the expression's value for most purposes.

Further, the coefficients become irrelevant if we compare to any other order of expression, such as an expression containing a term n³ or n⁴. Even if T(n) = 1,000,000n², if U(n) = n³, the latter will always exceed the former once n grows larger than 1,000,000 (T(1,000,000) = 1,000,000 · 1,000,000² = 1,000,000³ = U(1,000,000)).
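A small numerical sketch, using the same T and U as above, makes the crossover visible:

    # T(n) = 1,000,000 n^2 versus U(n) = n^3; U overtakes T once n exceeds 1,000,000.
    def T(n):
        return 1_000_000 * n**2

    def U(n):
        return n**3

    for n in [10**5, 10**6, 10**7]:
        print(n, T(n), U(n), "U exceeds T" if U(n) > T(n) else "T >= U")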

Additionally, the number of steps depends on the details of the machine model on which the algorithm runs, but different types of machines typically vary by only a constant factor in the number of steps needed to execute an algorithm.

So the big O notation captures what remains: we write either T(n) = O(n²)

or T(n) ∈ O(n²)

and say that the algorithm has order of n² time complexity.

The sign "=" is not meant to express "is equal to" in its normal mathematical sense, but rather a more colloquial "is", so the second expression is sometimes considered more accurate (see the "Equals sign" discussion below) while the first is considered by some as an abuse of notation.

Infinitesimal asymptotics

Big O can also be used to describe the error term in an approximation to a mathematical function.

The most significant terms are written explicitly, and then the least-significant terms are summarized in a single big O term.

Consider, for example, the exponential series and two expressions of it that are valid when x is small:

    e^x = 1 + x + x²/2! + x³/3! + x⁴/4! + ⋯    for all x
        = 1 + x + x²/2 + O(x³)    as x → 0
        = 1 + x + O(x²)    as x → 0

The second expression (the one with O(x³)) means the absolute value of the error e^x − (1 + x + x²/2) is at most some constant times |x³| when x is close enough to 0.
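A quick numerical sketch of that claim (illustrative only; the ratio in the last column in fact approaches 1/3! = 1/6):

    import math

    # Error of the quadratic truncation of exp(x) near 0, compared with |x|^3.
    for x in [0.5, 0.1, 0.01, 0.001]:
        err = abs(math.exp(x) - (1 + x + x**2 / 2))
        print(x, err, err / abs(x)**3)   # last column stays bounded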

Properties

If the function f can be written as a finite sum of other functions, then the fastest growing one determines the order of f(n).

For example, f(n) = 9 log n + 5 (log n)⁴ + 3n² + 5n⁴ = O(n⁴) as n → ∞.

In particular, if a function may be bounded by a polynomial in n, then as n tends to infinity, one may disregard lower-order terms of the polynomial.

The sets O(nᶜ) and O(cⁿ) are very different.

If c is greater than one, then the latter grows much faster.

A function that grows faster than nᶜ for any c is called superpolynomial.

One that grows more slowly than any exponential function of the form cⁿ is called subexponential.

An algorithm can require time that is both superpolynomial and subexponential; examples of this include the fastest known algorithms for integer factorization and the function n^(log n).

We may ignore any powers of n inside of the logarithms.

The set O(log n) is exactly the same as O(log(nᶜ)).

The logarithms differ only by a constant factor (since log(nᶜ) = c log n) and thus the big O notation ignores that.

Similarly, logs with different constant bases are equivalent.

On the other hand, exponentials with different bases are not of the same order.

For example, 2ⁿ and 3ⁿ are not of the same order.
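Both points lend themselves to a quick numerical sketch (the constant in the first column is log₂ 10 ≈ 3.32):

    import math

    # log2(n) / log10(n) is the same constant for every n (bases differ only by a
    # constant factor), while 3**n / 2**n = 1.5**n grows without bound.
    for n in [10, 100, 1000]:
        print(n, math.log2(n) / math.log10(n), 3**n / 2**n)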

Changing units may or may not affect the order of the resulting algorithm.

Changing units is equivalent to multiplying the appropriate variable by a constant wherever it appears.

For example, if an algorithm runs in the order of n², replacing n by cn means the algorithm runs in the order of c²n², and the big O notation ignores the constant c². This can be written as c²n² = O(n²).

If, however, an algorithm runs in the order of 2ⁿ, replacing n with cn gives 2^(cn) = (2ᶜ)ⁿ.

This is not equivalent to 2ⁿ in general.

Changing variables may also affect the order of the resulting algorithm.

For example, if an algorithm's run time is O(n) when measured in terms of the number n of digits of an input number x, then its run time is O(log x) when measured as a function of the input number x itself, because n = O(log x).
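A small sketch of that change of variables (decimal digits, positive integers assumed purely for illustration):

    import math

    # The number of decimal digits of x is floor(log10(x)) + 1, which is O(log x).
    for x in [7, 123, 4096, 10**12 + 7]:
        print(x, len(str(x)), math.floor(math.log10(x)) + 1)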

Product
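The property usually stated under this heading (a standard fact, given here for completeness): if f₁ = O(g₁) and f₂ = O(g₂), then f₁ f₂ = O(g₁ g₂); in particular, f · O(g) = O(f g).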

Sum
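The corresponding standard fact: if f₁ = O(g₁) and f₂ = O(g₂), then f₁ + f₂ = O(max(|g₁|, |g₂|)), and in particular f₁ + f₂ = O(|g₁| + |g₂|).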

Multiplication by a constant
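The corresponding standard fact: if k is a nonzero constant, then O(|k| g) = O(g); equivalently, if f = O(g), then k f = O(g).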

Multiple variables

Big O (and little o, Ω, etc.) can also be used with multiple variables; f(n, m) = O(g(n, m)) holds if and only if |f(n, m)| is bounded by a constant multiple of |g(n, m)| for all sufficiently large arguments.

For example, the statement f(n, m) = n² + m³ + O(n + m) as n, m → ∞ asserts that there exist constants C and M such that |f(n, m) − (n² + m³)| ≤ C |g(n, m)| whenever both n ≥ M and m ≥ M, where g(n, m) is defined by g(n, m) = n + m.

This is not the only generalization of big O to multivariate functions, and in practice, there is some inconsistency in the choice of definition.
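One common convention, as a sketch (other conventions require only that at least one variable be large, which is one source of the inconsistency noted above): f(n, m) = O(g(n, m)) as n, m → ∞ if and only if there exist constants C and M such that |f(n, m)| ≤ C |g(n, m)| whenever both n ≥ M and m ≥ M.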

Matters of notation

Equals sign

The statement "f(x) is O(g(x))" as defined above is usually written as f(x) = O(g(x)).

Some consider this to be an abuse of notation, since the use of the equals sign could be misleading as it suggests a symmetry that this statement does not have.

As de Bruijn says, O(x) = O(x²) is true but O(x²) = O(x) is not.

Knuth describes such statements as "one-way equalities", since if the sides could be reversed, "we could deduce ridiculous things like n = n² from the identities n = O(n²) and n² = O(n²)."

For these reasons, it would be more precise to use set notation and write f(x) ∈ O(g(x)), thinking of O(g(x)) as the class of all functions h(x) such that |h(x)| ≤ C|g(x)| for some constant C. However, the use of the equals sign is customary.

Knuth pointed out that "mathematicians customarily use the = sign as they use the word 'is' in English: Aristotle is a man, but a man isn't necessarily Aristotle."

Other arithmetic operators

Big O notation can also be used in conjunction with other arithmetic operators in more complicated equations.

For example, h(x) + O(f(x)) denotes the collection of functions having the growth of h(x) plus a part whose growth is limited to that of f(x).

Thus, 2n² + 3n + 1 = 2n² + O(n)

expresses the same as 2n² + 3n + 1 = O(n²).

Example

Suppose an algorithm is being developed to operate on a set of n elements.

Its developers are interested in finding a function T(n) that will express how long the algorithm will take to run (in some arbitrary measurement of time) in terms of the number of elements in the input set.

The algorithm works by first calling a subroutine to sort the elements in the set and then performing its own operations.

The sort has a known time complexity of O(n²), and after the subroutine runs the algorithm must take an additional 55n³ + 2n + 10 steps before it terminates.

Thus the overall time complexity of the algorithm can be expressed as T(n) = 55n³ + O(n²).

Here the terms 2n + 10 are subsumed within the faster-growing O(n²).

Again, this usage disregards some of the formal meaning of the "=" symbol, but it does allow one to use the big O notation as a kind of convenient placeholder.
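A minimal sketch of why the lower-order steps fold into the O(n²) term (assuming, purely for illustration, that the sorting subroutine takes exactly n² steps):

    # Hypothetical total step count: n**2 for the sort (a stand-in for the O(n^2)
    # subroutine) plus 55*n**3 + 2*n + 10 for the algorithm's own work.
    def steps(n):
        return n**2 + 55 * n**3 + 2 * n + 10

    # Everything other than 55*n**3 stays within a constant multiple of n**2,
    # which is what T(n) = 55*n**3 + O(n**2) asserts.
    for n in [10, 100, 1000]:
        print(n, (steps(n) - 55 * n**3) / n**2)   # tends to 1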

Multiple uses

In more complicated usage, O(...) can appear in different places in an equation, even several times on each side. The meaning of such statements is as follows: for any functions which satisfy each O(...) on the left side, there are some functions satisfying each O(...) on the right side, such that substituting all these functions into the equation makes the two sides equal.

For example, the equation n^O(1) = O(eⁿ) means: "For any function f(n) = O(1), there is some function g(n) = O(eⁿ) such that n^f(n) = g(n)."

In terms of the "set notation" above, the meaning is that the class of functions represented by the left side is a subset of the class of functions represented by the right side.

In this use the "=" is a formal symbol that unlike the usual use of "=" is not a symmetric relation.

Thus for example n^O(1) = O(eⁿ) does not imply the false statement O(eⁿ) = n^O(1).

Typesetting

Orders of common functions

Further information: Time complexity § Table of common time complexities

Here is a list of classes of functions that are commonly encountered when analyzing the running time of an algorithm.

In each case, c is a positive constant and n increases without bound.

The slower-growing functions are generally listed first.
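A compact summary of the most commonly encountered classes, from slower- to faster-growing (a condensed version of the usual table, not an exhaustive list):

  • O(1) (constant)
  • O(log n) (logarithmic)
  • O(n) (linear)
  • O(n log n) (linearithmic)
  • O(n²) (quadratic)
  • O(nᶜ) (polynomial)
  • O(cⁿ), c > 1 (exponential)
  • O(n!) (factorial)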

Related asymptotic notations

Big O is the most commonly used asymptotic notation for comparing functions.

Together with some other related notations it forms the family of Bachmann–Landau notations.

Little-o notation

"Little o" redirects here. Big O notation_sentence_97

For the baseball player, see Omar Vizquel. Big O notation_sentence_98

Intuitively, the assertion "f(x) is o(g(x))" (read "f(x) is little-o of g(x)") means that g(x) grows much faster than f(x).

Let as before f be a real or complex valued function and g a real valued function, both defined on some unbounded subset of the positive real numbers, such that g(x) is strictly positive for all large enough values of x.

One writes f(x) = o(g(x)) as x → ∞

if for every positive constant ε there exists a constant N such that |f(x)| ≤ ε g(x) for all x ≥ N.

For example, one has 2x = o(x²) and 1/x = o(1), both as x → ∞.
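To see the first of these directly from the definition (a short worked check): given any ε > 0, take N = 2/ε; then for every x ≥ N one has 2x = (2/x) · x² ≤ ε x², so 2x = o(x²).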

Little-o respects a number of arithmetic operations.

For example, if c is a nonzero constant and f = o(g), then c · f = o(g), and if f = o(F) and g = o(G), then f g = o(F G).

It also satisfies a transitivity relation: if f = o(g) and g = o(h), then f = o(h).

Big Omega notation

There are two widespread and incompatible definitions of the statement f(x) = Ω(g(x)) as x → a, where a is some real number, ∞, or −∞, where f and g are real functions defined in a neighbourhood of a, and where g is positive in this neighbourhood.

The first one (chronologically) is used in analytic number theory, and the other one in computational complexity theory.

When the two subjects meet, this situation is bound to generate confusion.
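Stated briefly, the two definitions read as follows (standard forms of each):

  • Hardy–Littlewood (analytic number theory): f(x) = Ω(g(x)) as x → a means limsup (x → a) |f(x)| / g(x) > 0, i.e. f is not o(g).
  • Knuth (computational complexity): f(n) = Ω(g(n)) means g(n) = O(f(n)), i.e. f is bounded below by a positive constant multiple of g for all sufficiently large n.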

The Hardy–Littlewood definition

Simple examples

We have sin x = Ω(1) as x → ∞,

and more precisely sin x = Ω±(1) as x → ∞ (here Ω+ and Ω− are the one-sided Hardy–Littlewood variants: f = Ω+(g) means limsup f/g > 0, f = Ω−(g) means liminf f/g < 0, and Ω± means both hold).

We have sin x + 1 = Ω(1) as x → ∞,

and more precisely sin x + 1 = Ω+(1) as x → ∞;

however sin x + 1 ≠ Ω−(1) as x → ∞.

The Knuth definition

Family of Bachmann–Landau notations

Use in computer science

Further information: Analysis of algorithms

Informally, especially in computer science, the big O notation can often be used somewhat differently to describe an asymptotic tight bound where using big Theta Θ notation might be more factually appropriate in a given context.

For example, when considering a function T(n) = 73n³ + 22n² + 58, all of the following are generally acceptable, but tighter bounds (such as numbers 2 and 3 below) are usually strongly preferred over looser bounds (such as number 1 below).

  1. T(n) = O(n¹⁰⁰)
  2. T(n) = O(n³)
  3. T(n) = Θ(n³)

The equivalent English statements are respectively:

  1. T(n) grows asymptotically no faster than n¹⁰⁰
  2. T(n) grows asymptotically no faster than n³
  3. T(n) grows asymptotically as fast as n³.

So while all three statements are true, progressively more information is contained in each.
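A small sketch with the T(n) above makes the difference concrete: the ratio T(n)/n³ settles near the constant 73, which is what the Θ(n³) statement captures, while the O(n¹⁰⁰) bound, although true, says far less:

    # T(n) = 73n^3 + 22n^2 + 58 from the example above.
    def T(n):
        return 73 * n**3 + 22 * n**2 + 58

    for n in [10, 100, 1000, 10_000]:
        print(n, T(n) / n**3)   # approaches 73, consistent with T(n) = Theta(n^3)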

In some fields, however, the big O notation (number 2 in the lists above) would be used more commonly than the big Theta notation (items numbered 3 in the lists above).

For example, if T(n) represents the running time of a newly developed algorithm for input size n, the inventors and users of the algorithm might be more inclined to put an upper asymptotic bound on how long it will take to run without making an explicit statement about the lower asymptotic bound.

Other notation

In their book Introduction to Algorithms, Cormen, Leiserson, Rivest and Stein consider the set of functions f which satisfy f(n) = O(g(n)) (n → ∞).

In a correct notation this set can, for instance, be called O(g), where O(g) = {f : there exist positive constants c and n₀ such that 0 ≤ f(n) ≤ c g(n) for all n ≥ n₀}.

The authors state that the use of the equality operator (=) to denote set membership rather than the set membership operator (∈) is an abuse of notation, but that doing so has advantages.

Inside an equation or inequality, the use of asymptotic notation stands for an anonymous function in the set O(g), which eliminates lower-order terms, and helps to reduce inessential clutter in equations; for example, 2n² + 3n + 1 = 2n² + Θ(n), where Θ(n) stands for the anonymous function 3n + 1.

Extensions to the Bachmann–Landau notations

Another notation sometimes used in computer science is Õ (read soft-O): f(n) = Õ(g(n)) is shorthand for f(n) = O(g(n) logᵏ g(n)) for some k. Essentially, it is big O notation, ignoring logarithmic factors because the growth-rate effects of some other super-logarithmic function indicate a growth-rate explosion for large-sized input parameters that is more important to predicting bad run-time performance than the finer-point effects contributed by the logarithmic-growth factor(s).

This notation is often used to obviate the "nitpicking" within growth-rates that are stated as too tightly bounded for the matters at hand (since logᵏ n is always o(n^ε) for any constant k and any ε > 0).

Also the L notation, defined as L_n[α, c] = e^((c + o(1)) (ln n)^α (ln ln n)^(1−α)), is convenient for functions that are between polynomial and exponential in terms of ln n.

Generalizations and related usages

The generalization to functions taking values in any normed vector space is straightforward (replacing absolute values by norms), where f and g need not take their values in the same space.

A generalization to functions g taking values in any topological group is also possible.

The "limiting process" x → x₀ can also be generalized by introducing an arbitrary filter base, i.e. to directed nets f and g. The o notation can be used to define derivatives and differentiability in quite general spaces, and also (asymptotic) equivalence of functions, f ∼ g if and only if f − g = o(g),

which is an equivalence relation and a more restrictive notion than the relationship "f is Θ(g)" from above.

(It reduces to lim f / g = 1 if f and g are positive real valued functions.)

For example, 2x is Θ(x), but 2x − x is not o(x), so 2x is not asymptotically equivalent to x.

History (Bachmann–Landau, Hardy, and Vinogradov notations)

Landau never used the big Theta and small omega symbols.

Hardy's symbols were (in terms of the modern O notation) f ≼ g, meaning f = O(g), and f ≺ g, meaning f = o(g).

In analytic number theory the Vinogradov notation f ≪ g, which has the same meaning as f = O(g), is also widespread, and frequently both notations are used in the same paper.

The big-O originally stands for "order of" ("Ordnung", Bachmann 1894), and is thus a Latin letter.

Neither Bachmann nor Landau ever call it "Omicron".

The symbol was much later on (1976) viewed by Knuth as a capital omicron, probably in reference to his definition of the symbol Omega.

The digit zero should not be used.

See also


References and notes

Credit for the contents of this page goes to the authors of the corresponding Wikipedia page: en.wikipedia.org/wiki/Big O notation.