For less elementary aspects of the subject, see Polynomial ring.
In mathematics, a polynomial is an expression consisting of variables (also called indeterminates) and coefficients, that involves only the operations of addition, subtraction, multiplication, and non-negative integer exponentiation of variables.
An example of a polynomial of a single indeterminate x is x² − 4x + 7.
An example in three variables is x³ + 2xyz² − yz + 1.
Polynomials appear in many areas of mathematics and science.
For example, they are used to form polynomial equations, which encode a wide range of problems, from elementary word problems to complicated scientific problems; they are used to define polynomial functions, which appear in settings ranging from basic chemistry and physics to economics and social science; they are used in calculus and numerical analysis to approximate other functions.
The word polynomial joins two diverse roots: the Greek poly, meaning "many", and the Latin nomen, or name.
It was derived from the term binomial by replacing the Latin root bi- with the Greek poly-.
The word polynomial was first used in the 17th century.
Notation and terminology
The x occurring in a polynomial is commonly called a variable or an indeterminate.
When the polynomial is considered as an expression, x is a fixed symbol which does not have any value (its value is "indeterminate").
However, when one considers the function defined by the polynomial, then x represents the argument of the function, and is therefore called a "variable".
Many authors use these two words interchangeably.
It is common to use uppercase letters for indeterminates and corresponding lowercase letters for the variables (or arguments) of the associated function.
A polynomial P in the indeterminate x is commonly denoted either as P or as P(x).
Formally, the name of the polynomial is P, not P(x), but the use of the functional notation P(x) dates from a time when the distinction between a polynomial and the associated function was unclear.
Moreover, the functional notation is often useful for specifying, in a single phrase, a polynomial and its indeterminate.
For example, "let P(x) be a polynomial" is a shorthand for "let P be a polynomial in the indeterminate x".
On the other hand, when it is not necessary to emphasize the name of the indeterminate, many formulas are much simpler and easier to read if the name(s) of the indeterminate(s) do not appear at each occurrence of the polynomial.
The ambiguity of having two notations for a single mathematical object may be formally resolved by considering the general meaning of the functional notation for polynomials.
If a denotes a number, a variable, another polynomial, or, more generally, any expression, then P(a) denotes, by convention, the result of substituting a for x in P. Thus, the polynomial P defines the function
a ↦ P(a),
which is the polynomial function associated to P. Frequently, when using this notation, one supposes that a is a number.
However, one may use it over any domain where addition and multiplication are defined (that is, any ring).
In particular, if a is a polynomial then P(a) is also a polynomial.
More specifically, when a is the indeterminate x, then the image of x by this function is the polynomial P itself (substituting x for x does not change anything).
In other words,
P(x) = P,
which justifies formally the existence of two notations for the same polynomial.
Two such expressions that may be transformed, one to the other, by applying the usual properties of commutativity, associativity and distributivity of addition and multiplication, are considered as defining the same polynomial.
A polynomial in a single indeterminate x can always be written (or rewritten) in the form
an xⁿ + an−1 xⁿ⁻¹ + ⋯ + a2 x² + a1 x + a0,
where a0, …, an are constants called the coefficients. This can be expressed more concisely by using summation notation:
∑ ai xⁱ, where the sum ranges over i = 0, 1, …, n.
That is, a polynomial can either be zero or can be written as the sum of a finite number of non-zero terms.
Each term consists of the product of a number – called the coefficient of the term – and a finite number of indeterminates, raised to nonnegative integer powers.
Further information: Degree of a polynomial
The exponent on an indeterminate in a term is called the degree of that indeterminate in that term; the degree of the term is the sum of the degrees of the indeterminates in that term, and the degree of a polynomial is the largest degree of any term with nonzero coefficient.
Because x = x¹, the degree of an indeterminate without a written exponent is one.
A term with no indeterminates and a polynomial with no indeterminates are called, respectively, a constant term and a constant polynomial.
The degree of a constant term and of a nonzero constant polynomial is 0.
The degree of the zero polynomial 0 (which has no terms at all) is generally treated as not defined (but see below).
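The degree conventions above can be made concrete with a small Python sketch (the coefficient-list representation and the function name are illustrative, not from the article): a univariate polynomial is stored as its list of coefficients, and the degree is the index of the last nonzero coefficient, with the zero polynomial left undefined.

```python
def degree(coeffs):
    # coeffs = [a0, a1, ..., an]: the coefficient of x**i sits at index i.
    # The degree is the largest i whose coefficient is nonzero.
    for i in range(len(coeffs) - 1, -1, -1):
        if coeffs[i] != 0:
            return i
    # Zero polynomial: its degree is left undefined, signalled here by None.
    return None
```

For example, `degree([7, -4, 1])` returns 2 for x² − 4x + 7, and a nonzero constant polynomial has degree 0.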
For example, −5x²y is a term. The coefficient is −5, the indeterminates are x and y, the degree of x is two, while the degree of y is one.
The degree of the entire term is the sum of the degrees of each indeterminate in it, so in this example the degree is 2 + 1 = 3.
Forming a sum of several terms produces a polynomial.
For example, the following is a polynomial:
3x² − 5x + 4.
It consists of three terms: the first is degree two, the second is degree one, and the third is degree zero.
Polynomials of small degree have been given specific names.
A polynomial of degree zero is a constant polynomial, or simply a constant.
Polynomials of degree one, two or three are respectively linear polynomials, quadratic polynomials and cubic polynomials.
For higher degrees, the specific names are not commonly used, although quartic polynomial (for degree four) and quintic polynomial (for degree five) are sometimes used.
The names for the degrees may be applied to the polynomial or to its terms.
For example, the term 2x in x² + 2x + 1 is a linear term in a quadratic polynomial.
The polynomial 0, which may be considered to have no terms at all, is called the zero polynomial.
Unlike other constant polynomials, its degree is not zero.
Rather, the degree of the zero polynomial is either left explicitly undefined, or defined as negative (either −1 or −∞).
The zero polynomial is also unique in that it is the only polynomial in one indeterminate that has an infinite number of roots.
The graph of the zero polynomial, f(x) = 0, is the x-axis.
In the case of polynomials in more than one indeterminate, a polynomial is called homogeneous of degree n if all of its non-zero terms have degree n. The zero polynomial is homogeneous, and, as a homogeneous polynomial, its degree is undefined.
For example, x³y² + 7x²y³ − 3x⁵ is homogeneous of degree 5.
For more details, see Homogeneous polynomial.
The commutative law of addition can be used to rearrange terms into any preferred order.
In polynomials with one indeterminate, the terms are usually ordered according to degree, either in "descending powers of x", with the term of largest degree first, or in "ascending powers of x".
The polynomial in the example above is written in descending powers of x.
The first term has coefficient 3, indeterminate x, and exponent 2.
In the second term, the coefficient is −5.
The third term is a constant.
Because the degree of a non-zero polynomial is the largest degree of any one term, this polynomial has degree two.
Two terms with the same indeterminates raised to the same powers are called "similar terms" or "like terms", and they can be combined, using the distributive law, into a single term whose coefficient is the sum of the coefficients of the terms that were combined.
It may happen that this makes the coefficient 0.
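Combining like terms can be sketched in Python (the representation is an assumption for illustration): each term is a pair of a coefficient and a tuple of exponents, one per indeterminate, and terms with equal exponent tuples are merged by summing their coefficients.

```python
from collections import defaultdict

def combine_like_terms(terms):
    # terms: list of (coefficient, exponents) pairs, where exponents is a
    # tuple of nonnegative integer powers, one per indeterminate.
    acc = defaultdict(int)
    for coef, exps in terms:
        acc[exps] += coef
    # Drop terms whose combined coefficient became 0.
    return {e: c for e, c in acc.items() if c != 0}
```

For instance, 2x + 3x combines to 5x, while x² − x² vanishes entirely, illustrating how a combined coefficient may become 0.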
Polynomials can be classified by the number of terms with nonzero coefficients, so that a one-term polynomial is called a monomial, a two-term polynomial is called a binomial, and a three-term polynomial is called a trinomial.
The term "quadrinomial" is occasionally used for a four-term polynomial.
A real polynomial is a polynomial with real coefficients.
However, a real polynomial function is a function from the reals to the reals that is defined by a real polynomial.
A polynomial in one indeterminate is called a univariate polynomial; a polynomial in more than one indeterminate is called a multivariate polynomial.
A polynomial with two indeterminates is called a bivariate polynomial.
These notions refer more to the kind of polynomials one is generally working with than to individual polynomials; for instance, when working with univariate polynomials, one does not exclude constant polynomials (which may result from the subtraction of non-constant polynomials), although strictly speaking, constant polynomials do not contain any indeterminates at all.
It is possible to further classify multivariate polynomials as bivariate, trivariate, and so on, according to the maximum number of indeterminates allowed.
Again, so that the set of objects under consideration be closed under subtraction, a study of trivariate polynomials usually allows bivariate polynomials, and so on.
It is also common to say simply "polynomials in x, y, and z", listing the indeterminates allowed.
The evaluation of a polynomial consists of substituting a numerical value for each indeterminate and carrying out the indicated multiplications and additions. For polynomials in one indeterminate, the evaluation is usually more efficient (lower number of arithmetic operations to perform) using Horner's method, which rewrites the polynomial in the nested form
a0 + x(a1 + x(a2 + ⋯ + x(an−1 + x an)⋯)).
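As a sketch in Python (names illustrative), Horner's method evaluates a degree-n polynomial with only n multiplications and n additions by folding the coefficients from the highest power down:

```python
def horner(coeffs, x):
    # Evaluate a0 + a1*x + ... + an*x**n by Horner's method.
    # coeffs are listed from the constant term up: [a0, a1, ..., an].
    result = 0
    for a in reversed(coeffs):
        result = result * x + a
    return result
```

For example, evaluating x² − 4x + 7 at x = 3 gives `horner([7, -4, 1], 3)`, which equals 4.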
Addition and subtraction
Polynomials can be added using the associative law of addition (grouping all their terms together into a single sum), possibly followed by reordering (using the commutative law) and combining of like terms.
For example, if
P = 3x² − 2x + 5xy − 2 and Q = −3x² + 3x + 4y² + 8,
then the sum
P + Q = 3x² − 2x + 5xy − 2 − 3x² + 3x + 4y² + 8
can be reordered and regrouped as
P + Q = (3x² − 3x²) + (−2x + 3x) + 5xy + 4y² + (8 − 2)
and then simplified to
P + Q = x + 5xy + 4y² + 6.
When polynomials are added together, the result is another polynomial.
Subtraction of polynomials is similar.
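As an illustrative sketch in Python (the coefficient-list representation and names are my own, not from the article), addition and subtraction of univariate polynomials work termwise on their coefficient lists, padding the shorter list with zeros:

```python
from itertools import zip_longest

def poly_add(p, q):
    # Add polynomials given as coefficient lists [a0, a1, ...].
    return [a + b for a, b in zip_longest(p, q, fillvalue=0)]

def poly_sub(p, q):
    # Subtract q from p, coefficient by coefficient.
    return [a - b for a, b in zip_longest(p, q, fillvalue=0)]
```

For example, (1 + 2x) + (3 + 5x²) is `poly_add([1, 2], [3, 0, 5])`, giving [4, 2, 5], i.e. 4 + 2x + 5x².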
Polynomials can also be multiplied.
To expand the product of two polynomials into a sum of terms, the distributive law is repeatedly applied, which results in each term of one polynomial being multiplied by every term of the other.
For example, if
P = 2x + 3y + 5 and Q = 2x + 5y + xy + 1,
then
PQ = (2x · 2x) + (2x · 5y) + (2x · xy) + (2x · 1) + (3y · 2x) + (3y · 5y) + (3y · xy) + (3y · 1) + (5 · 2x) + (5 · 5y) + (5 · xy) + (5 · 1).
Carrying out the multiplication in each term produces
PQ = 4x² + 10xy + 2x²y + 2x + 6xy + 15y² + 3xy² + 3y + 10x + 25y + 5xy + 5.
Combining similar terms yields
PQ = 4x² + (10 + 6 + 5)xy + 2x²y + (2 + 10)x + 15y² + 3xy² + (3 + 25)y + 5,
which can be simplified to
PQ = 4x² + 21xy + 2x²y + 12x + 15y² + 3xy² + 28y + 5.
As in the example, the product of polynomials is always a polynomial.
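For univariate polynomials, the repeated application of the distributive law described above amounts to a convolution of coefficient lists; a Python sketch (representation and names are illustrative):

```python
def poly_mul(p, q):
    # Multiply non-empty coefficient lists [a0, a1, ...]:
    # each term of p multiplies every term of q, and
    # like powers of x are summed at index i + j.
    result = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            result[i + j] += a * b
    return result
```

For example, (1 + x)(1 − x) is `poly_mul([1, 1], [1, -1])`, giving [1, 0, -1], i.e. 1 − x².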
The division of one polynomial by another is not typically a polynomial.
For example, the fraction 1/(x + 1) is not a polynomial, and it cannot be written as a finite sum of powers of the variable x.
However, the Euclidean division of a(x) by b(x) results in two polynomials, a quotient q(x) and a remainder r(x), such that a = b q + r and degree(r) < degree(b). When the divisor b(x) is monic and linear, that is, b(x) = x − c for some constant c, then the polynomial remainder theorem asserts that the remainder of the division of a(x) by b(x) is the evaluation a(c).
In this case, the quotient may be computed by Ruffini's rule, a special case of synthetic division.
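A Python sketch of Ruffini's rule (names illustrative): dividing a(x) by x − c needs only one pass over the coefficients, and by the polynomial remainder theorem the final value is the remainder a(c).

```python
def ruffini(coeffs, c):
    # Divide a(x) by (x - c) using Ruffini's rule.
    # coeffs are listed from the leading term down: [an, ..., a1, a0].
    # Each step multiplies the running value by c and adds the next coefficient.
    values = [coeffs[0]]
    for a in coeffs[1:]:
        values.append(values[-1] * c + a)
    # All but the last value are the quotient's coefficients;
    # the last value is the remainder, equal to a(c).
    return values[:-1], values[-1]
```

For example, dividing x² − 4x + 7 by x − 3 gives quotient x − 1 and remainder 4, which is indeed the value of x² − 4x + 7 at x = 3.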
All polynomials with coefficients in a unique factorization domain (for example, the integers or a field) also have a factored form in which the polynomial is written as a product of irreducible polynomials and a constant.
This factored form is unique up to the order of the factors and their multiplication by an invertible constant.
In the case of the field of complex numbers, the irreducible factors are linear.
Over the real numbers, they have the degree either one or two.
Over the integers and the rational numbers the irreducible factors may have any degree.
For example, the factored form of
5x³ − 5
is
5(x − 1)(x² + x + 1)
over the integers and the reals, and
5(x − 1)(x + (1 + i√3)/2)(x + (1 − i√3)/2)
over the complex numbers.
The computation of the factored form, called factorization, is, in general, too difficult to be done by hand-written computation.
Main article: Calculus with polynomials
For polynomials whose coefficients come from more abstract settings (for example, if the coefficients are integers modulo some prime number p, or elements of an arbitrary ring), the formula for the derivative can still be interpreted formally, with the coefficient k·ak understood to mean the sum of k copies of ak. For example, over the integers modulo p, the derivative of the polynomial xᵖ + x is the polynomial 1.
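This formal derivative can be sketched in Python (the representation is an assumption for illustration): each coefficient ak is multiplied by its exponent k and reduced modulo p, so the coefficient of x^(p−1) in the derivative of xᵖ vanishes.

```python
def formal_derivative_mod_p(coeffs, p):
    # Formal derivative of a polynomial over the integers modulo p.
    # coeffs = [a0, a1, ..., an]; the derivative's coefficient of
    # x**(k-1) is k*ak mod p (the sum of k copies of ak).
    return [(k * a) % p for k, a in enumerate(coeffs)][1:]
```

For instance, with p = 5, the polynomial x⁵ + x has coefficient list [0, 1, 0, 0, 0, 1], and its formal derivative is the constant polynomial 1.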
See also: Ring of polynomial functions
A polynomial function is a function that can be defined by evaluating a polynomial.
More precisely, a function f of one argument from a given domain is a polynomial function if there exists a polynomial
an xⁿ + an−1 xⁿ⁻¹ + ⋯ + a1 x + a0
that evaluates to f(a) for all a in the domain of f. For example, the function f, defined by
f(x) = x³ − x,
is a polynomial function of one variable. Polynomial functions of several variables are similarly defined, using polynomials in more than one indeterminate, as in
f(x, y) = 2x³ + 4x²y + xy⁵ + y² − 7.
A polynomial function in one real variable can be represented by a graph.
- The graph of the zero polynomial
- f(x) = 0
- is the x-axis.
- The graph of a degree 0 polynomial
- f(x) = a0, where a0 ≠ 0,
- is a horizontal line with y-intercept a0.
- The graph of a degree 1 polynomial (or linear function)
- f(x) = a0 + a1x , where a1 ≠ 0,
- is an oblique line with y-intercept a0 and slope a1.
- The graph of a degree 2 polynomial
- f(x) = a0 + a1x + a2x², where a2 ≠ 0,
- is a parabola.
- The graph of a degree 3 polynomial
- f(x) = a0 + a1x + a2x² + a3x³, where a3 ≠ 0,
- is a cubic curve.
- The graph of any polynomial with degree 2 or greater
- f(x) = a0 + a1x + a2x² + ⋯ + anxⁿ, where an ≠ 0 and n ≥ 2,
- is a continuous non-linear curve.
If the degree is higher than one, the graph does not have any asymptote.
It has two parabolic branches with vertical direction (one branch for positive x and one for negative x).
Polynomial graphs are analyzed in calculus using intercepts, slopes, concavity, and end behavior.
Main article: Algebraic equation
For example,
3x² + 4x − 5 = 0
is a polynomial equation.
When considering equations, the indeterminates (variables) of polynomials are also called unknowns, and the solutions are the possible values of the unknowns for which the equality is true (in general more than one solution may exist).
A polynomial equation stands in contrast to a polynomial identity like (x + y)(x − y) = x² − y², where both expressions represent the same polynomial in different forms, and as a consequence any evaluation of both members gives a valid equality.
For higher degrees, the Abel–Ruffini theorem asserts that there can not exist a general formula in radicals. Nevertheless, every polynomial equation of positive degree with complex coefficients has at least one complex solution; this fact is called the fundamental theorem of algebra.
A number a is a root of a polynomial P if and only if the linear polynomial x − a divides P, that is if there is another polynomial Q such that P = (x – a) Q.
It may happen that x − a divides P more than once: if (x − a)² divides P then a is called a multiple root of P, and otherwise a is called a simple root of P. If P is a nonzero polynomial, there is a highest power m such that (x − a)ᵐ divides P, which is called the multiplicity of the root a in P. When P is the zero polynomial, the corresponding polynomial equation is trivial, and this case is usually excluded when considering roots, as, with the above definitions, every number is a root of the zero polynomial, with an undefined multiplicity.
With this exception made, the number of roots of P, even counted with their respective multiplicities, cannot exceed the degree of P. The relation between the coefficients of a polynomial and its roots is described by Vieta's formulas.
Some polynomials, such as x² + 1, do not have any roots among the real numbers.
By successively dividing out factors x − a, one sees that any polynomial with complex coefficients can be written as a constant (its leading coefficient) times a product of such polynomial factors of degree 1; as a consequence, the number of (complex) roots counted with their multiplicities is exactly equal to the degree of the polynomial.
When there is no algebraic expression for the roots, or when such an expression exists but is too complicated to be useful, the only way of solving is to compute numerical approximations of the solutions.
There are many methods for that; some are restricted to polynomials and others may apply to any continuous function.
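One of the simplest such methods, applicable to any continuous function, is bisection; a Python sketch (names illustrative), assuming the function changes sign on the given interval:

```python
def bisect_root(f, lo, hi, tol=1e-12):
    # Approximate a root of a continuous function f on [lo, hi]
    # by bisection; f(lo) and f(hi) must have opposite signs.
    if f(lo) * f(hi) > 0:
        raise ValueError("f(lo) and f(hi) must have opposite signs")
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if f(lo) * f(mid) <= 0:
            hi = mid  # root lies in [lo, mid]
        else:
            lo = mid  # root lies in [mid, hi]
    return (lo + hi) / 2
```

For example, applying it to the polynomial x² − 2 on [1, 2] approximates the irrational root √2.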
For polynomials in more than one indeterminate, the combinations of values for the variables for which the polynomial function takes the value zero are generally called zeros instead of "roots".
The study of the sets of zeros of polynomials is the object of algebraic geometry.
For a set of polynomial equations in several unknowns, there are algorithms to decide whether they have a finite number of complex solutions, and, if this number is finite, for computing the solutions.
The special case where all the polynomials are of degree one is called a system of linear equations, for which another range of different solution methods exist, including the classical Gaussian elimination.
A polynomial equation for which only integer solutions are sought is called a Diophantine equation. Solving Diophantine equations is generally a very hard task. Some of the most famous problems that have been solved during the last fifty years are related to Diophantine equations, such as Fermat's Last Theorem.
There are several generalizations of the concept of polynomials.
Main article: Trigonometric polynomial
A trigonometric polynomial is a finite linear combination of functions sin(nx) and cos(nx) with n taking the values of one or more natural numbers. The coefficients may be taken as real numbers, for real-valued functions.
If sin(nx) and cos(nx) are expanded in terms of sin(x) and cos(x), a trigonometric polynomial becomes a polynomial in the two variables sin(x) and cos(x) (using the multiple-angle formulae). Conversely, every polynomial in sin(x) and cos(x) may be converted, using the product-to-sum identities, into a linear combination of functions sin(nx) and cos(nx). This equivalence explains why such linear combinations are called trigonometric polynomials.
They are used also in the discrete Fourier transform.
Main article: Matrix polynomial
Given an ordinary, scalar-valued polynomial
P(x) = an xⁿ + an−1 xⁿ⁻¹ + ⋯ + a1 x + a0,
this polynomial evaluated at a matrix A is
P(A) = an Aⁿ + an−1 Aⁿ⁻¹ + ⋯ + a1 A + a0 I,
where I is the identity matrix.
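Evaluating a polynomial at a square matrix can be sketched in pure Python (the list-of-lists representation and names are illustrative), with the constant term multiplying the identity matrix and Horner's scheme applied to matrices:

```python
def mat_mul(A, B):
    # Product of two square matrices given as lists of rows.
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def mat_poly(coeffs, A):
    # Evaluate a0 + a1*x + ... + an*x**n at the square matrix A,
    # the constant term a0 multiplying the identity matrix I.
    n = len(A)
    I = [[1 if i == j else 0 for j in range(n)] for i in range(n)]
    result = [[0] * n for _ in range(n)]
    for a in reversed(coeffs):  # Horner's scheme on matrices
        result = mat_mul(result, A)
        result = [[result[i][j] + a * I[i][j] for j in range(n)]
                  for i in range(n)]
    return result
```

For example, evaluating P(x) = x + 1 at the diagonal matrix diag(2, 3) yields diag(3, 4).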
A matrix polynomial equation is an equality between two matrix polynomials, which holds for the specific matrices in question.
A matrix polynomial identity is a matrix polynomial equation which holds for all matrices A in a specified matrix ring Mn(R).
Main article: Laurent polynomial
Laurent polynomials are like polynomials, but allow negative powers of the variable(s) to occur.
Main article: Rational function
While polynomial functions are defined for all values of the variables, a rational function is defined only for the values of the variables for which the denominator is not zero.
The rational fractions include the Laurent polynomials, but do not limit denominators to powers of an indeterminate.
Main article: Formal power series
Formal power series are like polynomials, but allow infinitely many non-zero terms to occur, so that they do not have finite degree.
Unlike polynomials they cannot in general be explicitly and fully written down (just like irrational numbers cannot), but the rules for manipulating their terms are the same as for polynomials.
Non-formal power series also generalize polynomials, but the multiplication of two power series may not converge.
A bivariate polynomial where the second variable is substituted by an exponential function applied to the first variable, for example P(x, eˣ), may be called an exponential polynomial.
Main article: Polynomial ring
In abstract algebra, one distinguishes between polynomials and polynomial functions.
A polynomial f in one indeterminate x over a ring R is defined as a formal expression of the form
f = an xⁿ + an−1 xⁿ⁻¹ + ⋯ + a1 x + a0,
where n is a natural number, the coefficients a0, …, an are elements of R, and x is a formal symbol, whose powers xⁱ are just placeholders for the corresponding coefficients ai, so that the given formal expression is just a way to encode the sequence (a0, a1, …), where there is an n such that ai = 0 for all i > n. Two polynomials sharing the same value of n are considered equal if and only if the sequences of their coefficients are equal; furthermore any polynomial is equal to any polynomial with greater value of n obtained from it by adding terms in front whose coefficient is zero.
These polynomials can be added by simply adding corresponding coefficients (the rule for extending by terms with zero coefficients can be used to make sure such coefficients exist).
Thus each polynomial is actually equal to the sum of the terms used in its formal expression, if such a term ai xⁱ is interpreted as a polynomial that has zero coefficients at all powers of x other than xⁱ.
Then to define multiplication, it suffices by the distributive law to describe the product of any two such terms, which is given by the rule
a xᵏ · b xˡ = ab xᵏ⁺ˡ
for all elements a, b of the ring R and all nonnegative integers k and l.
Thus the set of all polynomials with coefficients in the ring R forms itself a ring, the ring of polynomials over R, which is denoted by R[x].
The map from R to R[x] sending r to rx⁰ is an injective homomorphism of rings, by which R is viewed as a subring of R[x].
One can think of the ring R[x] as arising from R by adding one new element x to R, and extending in a minimal way to a ring in which x satisfies no other relations than the obligatory ones, plus commutation with all elements of R (that is xr = rx).
To do this, one must add all powers of x and their linear combinations as well.
Formation of the polynomial ring, together with forming factor rings by factoring out ideals, are important tools for constructing new rings out of known ones.
For instance, the ring (in fact field) of complex numbers can be constructed from the polynomial ring R[x] over the real numbers by factoring out the ideal of multiples of the polynomial x² + 1.
If R is commutative, then one can associate with every polynomial P in R[x] a polynomial function f with domain and range equal to R. (More generally, one can take domain and range to be any same unital associative algebra over R.) One obtains the value f(r) by substitution of the value r for the symbol x in P. One reason to distinguish between polynomials and polynomial functions is that, over some rings, different polynomials may give rise to the same polynomial function (see Fermat's little theorem for an example where R is the integers modulo p).
This is not the case when R is the real or complex numbers, whence the two concepts are not always distinguished in analysis.
An even more important reason to distinguish between polynomials and polynomial functions is that many operations on polynomials (like Euclidean division) require looking at what a polynomial is composed of as an expression rather than evaluating it at some constant value for x.
In commutative algebra, one major focus of study is divisibility among polynomials.
If R is an integral domain and f and g are polynomials in R[x], it is said that f divides g or f is a divisor of g if there exists a polynomial q in R[x] such that f q = g. One can show that every zero gives rise to a linear divisor, or more formally, if f is a polynomial in R[x] and r is an element of R such that f(r) = 0, then the polynomial (x − r) divides f. The converse is also true.
The quotient can be computed using the polynomial long division.
If F is a field and f and g are polynomials in F[x] with g ≠ 0, then there exist unique polynomials q and r in F[x] with
f = q g + r
and such that the degree of r is smaller than the degree of g (using the convention that the polynomial 0 has a negative degree).
Analogously, prime polynomials (more correctly, irreducible polynomials) can be defined as non-zero polynomials which cannot be factorized into the product of two non-constant polynomials.
In the case of coefficients in a ring, "non-constant" must be replaced by "non-constant or non-unit" (both definitions agree in the case of coefficients in a field).
Any polynomial may be decomposed into the product of an invertible constant by a product of irreducible polynomials.
If the coefficients belong to a field or a unique factorization domain this decomposition is unique up to the order of the factors and the multiplication of any non-unit factor by a unit (and division of the unit factor by the same unit).
When the coefficients belong to integers, rational numbers or a finite field, there are algorithms to test irreducibility and to compute the factorization into irreducible polynomials (see Factorization of polynomials).
These algorithms are not practicable for hand-written computation, but are available in any computer algebra system.
Eisenstein's criterion can also be used in some cases to determine irreducibility.
Main article: Positional notation
In modern positional number systems, such as the decimal system, the digits and their positions in the representation of an integer, for example, 45, are a shorthand notation for a polynomial in the radix or base, in this case, 4 × 10¹ + 5 × 10⁰.
As another example, in radix 5, a string of digits such as 132 denotes the (decimal) number 1 × 5² + 3 × 5¹ + 2 × 5⁰ = 42.
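The digits of this polynomial representation can be computed by repeated division with remainder; a Python sketch (names illustrative):

```python
def digits(a, b):
    # Digits r0, r1, ..., rm of the positive integer a in base b > 1,
    # least significant first, so that a = sum(r_i * b**i).
    out = []
    while a > 0:
        a, r = divmod(a, b)  # peel off the last base-b digit
        out.append(r)
    return out
```

For example, `digits(42, 5)` returns [2, 3, 1], matching the radix-5 string 132 read from its least significant digit.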
This representation is unique.
Let b be a positive integer greater than 1.
Then every positive integer a can be expressed uniquely in the form
a = rm bᵐ + rm−1 bᵐ⁻¹ + ⋯ + r1 b + r0,
where m is a nonnegative integer and the r's are integers such that 0 < rm < b and 0 ≤ ri < b for i = 0, 1, …, m − 1.
Interpolation and approximation
The simple structure of polynomial functions makes them quite useful in analyzing general functions using polynomial approximations.
Important examples in calculus are Taylor's theorem, which roughly states that every differentiable function locally looks like a polynomial function, and the Stone–Weierstrass theorem, which states that every continuous function defined on a compact interval of the real axis can be approximated on the whole interval as closely as desired by a polynomial function.
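A small Python sketch of such an approximation (the function name is illustrative): the degree-n Taylor polynomial of the exponential function at 0 approximates exp ever more closely as n grows.

```python
import math

def taylor_exp(x, n):
    # Degree-n Taylor polynomial of exp at 0, evaluated at x:
    # 1 + x + x**2/2! + ... + x**n/n!
    return sum(x ** k / math.factorial(k) for k in range(n + 1))
```

For example, the degree-15 Taylor polynomial evaluated at 1 agrees with e to well beyond ten decimal places.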
Polynomials are frequently used to encode information about some other object.
The term "polynomial", as an adjective, can also be used for quantities or functions that can be written in polynomial form.
For example, in computational complexity theory the phrase polynomial time means that the time it takes to complete an algorithm is bounded by a polynomial function of some variable, such as the size of the input.
Determining the roots of polynomials, or "solving algebraic equations", is among the oldest problems in mathematics.
However, the elegant and practical notation we use today only developed beginning in the 15th century.
Before that, equations were written out in words.
For example, an algebra problem from the Chinese Arithmetic in Nine Sections, circa 200 BCE, begins "Three sheafs of good crop, two sheafs of mediocre crop, and one sheaf of bad crop are sold for 29 dou."
We would write 3x + 2y + z = 29.
History of the notation
Main article: History of mathematical notation
The signs + for addition, − for subtraction, and the use of a letter for an unknown appear in Michael Stifel's Arithmetica integra, 1544.
René Descartes, in La géométrie, 1637, introduced the concept of the graph of a polynomial equation.
He popularized the use of letters from the beginning of the alphabet to denote constants and letters from the end of the alphabet to denote variables, as can be seen above, in the general formula for a polynomial in one variable, where the a's denote constants and x denotes a variable.
Descartes introduced the use of superscripts to denote exponents as well.
- List of polynomial topics
- Polynomial sequence
- Polynomial transformation – Transformation of a polynomial induced by a transformation of its roots
- Polynomial mapping – Function such that the coordinates of the image of a point are polynomial functions of the coordinates of the point
Credits to the contents of this page go to the authors of the corresponding Wikipedia page: en.wikipedia.org/wiki/Polynomial.