fundamental theorem of algebra

Not to be confused with the fundamental theorem of arithmetic.

The fundamental theorem of algebra states that every non-constant single-variable polynomial with complex coefficients has at least one complex root. This includes polynomials with real coefficients, since every real number is a complex number with imaginary part equal to zero.

Equivalently (by definition), the theorem states that the field of complex numbers is algebraically closed.

The theorem is also stated as follows: every non-zero, single-variable, degree n polynomial with complex coefficients has, counted with multiplicity, exactly n complex roots. The equivalence of the two statements can be proven through the use of successive polynomial division.

In spite of its name, there is no purely algebraic proof of the theorem, since any proof must use some form of the analytic completeness of the real numbers, which is not an algebraic concept. (Even the proof that the equation x^2 - 2 = 0 has a solution involves the definition of the real numbers through some form of completeness, specifically the intermediate value theorem.) Additionally, the theorem is not fundamental for modern algebra; its name was given at a time when algebra was synonymous with the theory of equations.
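As a concrete illustration of the real-coefficient case, the quadratic formula already produces the guaranteed complex roots once the square root is allowed to be imaginary. A minimal sketch in Python (the particular quadratic is an arbitrary example, not from the text above):

```python
import cmath

# x^2 - 2x + 5 has negative discriminant (-16), so no real roots, yet the
# quadratic formula still yields two complex roots, as the theorem guarantees.
a, b, c = 1.0, -2.0, 5.0
disc = cmath.sqrt(b * b - 4 * a * c)        # square root of -16: purely imaginary
roots = [(-b + disc) / (2 * a), (-b - disc) / (2 * a)]

for r in roots:
    # each root satisfies the polynomial up to rounding error
    assert abs(a * r * r + b * r + c) < 1e-12
```

The two roots form a conjugate pair, 1 + 2i and 1 − 2i, consistent with the observation that real-coefficient polynomials have conjugate-closed root sets.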


History

Peter Roth, in his book Arithmetica Philosophica (published in 1608, at Nürnberg, by Johann Lantzenberger), wrote that a polynomial equation of degree n (with real coefficients) may have n solutions. Albert Girard, in his book L'invention nouvelle en l'Algèbre (published in 1629), asserted that a polynomial equation of degree n has n solutions, but he did not state that they had to be real numbers. Furthermore, he added that his assertion holds "unless the equation is incomplete", by which he meant that no coefficient is equal to 0. However, when he explains in detail what he means, it is clear that he actually believes his assertion to be always true; for instance, he shows that the equation x^4 = 4x − 3, although incomplete, has four solutions (counting multiplicities): 1 (twice), −1 + i√2, and −1 − i√2.

As will be mentioned again below, it follows from the fundamental theorem of algebra that every non-constant polynomial with real coefficients can be written as a product of polynomials with real coefficients whose degrees are either 1 or 2. However, in 1702 Leibniz erroneously said that no polynomial of the type x^4 + a^4 (with a real and distinct from 0) can be written in such a way. Later, Nikolaus Bernoulli made the same assertion concerning the polynomial x^4 − 4x^3 + 2x^2 + 4x + 4, but he got a letter from Euler in 1742 (see the section "Le rôle d'Euler" in C. Gilain's article Sur l'histoire du théorème fondamental de l'algèbre: théorie des équations et calcul intégral) in which it was shown that this polynomial is equal to
\left(x^2-(2+\alpha)x+1+\sqrt{7}+\alpha\right)\left(x^2-(2-\alpha)x+1+\sqrt{7}-\alpha\right),
with \alpha = \sqrt{4+2\sqrt{7}}. Also, Euler pointed out that
x^4+a^4=\left(x^2+a\sqrt{2}\,x+a^2\right)\left(x^2-a\sqrt{2}\,x+a^2\right).
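Both factorizations above are easy to confirm numerically by evaluating each side at a few test points; a quick sketch in plain Python (the test points and the value a = 1.3 are arbitrary choices):

```python
import math

# Euler's factorization of Bernoulli's quartic x^4 - 4x^3 + 2x^2 + 4x + 4
alpha = math.sqrt(4 + 2 * math.sqrt(7))
c = 1 + math.sqrt(7)

def lhs_bernoulli(x):
    return x**4 - 4*x**3 + 2*x**2 + 4*x + 4

def rhs_bernoulli(x):
    return (x**2 - (2 + alpha)*x + c + alpha) * (x**2 - (2 - alpha)*x + c - alpha)

# Euler's factorization of x^4 + a^4
def lhs_euler(x, a):
    return x**4 + a**4

def rhs_euler(x, a):
    s = a * math.sqrt(2)
    return (x**2 + s*x + a**2) * (x**2 - s*x + a**2)

for x in (-2.0, 0.5, 3.0):
    assert abs(lhs_bernoulli(x) - rhs_bernoulli(x)) < 1e-9
    assert abs(lhs_euler(x, 1.3) - rhs_euler(x, 1.3)) < 1e-9
```

Agreement at more points than the degree of the polynomials would, of course, already force the identities to hold exactly.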
A first attempt at proving the theorem was made by d'Alembert in 1746, but his proof was incomplete. Among other problems, it implicitly assumed a theorem (now known as Puiseux's theorem) which would not be proved until more than a century later; furthermore, that proof assumed the fundamental theorem of algebra. Other attempts were made by Euler (1749), de Foncenex (1759), Lagrange (1772), and Laplace (1795). These last four attempts implicitly assumed Girard's assertion; to be more precise, the existence of solutions was assumed, and all that remained to be proved was that their form was a + bi for some real numbers a and b. In modern terms, Euler, de Foncenex, Lagrange, and Laplace were assuming the existence of a splitting field of the polynomial p(z).

At the end of the 18th century, two new proofs were published which did not assume the existence of roots, but neither of which was complete. One of them, due to James Wood and mainly algebraic, was published in 1798 and was totally ignored; Wood's proof had an algebraic gap (concerning Wood's proof, see Frank Smithies' article A forgotten paper on the fundamental theorem of algebra). The other was published by Gauss in 1799. It was mainly geometric, but it had a topological gap, filled by Alexander Ostrowski in 1920, as discussed in Smale (1981). (Smale writes, "...I wish to point out what an immense gap Gauss' proof contained. It is a subtle point even today that a real algebraic plane curve cannot enter a disk without leaving. In fact even though Gauss redid this proof 50 years later, the gap remained. It was not until 1920 that Gauss' proof was completed. In the reference Gauss, A. Ostrowski has a paper which does this and gives an excellent discussion of the problem as well...")
A rigorous proof was first published by Argand in 1806 (and revisited in 1813; see the MacTutor biography of Jean-Robert Argand); it was here that, for the first time, the fundamental theorem of algebra was stated for polynomials with complex coefficients, rather than just real coefficients. Gauss produced two other proofs in 1816 and another version of his original proof in 1849.

The first textbook containing a proof of the theorem was Cauchy's Cours d'analyse de l'École Royale Polytechnique (1821). It contained Argand's proof, although Argand is not credited for it.

None of the proofs mentioned so far is constructive. It was Weierstrass who raised for the first time, in the middle of the 19th century, the problem of finding a constructive proof of the fundamental theorem of algebra. He presented his solution, which amounts in modern terms to a combination of the Durand–Kerner method with the homotopy continuation principle, in 1891. Another proof of this kind was obtained by Hellmuth Kneser in 1940 and simplified by his son Martin Kneser in 1981.

Without using countable choice, it is not possible to constructively prove the fundamental theorem of algebra for complex numbers based on the Dedekind real numbers (which are not constructively equivalent to the Cauchy real numbers without countable choice; for the minimum necessary to prove their equivalence, see Bridges, Schuster, and Richman, A weak countable choice principle, 1998). However, Fred Richman proved a reformulated version of the theorem that does work (see Fred Richman, The fundamental theorem of algebra: a constructive development without choice, 1998).
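The Durand–Kerner method mentioned above is simple enough to sketch directly. The version below is a minimal illustration, not a reproduction of Weierstrass's presentation; the function name, starting guesses, and iteration count are conventional choices of mine:

```python
def durand_kerner(coeffs, steps=200):
    """Approximate all complex roots of the polynomial with coefficients
    [a_n, ..., a_0] (highest degree first). Illustrative sketch only."""
    n = len(coeffs) - 1
    lead = coeffs[0]
    c = [a / lead for a in coeffs]               # normalize to a monic polynomial

    def p(z):
        acc = 0j
        for a in c:                               # Horner evaluation
            acc = acc * z + a
        return acc

    # customary starting guesses: powers of a point that is neither real
    # nor on the unit circle
    roots = [(0.4 + 0.9j) ** k for k in range(n)]
    for _ in range(steps):
        updated = []
        for i, z in enumerate(roots):
            denom = 1 + 0j
            for j, w in enumerate(roots):
                if j != i:
                    denom *= z - w
            updated.append(z - p(z) / denom)      # Weierstrass correction
        roots = updated
    return roots
```

All n approximations are updated simultaneously, and in the generic case they converge to the full set of roots at once, which is what makes the method a natural companion to a proof that all n roots exist.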


Proofs

All proofs below involve some analysis, or at least the topological concept of continuity of real or complex functions. Some also use differentiable or even analytic functions. This fact has led to the remark that the fundamental theorem of algebra is neither fundamental, nor a theorem of algebra.

Some proofs of the theorem only prove that any non-constant polynomial with real coefficients has some complex root. This is enough to establish the theorem in the general case because, given a non-constant polynomial p(z) with complex coefficients, the polynomial
q(z)=p(z)\overline{p(\overline{z})}
has only real coefficients and, if z is a zero of q(z), then either z or its conjugate is a root of p(z).

A large number of non-algebraic proofs of the theorem use the fact (sometimes called the "growth lemma") that an n-th degree polynomial function p(z) whose dominant coefficient is 1 behaves like z^n when |z| is large enough. A more precise statement is: there is some positive real number R such that
\tfrac{1}{2}|z^n| \le |p(z)| \le \tfrac{3}{2}|z^n|
whenever |z| ≥ R.

Complex-analytic proofs

Find a closed disk D of radius r centered at the origin such that |p(z)| > |p(0)| whenever |z| ≥ r. The minimum of |p(z)| on D, which must exist since D is compact, is therefore achieved at some point z0 in the interior of D, but not at any point of its boundary. The maximum modulus principle (applied to 1/p(z)) then implies that p(z0) = 0. In other words, z0 is a zero of p(z).
A variation of this proof does not require the use of the maximum modulus principle (in fact, the same argument with minor changes also gives a proof of the maximum modulus principle for holomorphic functions). If we assume by contradiction that a := p(z0) ≠ 0, then, expanding p(z) in powers of z − z0 we can write
p(z) = a + c_k (z-z_0)^k + c_{k+1} (z-z_0)^{k+1} + \cdots + c_n (z-z_0)^n.
Here, the c_j are simply the coefficients of the polynomial z ↦ p(z + z0), and we let k be the index of the first non-zero coefficient after the constant term. But now we see that for z sufficiently close to z0 this has behavior asymptotically similar to the simpler polynomial q(z) = a + c_k (z-z_0)^k, in the sense that (as is easy to check) the function
\frac{|p(z)-q(z)|}{|z-z_0|^{k+1}}
is bounded by some positive constant M in some neighborhood of z0. Therefore, if we define \theta_0 = (\arg(a)+\pi-\arg(c_k))/k and let z = z_0 + r e^{i\theta_0}, then for any sufficiently small positive number r (so that the bound M mentioned above holds), using the triangle inequality we see that
\begin{align}
|p(z)| &\le |q(z)| + M r^{k+1} \\[4pt]
&\le \left|a + (-1)c_k r^k e^{i(\arg(a)-\arg(c_k))}\right| + M r^{k+1} \\[4pt]
&= |a| - |c_k| r^k + M r^{k+1}
\end{align}

When r is sufficiently close to 0 this upper bound for |p(z)| is strictly smaller than |a|, in contradiction to the definition of z0. (Geometrically, we have found an explicit direction θ0 such that if one approaches z0 from that direction one can obtain values p(z) smaller in absolute value than |p(z0)|.)

Another analytic proof can be obtained along this line of thought by observing that, since |p(z)| > |p(0)| outside D, the minimum of |p(z)| on the whole complex plane is achieved at z0. If |p(z0)| > 0, then 1/p is a bounded holomorphic function in the entire complex plane since, for each complex number z, |1/p(z)| ≤ |1/p(z0)|. Applying Liouville's theorem, which states that a bounded entire function must be constant, this would imply that 1/p is constant and therefore that p is constant. This gives a contradiction, and hence p(z0) = 0.

Yet another analytic proof uses the argument principle. Let R be a positive real number large enough so that every root of p(z) has absolute value smaller than R; such a number must exist because every non-constant polynomial function of degree n has at most n zeros. For each r > R, consider the number
\frac{1}{2\pi i}\int_{c(r)}\frac{p'(z)}{p(z)}\,dz,
where c(r) is the circle centered at 0 with radius r oriented counterclockwise; then the argument principle says that this number is the number N of zeros of p(z) in the open ball centered at 0 with radius r, which, since r > R, is the total number of zeros of p(z). On the other hand, the integral of n/z along c(r) divided by 2πi is equal to n. But the difference between the two numbers is
\frac{1}{2\pi i}\int_{c(r)}\left(\frac{p'(z)}{p(z)}-\frac{n}{z}\right)dz=\frac{1}{2\pi i}\int_{c(r)}\frac{zp'(z)-np(z)}{zp(z)}\,dz.
The numerator of the rational expression being integrated has degree at most n − 1 and the degree of the denominator is n + 1. Therefore, the number above tends to 0 as r → +∞. But the number is also equal to N − n, and so N = n.

Still another complex-analytic proof can be given by combining linear algebra with Cauchy's theorem. To establish that every complex polynomial of degree n > 0 has a zero, it suffices to show that every complex square matrix of size n > 0 has a (complex) eigenvalue; this suffices because the zeros of a monic polynomial are exactly the eigenvalues of its companion matrix. The proof of the latter statement is by contradiction.

Let A be a complex square matrix of size n > 0 and let I_n be the unit matrix of the same size. Assume A has no eigenvalues. Consider the resolvent function
R(z) = (zI_n - A)^{-1},
which is a meromorphic function on the complex plane with values in the vector space of matrices. The eigenvalues of A are precisely the poles of R(z). Since, by assumption, A has no eigenvalues, the function R(z) is an entire function and Cauchy's theorem implies that
\int_{c(r)} R(z)\,dz = 0.
On the other hand, R(z) expanded as a geometric series gives:
R(z)=z^{-1}\left(I_n-z^{-1}A\right)^{-1}=z^{-1}\sum_{k=0}^{\infty}\frac{1}{z^k}A^k.
This formula is valid outside the closed disc of radius |A| (the operator norm of A). Let r>|A|. Then
\int_{c(r)}R(z)\,dz=\sum_{k=0}^{\infty}\int_{c(r)}\frac{dz}{z^{k+1}}A^k=2\pi i I_n
(in which only the summand k = 0 has a nonzero integral). This is a contradiction, and so A has an eigenvalue.

Finally, Rouché's theorem gives perhaps the shortest proof of the theorem.
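The zero count delivered by the argument principle can also be observed numerically, by sampling the contour integral (1/2πi) ∮ p'(z)/p(z) dz on a circle. A rough sketch (the sample polynomial, function names, and step count are arbitrary choices):

```python
import cmath
import math

def count_zeros(p, dp, r, samples=2000):
    """Approximate (1/2πi) ∮_{|z|=r} p'(z)/p(z) dz by sampling the circle;
    by the argument principle this equals the number of zeros inside |z| = r."""
    total = 0j
    for k in range(samples):
        theta = 2 * math.pi * k / samples
        z = r * cmath.exp(1j * theta)
        dz = 1j * z * (2 * math.pi / samples)   # c'(θ) dθ for c(θ) = r e^{iθ}
        total += dp(z) / p(z) * dz
    return round((total / (2j * math.pi)).real)

p  = lambda z: z**3 - 1          # three roots, all on the unit circle
dp = lambda z: 3 * z**2
```

Calling count_zeros(p, dp, 2.0) counts all three roots, while count_zeros(p, dp, 0.5) counts none, matching N = n for a radius enclosing every root.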

Topological proofs

Suppose the minimum of |p(z)| on the whole complex plane is achieved at z0; it was seen in the proof which uses Liouville's theorem that such a number must exist. We can write p(z) as a polynomial in z − z0: there is some natural number k and there are some complex numbers c_k, c_{k+1}, ..., c_n such that c_k ≠ 0 and:
p(z)=p(z_0)+c_k(z-z_0)^k+c_{k+1}(z-z_0)^{k+1}+\cdots+c_n(z-z_0)^n.
If p(z0) is nonzero, it follows that if a is a k-th root of −p(z0)/c_k and if t is positive and sufficiently small, then |p(z0 + ta)| < |p(z0)|, which is impossible, since |p(z0)| is the minimum of |p| on D.

For another topological proof by contradiction, suppose that the polynomial p(z) has no roots, and consequently is never equal to 0. Think of the polynomial as a map from the complex plane into the complex plane: it maps any circle |z| = R into a closed loop, a curve P(R). We consider what happens to the winding number of P(R) at the extremes, when R is very large and when R = 0. Writing p(z) = z^n + a_{n-1}z^{n-1} + \cdots + a_0 (dividing by the leading coefficient if necessary), when R is sufficiently large the leading term z^n dominates all the other terms combined; in other words,

\left|z^n\right| > \left|a_{n-1} z^{n-1} + \cdots + a_0\right|.

When z traverses the circle Re^{i\theta} once counter-clockwise (0 \le \theta \le 2\pi), then z^n = R^n e^{in\theta} winds n times counter-clockwise (0 \le \theta \le 2\pi n) around the origin (0,0), and P(R) likewise. At the other extreme, with |z| = 0, the curve P(0) is merely the single point p(0), which must be nonzero because p(z) is never zero. Thus p(0) must be distinct from the origin (0,0), which denotes 0 in the complex plane. The winding number of P(0) around the origin (0,0) is thus 0. Now changing R continuously will deform the loop continuously. At some R the winding number must change. But that can only happen if the curve P(R) includes the origin (0,0) for some R. But then for some z on that circle |z| = R we have p(z) = 0, contradicting our original assumption. Therefore, p(z) has at least one zero.
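The winding-number behavior at the two extremes can be observed numerically by accumulating phase changes along the image curve. A sketch (the cubic and the radii are arbitrary choices of mine):

```python
import cmath
import math

def winding_number(f, samples=4000):
    """Winding number about the origin of the closed curve θ ↦ f(θ),
    θ ∈ [0, 2π], accumulated from wrapped phase differences."""
    total = 0.0
    prev = f(0.0)
    for k in range(1, samples + 1):
        cur = f(2 * math.pi * k / samples)
        ratio = cur / prev
        total += math.atan2(ratio.imag, ratio.real)   # phase step in (-π, π]
        prev = cur
    return round(total / (2 * math.pi))

p = lambda z: z**3 - 2*z + 2                  # arbitrary cubic with p(0) = 2 ≠ 0
loop = lambda R: (lambda t: p(R * cmath.exp(1j * t)))
```

For a large radius such as R = 10 the image loop winds 3 times around the origin (the degree), while for a small radius such as R = 0.1 it stays near p(0) = 2 and winds 0 times; the proof above exploits exactly this jump.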

Algebraic proofs

These proofs of the Fundamental Theorem of Algebra must make use of the following two facts about real numbers that are not algebraic but require only a small amount of analysis (more precisely, the intermediate value theorem in both cases):
  • every polynomial with odd degree and real coefficients has some real root;
  • every non-negative real number has a square root.
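The first fact follows from the intermediate value theorem because an odd-degree real polynomial takes opposite signs for large positive and negative arguments. A bisection sketch, relying only on that sign change (the quintic and the bracket are arbitrary choices of mine):

```python
def bisect_root(p, lo, hi, tol=1e-12):
    """Locate a root of the continuous function p on [lo, hi], assuming
    p(lo) and p(hi) have opposite signs (intermediate value theorem)."""
    flo = p(lo)
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if flo * p(mid) <= 0:
            hi = mid                     # sign change lies in [lo, mid]
        else:
            lo, flo = mid, p(mid)        # sign change lies in [mid, hi]
    return (lo + hi) / 2

p = lambda x: x**5 - 3*x + 1             # odd degree: p(0) = 1 > 0, p(1) = -1 < 0
root = bisect_root(p, 0.0, 1.0)
```

The same routine applied to p(x) = x^2 − c on [0, max(1, c)] yields the square root of the second fact.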
The second fact, together with the quadratic formula, implies the theorem for real quadratic polynomials. In other words, algebraic proofs of the fundamental theorem actually show that if R is any real-closed field, then its extension C = R(√−1) is algebraically closed.

As mentioned above, it suffices to check the statement "every non-constant polynomial p(z) with real coefficients has a complex root". This statement can be proved by induction on the greatest non-negative integer k such that 2^k divides the degree n of p(z). Let a be the coefficient of z^n in p(z) and let F be a splitting field of p(z) over C; in other words, the field F contains C and there are elements z1, z2, ..., zn in F such that
p(z)=a(z-z_1)(z-z_2)\cdots(z-z_n).
If k = 0, then n is odd, and therefore p(z) has a real root. Now, suppose that n = 2^k m (with m odd and k > 0) and that the theorem is already proved when the degree of the polynomial has the form 2^{k−1} m′ with m′ odd. For a real number t, define:
q_t(z)=\prod_{1\le i<j\le n}\left(z-z_i-z_j-t z_i z_j\right).
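The induction step turns on a degree count, sketched here since the imported text breaks off at this point (this is the standard step of the Laplace-style argument): q_t has one factor for each unordered pair {i, j}, so

```latex
\deg q_t \;=\; \binom{n}{2} \;=\; \frac{n(n-1)}{2} \;=\; 2^{k-1}\,m\,(n-1),
\qquad n = 2^k m \ \text{even} \;\Longrightarrow\; m(n-1) \ \text{odd}.
```

Since the coefficients of q_t(z) are symmetric polynomials in z_1, ..., z_n, they are real, so the inductive hypothesis applies to q_t for each real t; from there the standard argument recovers a complex root of p(z) itself.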
