Euclidean algorithm
[ temporary import ]
please note:
- the content below is remote from Wikipedia
- it has been imported raw for GetWiki
{{About|an algorithm for the greatest common divisor|the mathematics of space|Euclidean geometry|other uses of "Euclidean"|Euclidean (disambiguation)}}{{short description|Algorithm for computing greatest common divisors}}

[[File:Euclid's algorithm Book VII Proposition 2 3.png|300px|thumb|right|Euclid's method for finding the greatest common divisor (GCD) of two starting lengths BA and DC, both defined to be multiples of a common "unit" length. The length DC being shorter, it is used to "measure" BA, but only once because the remainder EA is less than DC. EA now measures (twice) the shorter length DC, with remainder FC shorter than EA. Then FC measures (three times) length EA. Because there is no remainder, the process ends with FC being the GCD. On the right is Nicomachus's example with the numbers 49 and 21, resulting in their GCD of 7 (derived from Heath 1908:300).]]

In mathematics, the Euclidean algorithm, or Euclid's algorithm, is an efficient method for computing the greatest common divisor (GCD) of two numbers, the largest number that divides both of them without leaving a remainder. (Some widely used textbooks, such as I. N. Herstein's Topics in Algebra and Serge Lang's Algebra, use the term "Euclidean algorithm" to refer to Euclidean division.) It is named after the ancient Greek mathematician Euclid, who first described it in his Elements (c. 300 BC). It is an example of an algorithm, a step-by-step procedure for performing a calculation according to well-defined rules, and is one of the oldest algorithms in common use. It can be used to reduce fractions to their simplest form, and is a part of many other number-theoretic and cryptographic calculations.

The Euclidean algorithm is based on the principle that the greatest common divisor of two numbers does not change if the larger number is replaced by its difference with the smaller number. For example, 21 is the GCD of 252 and 105 (as 252 = 21 × 12 and 105 = 21 × 5), and the same number 21 is also the GCD of 105 and 252 − 105 = 147. Since this replacement reduces the larger of the two numbers, repeating this process gives successively smaller pairs of numbers until the two numbers become equal. When that occurs, they are the GCD of the original two numbers. By reversing the steps, the GCD can be expressed as a sum of the two original numbers each multiplied by a positive or negative integer, e.g., {{nowrap begin}}21 = 5 × 105 + (−2) × 252.{{nowrap end}} The fact that the GCD can always be expressed in this way is known as Bézout's identity.

The version of the Euclidean algorithm described above (and by Euclid) can take many subtraction steps to find the GCD when one of the given numbers is much bigger than the other. A more efficient version of the algorithm shortcuts these steps, instead replacing the larger of the two numbers by its remainder when divided by the smaller of the two (with this version, the algorithm stops when reaching a zero remainder). With this improvement, the algorithm never requires more steps than five times the number of digits (base 10) of the smaller integer. This was proven by Gabriel Lamé in 1844, and marks the beginning of computational complexity theory. Additional methods for improving the algorithm's efficiency were developed in the 20th century.

The Euclidean algorithm has many theoretical and practical applications. It is used for reducing fractions to their simplest form and for performing division in modular arithmetic. Computations using this algorithm form part of the cryptographic protocols that are used to secure internet communications, and in methods for breaking these cryptosystems by factoring large composite numbers. The Euclidean algorithm may be used to solve Diophantine equations, such as finding numbers that satisfy multiple congruences according to the Chinese remainder theorem, to construct continued fractions, and to find accurate rational approximations to real numbers. Finally, it can be used as a basic tool for proving theorems in number theory such as Lagrange's four-square theorem and the uniqueness of prime factorizations. The original algorithm was described only for natural numbers and geometric lengths (real numbers), but the algorithm was generalized in the 19th century to other types of numbers, such as Gaussian integers and polynomials of one variable. This led to modern abstract algebraic notions such as Euclidean domains.

Background: greatest common divisor

The Euclidean algorithm calculates the greatest common divisor (GCD) of two natural numbers a and b. The greatest common divisor g is the largest natural number that divides both a and b without leaving a remainder. Synonyms for the GCD include the greatest common factor (GCF), the highest common factor (HCF), the highest common divisor (HCD), and the greatest common measure (GCM). The greatest common divisor is often written as gcd(a, b) or, more simply, as (a, b),{{Harvnb|Stark|1978|p=16}} although the latter notation is ambiguous, also being used for concepts such as an ideal in the ring of integers, which is closely related to the GCD.

If gcd(a, b) = 1, then a and b are said to be coprime (or relatively prime).{{Harvnb|Stark|1978|p=21}} This property does not imply that a or b are themselves prime numbers.{{Harvnb|LeVeque|1996|p=32}} For example, neither 6 nor 35 is a prime number, since they both have two prime factors: 6 = 2 × 3 and 35 = 5 × 7. Nevertheless, 6 and 35 are coprime. No natural number other than 1 divides both 6 and 35, since they have no prime factors in common.

[[File:24x60.svg|thumb|upright|alt="Tall, slender rectangle divided into a grid of squares. The rectangle is two squares wide and five squares tall."|A 24-by-60 rectangle is covered with ten 12-by-12 square tiles, where 12 is the GCD of 24 and 60. More generally, an a-by-b rectangle can be covered with square tiles of side-length c only if c is a common divisor of a and b.]]

Let g = gcd(a, b). Since a and b are both multiples of g, they can be written a = mg and b = ng, and there is no larger number G > g for which this is true. The natural numbers m and n must be coprime, since any common factor could be factored out of m and n to make g greater. Thus, any other number c that divides both a and b must also divide g. The greatest common divisor g of a and b is the unique (positive) common divisor of a and b that is divisible by any other common divisor c.{{Harvnb|LeVeque|1996|p=31}}

The GCD can be visualized as follows (Grossman, J. W., Discrete Mathematics, Macmillan, 1990, p. 213). Consider a rectangular area a by b, and any common divisor c that divides both a and b exactly. The sides of the rectangle can be divided into segments of length c, which divides the rectangle into a grid of squares of side length c. The greatest common divisor g is the largest value of c for which this is possible. For illustration, a 24-by-60 rectangular area can be divided into a grid of: 1-by-1 squares, 2-by-2 squares, 3-by-3 squares, 4-by-4 squares, 6-by-6 squares or 12-by-12 squares. Therefore, 12 is the greatest common divisor of 24 and 60. A 24-by-60 rectangular area can be divided into a grid of 12-by-12 squares, with two squares along one edge (24/12 = 2) and five squares along the other (60/12 = 5).

The GCD of two numbers a and b is the product of the prime factors shared by the two numbers, where the same prime factor may be used multiple times, but only as long as the product of these factors divides both a and b.{{Harvnb|Schroeder|2005|pp=21–22}} For example, since 1386 can be factored into 2 × 3 × 3 × 7 × 11, and 3213 can be factored into 3 × 3 × 3 × 7 × 17, the greatest common divisor of 1386 and 3213 equals 63 = 3 × 3 × 7, the product of their shared prime factors. If two numbers have no prime factors in common, their greatest common divisor is 1 (obtained here as an instance of the empty product); in other words, they are coprime.

A key advantage of the Euclidean algorithm is that it can find the GCD efficiently without having to compute the prime factors.{{Harvnb|Schroeder|2005|p=19}} (Ogilvy, C. S. and Anderson, J. T., Excursions in Number Theory, Oxford University Press, 1966, pp. 27–29.) Factorization of large integers is believed to be a computationally very difficult problem, and the security of many widely used cryptographic protocols is based upon its infeasibility.{{Harvnb|Schroeder|2005|pp=216–219}}

Another definition of the GCD is helpful in advanced mathematics, particularly ring theory. The greatest common divisor g of two nonzero numbers a and b is also their smallest positive integral linear combination, that is, the smallest positive number of the form ua + vb where u and v are integers. The set of all integral linear combinations of a and b is actually the same as the set of all multiples of g (mg, where m is an integer). In modern mathematical language, the ideal generated by a and b is the ideal generated by g alone (an ideal generated by a single element is called a principal ideal, and all ideals of the integers are principal ideals). Some properties of the GCD are in fact easier to see with this description, for instance the fact that any common divisor of a and b also divides the GCD (it divides both terms of ua + vb). The equivalence of this GCD definition with the other definitions is described below.

The GCD of three or more numbers equals the product of the prime factors common to all the numbers,{{Harvnb|Stark|1978|p=25}} but it can also be calculated by repeatedly taking the GCDs of pairs of numbers.{{Harvnb|Ore|1948|pp=47–48}} For example,
{{math|1=gcd(a, b, c) = gcd(a, gcd(b, c)) = gcd(gcd(a, b), c) = gcd(gcd(a, c), b).}}
Thus, Euclid's algorithm, which computes the GCD of two integers, suffices to calculate the GCD of arbitrarily many integers.
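As a minimal sketch of this pairwise reduction (Python is used here purely for illustration; the helper name gcd_many is an invented label, not part of the article), gcd can simply be folded over a list of integers:

from functools import reduce

def gcd(a, b):
    """Greatest common divisor of two nonnegative integers (division-based Euclid)."""
    while b:
        a, b = b, a % b
    return a

def gcd_many(*numbers):
    """GCD of arbitrarily many integers via gcd(a, b, c) = gcd(gcd(a, b), c)."""
    return reduce(gcd, numbers)

print(gcd_many(1386, 3213))        # 63, the example from above
print(gcd_many(24, 60, 252, 105))  # 3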

Description

Procedure

The Euclidean algorithm proceeds in a series of steps such that the output of each step is used as an input for the next one. Let k be an integer that counts the steps of the algorithm, starting with zero. Thus, the initial step corresponds to k = 0, the next step corresponds to k = 1, and so on. Each step begins with two nonnegative remainders rk−1 and rk−2. Since the algorithm ensures that the remainders decrease steadily with every step, rk−1 is less than its predecessor rk−2. The goal of the kth step is to find a quotient qk and remainder rk that satisfy the equation
r_{k-2} = q_k r_{k-1} + r_k
and that have 0 ≤ rk < rk−1. In other words, multiples of the smaller number rk−1 are subtracted from the larger number rk−2 until the remainder rk is smaller than rk−1.

In pseudocode, the subtraction-based version, which stays closest to Euclid's original formulation, repeatedly replaces the larger of the two numbers by the difference of the two until they become equal:

function gcd(a, b)
    while a ≠ b
        if a > b
            a := a − b;
        else
            b := b − a;
    return a;
The variables a and b alternate holding the previous remainders rk−1 and rk−2. Assume that a is larger than b at the beginning of an iteration; then a equals rk−2, since rk−2 > rk−1. During the loop iteration, a is reduced by multiples of the previous remainder b until a is smaller than b. Then a is the next remainder rk. Then b is reduced by multiples of a until it is again smaller than a, giving the next remainder rk+1, and so on. The recursive version{{Harvnb|Stillwell|1997|p=14}} is based on the equality of the GCDs of successive remainders and the stopping condition gcd(rN−1, 0) = rN−1.
function gcd(a, b)
    if b = 0
        return a;
    else
        return gcd(b, a mod b);
For illustration, the gcd(1071, 462) is calculated from the equivalent gcd(462, 1071 mod 462) = gcd(462, 147). The latter GCD is calculated from the gcd(147, 462 mod 147) = gcd(147, 21), which in turn is calculated from the gcd(21, 147 mod 21) = gcd(21, 0) = 21.
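As a concrete sketch (Python, chosen here only for illustration), both pseudocode versions above transcribe directly into runnable code; the assertions reproduce the gcd(1071, 462) = 21 example:

def gcd_iterative(a, b):
    """Division-based Euclidean algorithm, iterative form."""
    while b != 0:
        a, b = b, a % b   # replace (a, b) by (b, a mod b)
    return a

def gcd_recursive(a, b):
    """Division-based Euclidean algorithm, recursive form."""
    if b == 0:
        return a
    return gcd_recursive(b, a % b)

assert gcd_iterative(1071, 462) == 21
assert gcd_recursive(1071, 462) == 21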

Method of least absolute remainders

In another version of Euclid's algorithm, the quotient at each step is increased by one if the resulting negative remainder is smaller in magnitude than the typical positive remainder.{{Harvnb|Ore|1948|p=43}} (Stewart, B. M., Theory of Numbers, 2nd ed., Macmillan, 1964, pp. 43–44.) Previously, the equation
{{math|1=rk−2 = qk rk−1 + rk}}
assumed that {{math|1={{!}}rk−1{{!}} > rk > 0}}. However, an alternative negative remainder {{math|1=ek}} can be computed:
{{math|1=rk−2 = (qk + 1) rk−1 + ek}}
if {{math|1=rk−1 > 0}} or
{{math|1=rk−2 = (qk − 1) rk−1 + ek}}
if {{math|1=rk−1 < 0}}.
If rk is replaced by ek whenever {{math|1={{!}}ek{{!}} < {{!}}rk{{!}}}}, one obtains a variant of the Euclidean algorithm in which {{math|1={{!}}rk{{!}} ≤ {{!}}rk−1{{!}}/2}} at every step, so the remainders shrink at least as fast as in the standard version.
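A minimal sketch of this variant (Python, illustrative only; the name gcd_least_abs is invented here): at each step the ordinary remainder is replaced by the negative remainder whenever the latter is smaller in magnitude.

def gcd_least_abs(a, b):
    """Euclidean algorithm using least absolute remainders:
    each remainder satisfies |r| <= |previous remainder| / 2."""
    a, b = abs(a), abs(b)
    while b:
        r = a % b          # ordinary remainder, 0 <= r < b
        if r > b - r:      # the negative remainder r - b is smaller in magnitude,
            r = b - r      # so continue with its absolute value instead
        a, b = b, r
    return a

assert gcd_least_abs(1071, 462) == 21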
Matrix method

The sequence of equations produced by the Euclidean algorithm can be written as a product of 2-by-2 quotient matrices multiplying a two-dimensional remainder vector:

\begin{pmatrix} a \\ b \end{pmatrix} = \begin{pmatrix} q_0 & 1 \\ 1 & 0 \end{pmatrix} \begin{pmatrix} b \\ r_0 \end{pmatrix} = \begin{pmatrix} q_0 & 1 \\ 1 & 0 \end{pmatrix} \begin{pmatrix} q_1 & 1 \\ 1 & 0 \end{pmatrix} \begin{pmatrix} r_0 \\ r_1 \end{pmatrix} = \cdots = \prod_{i=0}^N \begin{pmatrix} q_i & 1 \\ 1 & 0 \end{pmatrix} \begin{pmatrix} r_{N-1} \\ 0 \end{pmatrix} \,.

Let M represent the product of all the quotient matrices:

\mathbf{M} = \begin{pmatrix} m_{11} & m_{12} \\ m_{21} & m_{22} \end{pmatrix} = \prod_{i=0}^N \begin{pmatrix} q_i & 1 \\ 1 & 0 \end{pmatrix} = \begin{pmatrix} q_0 & 1 \\ 1 & 0 \end{pmatrix} \begin{pmatrix} q_1 & 1 \\ 1 & 0 \end{pmatrix} \cdots \begin{pmatrix} q_{N} & 1 \\ 1 & 0 \end{pmatrix} \,.

This simplifies the Euclidean algorithm to the form

\begin{pmatrix} a \\ b \end{pmatrix} = \mathbf{M} \begin{pmatrix} r_{N-1} \\ 0 \end{pmatrix} = \mathbf{M} \begin{pmatrix} g \\ 0 \end{pmatrix} \,.

To express g as a linear sum of a and b, both sides of this equation can be multiplied by the inverse of the matrix M. (Bach, E. and Shallit, J., Algorithmic Number Theory, MIT Press, 1996, pp. 70–73.) The determinant of M equals (−1)N+1, since it equals the product of the determinants of the quotient matrices, each of which is negative one. Since the determinant of M is never zero, the vector of the final remainders can be solved using the inverse of M:

\begin{pmatrix} g \\ 0 \end{pmatrix} = \mathbf{M}^{-1} \begin{pmatrix} a \\ b \end{pmatrix} = (-1)^{N+1} \begin{pmatrix} m_{22} & -m_{12} \\ -m_{21} & m_{11} \end{pmatrix} \begin{pmatrix} a \\ b \end{pmatrix} \,.

Since the top equation gives
{{math|1=g = (−1)N+1 ( m22 a − m12 b),}}
the two integers of Bézout's identity are s = (−1)N+1m22 and t = (−1)Nm12. The matrix method is as efficient as the equivalent recursion, with two multiplications and two additions per step of the Euclidean algorithm.
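For a concrete sketch of how s and t can be obtained in practice, the extended Euclidean algorithm below (Python, illustrative only; the function name extended_gcd is an invented label) carries the Bézout coefficients along with each remainder, which is equivalent to accumulating the quotient matrices above.

def extended_gcd(a, b):
    """Return (g, s, t) with g = gcd(a, b) and g = s*a + t*b (Bezout's identity)."""
    old_r, r = a, b          # remainder sequence
    old_s, s = 1, 0          # coefficients of a
    old_t, t = 0, 1          # coefficients of b
    while r != 0:
        q = old_r // r
        old_r, r = r, old_r - q * r
        old_s, s = s, old_s - q * s
        old_t, t = t, old_t - q * t
    return old_r, old_s, old_t

g, s, t = extended_gcd(252, 105)
assert (g, s * 252 + t * 105) == (21, 21)   # e.g. 21 = (-2)*252 + 5*105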

Euclid's lemma and unique factorization

Bézout's identity is essential to many applications of Euclid's algorithm, such as demonstrating the unique factorization of numbers into prime factors.{{Harvnb|Stark|1978|pp=26–36}} To illustrate this, suppose that a number L can be written as a product of two factors u and v, that is, L = uv. If another number w also divides L but is coprime with u, then w must divide v, by the following argument: If the greatest common divisor of u and w is 1, then integers s and t can be found such that
{{math|1=1 = su + tw}}
by Bézout's identity. Multiplying both sides by v gives the relation
{{math|1=v = suv + twv = sL + twv .}}
Since w divides both terms on the right-hand side, it must also divide the left-hand side, v. This result is known as Euclid's lemma.{{Harvnb|Ore|1948|p=44}} Specifically, if a prime number divides L, then it must divide at least one factor of L. Conversely, if a number w is coprime to each of a series of numbers a1, a2, ..., an, then w is also coprime to their product, a1 × a2 × ... × an. Euclid's lemma suffices to prove that every number has a unique factorization into prime numbers.{{Harvnb|Stark|1978|pp=281–292}} To see this, assume the contrary, that there are two independent factorizations of L into m and n prime factors, respectively
{{math|1=L = p1p2…pm = q1q2…qn .}}
Since each prime p divides L by assumption, it must also divide one of the q factors; since each q is prime as well, it must be that p = q. Iteratively dividing by the p factors shows that each p has an equal counterpart q; the two prime factorizations are identical except for their order. The unique factorization of numbers into primes has many applications in mathematical proofs, as shown below.

Linear Diophantine equations

[[File:Diophante Bezout.svg|thumb|alt="A diagonal line running from the upper left corner to the lower right. Fifteen circles are spaced at regular intervals along the line. Perpendicular x-y coordinate axes have their origin in the lower left corner; the line crosses the y-axis at the upper left and crosses the x-axis at the lower right."|Plot of a linear Diophantine equation]]

Diophantine equations are equations in which the solutions are restricted to integers; they are named after the 3rd-century Alexandrian mathematician Diophantus.{{Harvnb|Rosen|2000|pp=119–125}} A typical linear Diophantine equation seeks integers x and y such that{{Harvnb|Schroeder|2005|pp=106–107}}
{{math|1=ax + by = c}}
where a, b and c are given integers. This can be written as an equation for x in modular arithmetic:
{{math|1=ax ≡ c mod b.}}
Let g be the greatest common divisor of a and b. Both terms in ax + by are divisible by g; therefore, c must also be divisible by g, or the equation has no solutions. By dividing both sides by c/g, the equation can be reduced to Bézout's identity
{{math|1=sa + tb = g}}
where s and t can be found by the extended Euclidean algorithm.{{Harvnb|Schroeder|2005|pp=108–109}} This provides one solution to the Diophantine equation, x1 = s (c/g) and y1 = t (c/g).

In general, a linear Diophantine equation has no solutions, or an infinite number of solutions.{{Harvnb|Rosen|2000|pp=120–121}} To find the latter, consider two solutions, (x1, y1) and (x2, y2), where
{{math|1=ax1 + by1 = c = ax2 + by2}}
or equivalently
{{math|1=a(x1 − x2) = b(y2 − y1).}}
Dividing through by g and noting that a/g and b/g are coprime, it follows that b/g divides x1 − x2 and a/g divides y2 − y1. Therefore, the smallest difference between two x solutions is b/g, whereas the smallest difference between two y solutions is a/g. Thus, the solutions may be expressed as
{{math|1=x = x1 − bu/g}} and {{math|1=y = y1 + au/g}}.
By allowing u to vary over all possible integers, an infinite family of solutions can be generated from a single solution (x1, y1). If the solutions are required to be positive integers (x > 0, y > 0), only a finite number of solutions may be possible. This restriction on the acceptable solutions allows some systems of Diophantine equations with more unknowns than equations to have a finite number of solutions;{{Harvnb|Stark|1978|p=47}} this is impossible for a system of linear equations when the solutions can be any real number (see Underdetermined system).
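A small sketch tying these formulas together (Python, illustrative; solve_diophantine and extended_gcd are invented names): the extended Euclidean algorithm produces one particular solution, and the family x = x1 − (b/g)u, y = y1 + (a/g)u then generates the rest.

def extended_gcd(a, b):
    """Return (g, s, t) with g = gcd(a, b) = s*a + t*b."""
    old_r, r, old_s, s, old_t, t = a, b, 1, 0, 0, 1
    while r:
        q = old_r // r
        old_r, r = r, old_r - q * r
        old_s, s = s, old_s - q * s
        old_t, t = t, old_t - q * t
    return old_r, old_s, old_t

def solve_diophantine(a, b, c, n_solutions=3):
    """Some integer solutions (x, y) of a*x + b*y = c, or None if c is not divisible by gcd(a, b)."""
    g, s, t = extended_gcd(a, b)
    if c % g != 0:
        return None
    x1, y1 = s * (c // g), t * (c // g)      # one particular solution
    return [(x1 - (b // g) * u, y1 + (a // g) * u) for u in range(n_solutions)]

for x, y in solve_diophantine(12, 42, 30):
    assert 12 * x + 42 * y == 30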

Multiplicative inverses and the RSA algorithm

A finite field is a set of numbers with four generalized operations. The operations are called addition, subtraction, multiplication and division and have their usual properties, such as commutativity, associativity and distributivity. An example of a finite field is the set of 13 numbers {0, 1, 2, ..., 12} using modular arithmetic. In this field, the result of any mathematical operation (addition, subtraction, multiplication, or division) is reduced modulo 13; that is, multiples of 13 are added or subtracted until the result is brought within the range 0–12. For example, 5 × 7 = 35 mod 13 = 9. Such finite fields can be defined for any prime p; using more sophisticated definitions, they can also be defined for any power of a prime, p m. Finite fields are often called Galois fields, and are abbreviated as GF(p) or GF(p m).

In such a field with m numbers, every nonzero element a has a unique modular multiplicative inverse, a−1 such that {{nowrap|1=aa−1 = a−1a ≡ 1 mod m.}} This inverse can be found by solving the congruence equation ax ≡ 1 mod m,{{Harvnb|Schroeder|2005|pp=107–109}} or the equivalent linear Diophantine equation{{Harvnb|Stillwell|1997|pp=186–187}}
{{math|1=ax + my = 1.}}
This equation can be solved by the Euclidean algorithm, as described above. Finding multiplicative inverses is an essential step in the RSA algorithm, which is widely used in electronic commerce; specifically, the equation determines the integer used to decrypt the message.{{Harvnb|Schroeder|2005|p=134}} Although the RSA algorithm uses rings rather than fields, the Euclidean algorithm can still be used to find a multiplicative inverse where one exists. The Euclidean algorithm also has other applications in error-correcting codes; for example, it can be used as an alternative to the Berlekamp–Massey algorithm for decoding BCH and Reed–Solomon codes, which are based on Galois fields. (Moon, T. K., Error Correction Coding: Mathematical Methods and Algorithms, John Wiley and Sons, 2005, p. 266.)
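As a sketch of the inverse computation (Python, illustrative; modular_inverse is an invented helper name), the Bézout coefficient of a in ax + my = 1 is the inverse of a modulo m.

def extended_gcd(a, b):
    """Return (g, s, t) with g = gcd(a, b) = s*a + t*b."""
    old_r, r, old_s, s, old_t, t = a, b, 1, 0, 0, 1
    while r:
        q = old_r // r
        old_r, r = r, old_r - q * r
        old_s, s = s, old_s - q * s
        old_t, t = t, old_t - q * t
    return old_r, old_s, old_t

def modular_inverse(a, m):
    """Inverse of a modulo m, if it exists (i.e. if gcd(a, m) = 1)."""
    g, s, _ = extended_gcd(a, m)
    if g != 1:
        raise ValueError("a and m are not coprime; no inverse exists")
    return s % m

assert (7 * modular_inverse(7, 13)) % 13 == 1    # the inverse of 7 in GF(13) is 2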

Chinese remainder theorem

Euclid's algorithm can also be used to solve multiple linear Diophantine equations.{{Harvnb|Rosen|2000|pp=143–170}} Such equations arise in the Chinese remainder theorem, which describes a novel method to represent an integer x. Instead of representing an integer by its digits, it may be represented by its remainders xi modulo a set of N coprime numbers mi:{{Harvnb|Schroeder|2005|pp=194–195}}

\begin{align}
x_1 & \equiv x \pmod {m_1} \\
x_2 & \equiv x \pmod {m_2} \\
& \,\,\,\vdots \\
x_N & \equiv x \pmod {m_N} \,.
\end{align}

The goal is to determine x from its N remainders xi. The solution is to combine the multiple equations into a single linear Diophantine equation with a much larger modulus M that is the product of all the individual moduli mi, and define Mi as

M_i = \frac{M}{m_i} \,.

Thus, each Mi is the product of all the moduli except mi. The solution depends on finding N new numbers hi such that

M_i h_i \equiv 1 \pmod {m_i} \,.

With these numbers hi, any integer x can be reconstructed from its remainders xi by the equation

x \equiv (x_1 M_1 h_1 + x_2 M_2 h_2 + \cdots + x_N M_N h_N) \pmod M \,.

Since these numbers hi are the multiplicative inverses of the Mi, they may be found using Euclid's algorithm as described in the previous subsection.
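A compact sketch of this reconstruction (Python, illustrative; crt is an invented name) computes each hi as the modular inverse of Mi modulo mi using the extended Euclidean algorithm.

from math import prod

def extended_gcd(a, b):
    """Return (g, s, t) with g = gcd(a, b) = s*a + t*b."""
    old_r, r, old_s, s, old_t, t = a, b, 1, 0, 0, 1
    while r:
        q = old_r // r
        old_r, r = r, old_r - q * r
        old_s, s = s, old_s - q * s
        old_t, t = t, old_t - q * t
    return old_r, old_s, old_t

def crt(remainders, moduli):
    """Reconstruct x (mod prod(moduli)) from x mod m_i, for pairwise coprime moduli."""
    M = prod(moduli)
    x = 0
    for x_i, m_i in zip(remainders, moduli):
        M_i = M // m_i
        h_i = extended_gcd(M_i, m_i)[1] % m_i   # inverse of M_i modulo m_i
        x += x_i * M_i * h_i
    return x % M

# x ≡ 2 (mod 3), x ≡ 3 (mod 5), x ≡ 2 (mod 7)  ->  x = 23
assert crt([2, 3, 2], [3, 5, 7]) == 23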

Stern–Brocot tree

The Euclidean algorithm can be used to arrange the set of all positive rational numbers into an infinite binary search tree, called the Stern–Brocot tree. The number 1 (expressed as a fraction 1/1) is placed at the root of the tree, and the location of any other number a/b can be found by computing gcd(a,b) using the original form of the Euclidean algorithm, in which each step replaces the larger of the two given numbers by its difference with the smaller number (not its remainder), stopping when two equal numbers are reached. A step of the Euclidean algorithm that replaces the first of the two numbers corresponds to a step in the tree from a node to its right child, and a step that replaces the second of the two numbers corresponds to a step in the tree from a node to its left child. The sequence of steps constructed in this way does not depend on whether a/b is given in lowest terms, and forms a path from the root to a node containing the number a/b. (Graham, R., Knuth, D. E., and Patashnik, O., Concrete Mathematics, Addison-Wesley, 1989, p. 123.) This fact can be used to prove that each positive rational number appears exactly once in this tree.

[Image: The Stern–Brocot tree, and the Stern–Brocot sequences of order i for i = 1, 2, 3, 4]

For example, 3/4 can be found by starting at the root, going to the left once, then to the right twice:
\begin{align}
& \gcd(3,4) & \leftarrow \\
{} & \gcd(3,1) & \rightarrow \\
{} & \gcd(2,1) & \rightarrow \\
{} & \gcd(1,1).
\end{align}

The Euclidean algorithm has almost the same relationship to another binary tree on the rational numbers called the Calkin–Wilf tree. The difference is that the path is reversed: instead of producing a path from the root of the tree to a target, it produces a path from the target to the root.
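A short sketch (Python, illustrative; stern_brocot_path is an invented name) traces the subtractive Euclidean algorithm and records 'L' or 'R' according to which of the two numbers is replaced, yielding the path from the root 1/1 to a/b.

def stern_brocot_path(a, b):
    """Path in the Stern-Brocot tree from the root 1/1 to the fraction a/b.
    'R' = replace the first number (step to the right child),
    'L' = replace the second number (step to the left child)."""
    path = []
    while a != b:
        if a > b:
            a -= b
            path.append('R')
        else:
            b -= a
            path.append('L')
    return ''.join(path)

assert stern_brocot_path(3, 4) == 'LRR'   # left once, then right twice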

Continued fractions

The Euclidean algorithm has a close relationship with continued fractions. (Vinogradov, I. M., Elements of Number Theory, Dover, 1954, pp. 3–13.) The sequence of equations can be written in the form

\begin{align}
\frac{a}{b} &= q_0 + \frac{r_0}{b} \\
\frac{b}{r_0} &= q_1 + \frac{r_1}{r_0} \\
\frac{r_0}{r_1} &= q_2 + \frac{r_2}{r_1} \\
& \,\,\,\vdots \\
\frac{r_{k-2}}{r_{k-1}} &= q_k + \frac{r_k}{r_{k-1}} \\
& \,\,\,\vdots \\
\frac{r_{N-2}}{r_{N-1}} &= q_N \,.
\end{align}

The last term on the right-hand side always equals the inverse of the left-hand side of the next equation. Thus, the first two equations may be combined to form
\frac{a}{b} = q_0 + \cfrac{1}{q_1 + \cfrac{r_1}{r_0}} \,.

The third equation may be used to substitute the denominator term r1/r0, yielding

\frac{a}{b} = q_0 + \cfrac{1}{q_1 + \cfrac{1}{q_2 + \cfrac{r_2}{r_1}}} \,.

The final ratio of remainders rk/rk−1 can always be replaced using the next equation in the series, up to the final equation. The result is a continued fraction

\frac{a}{b} = q_0 + \cfrac{1}{q_1 + \cfrac{1}{q_2 + \cfrac{1}{\ddots + \cfrac{1}{q_N}}}} = [ q_0; q_1, q_2, \ldots , q_N ] \,.

In the worked example above, the gcd(1071, 462) was calculated, and the quotients qk were 2, 3 and 7, respectively. Therefore, the fraction 1071/462 may be written

\frac{1071}{462} = 2 + \cfrac{1}{3 + \cfrac{1}{7}} = [2; 3, 7]
as can be confirmed by calculation.
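A brief sketch (Python, illustrative; continued_fraction is an invented name): the quotients produced by the Euclidean algorithm are exactly the continued-fraction coefficients of a/b.

from fractions import Fraction

def continued_fraction(a, b):
    """Continued-fraction coefficients [q0; q1, ..., qN] of a/b,
    i.e. the successive quotients of the Euclidean algorithm."""
    coefficients = []
    while b:
        q, r = divmod(a, b)
        coefficients.append(q)
        a, b = b, r
    return coefficients

assert continued_fraction(1071, 462) == [2, 3, 7]

# Rebuild the fraction from its coefficients to confirm the expansion.
value = Fraction(0)
for q in reversed([2, 3, 7]):
    value = Fraction(q) + (1 / value if value else 0)
assert value == Fraction(1071, 462)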

Factorization algorithms

Calculating a greatest common divisor is an essential step in several integer factorization algorithms ({{harvnb|Crandall|Pomerance|2001}}, pp. 225–349), such as Pollard's rho algorithm ({{harvnb|Knuth|1997}}, pp. 369–371), Shor's algorithm (Shor, P. W., "Polynomial-Time Algorithms for Prime Factorization and Discrete Logarithms on a Quantum Computer", SIAM Journal on Scientific and Statistical Computing, 26:1484, 1997), Dixon's factorization method (Dixon, J. D., "Asymptotically fast factorization of integers", Math. Comput., 36:255–260, 1981) and the Lenstra elliptic curve factorization (Lenstra, H. W., Jr., "Factoring integers with elliptic curves", Annals of Mathematics, 126:649–673, 1987). The Euclidean algorithm may be used to find this GCD efficiently. Continued fraction factorization uses continued fractions, which are determined using Euclid's algorithm ({{harvnb|Knuth|1997}}, pp. 380–384).

Algorithmic efficiency

[[Image:Euclidean Algorithm Running Time.svg|thumb|alt="A set of colored lines radiating outwards from the origin of an x-y coordinate system. Each line corresponds to a set of number pairs requiring the same number of steps in the Euclidean algorithm."|Number of steps in the Euclidean algorithm for gcd(x,y). Lighter (red and yellow) points indicate relatively few steps, whereas darker (violet and blue) points indicate more steps. The largest dark area follows the line y = Φx, where Φ is the golden ratio.]]

The computational efficiency of Euclid's algorithm has been studied thoroughly ({{harvnb|Knuth|1997}}, pp. 339–364). This efficiency can be described by the number of division steps the algorithm requires, multiplied by the computational expense of each step. The first known analysis of Euclid's algorithm is due to A. A. L. Reynaud in 1811 (Reynaud, A.-A.-L., Traité d'arithmétique à l'usage des élèves qui se destinent à l'École Polytechnique, 6th ed., Courcier, 1811, Note 60, p. 34; as cited by {{harvtxt|Shallit|1994}}), who showed that the number of division steps on input (u, v) is bounded by v; later he improved this to v/2 + 2. Later, in 1841, P. J. E. Finck showed (Finck, P.-J.-E., Traité élémentaire d'arithmétique à l'usage des candidats aux écoles spéciales, Derivaux, 1841) that the number of division steps is at most 2 log2 v + 1, and hence Euclid's algorithm runs in time polynomial in the size of the input (Shallit, J., "Origins of the analysis of the Euclidean algorithm", Historia Math., 21:401–419, 1994). Émile Léger, in 1837, studied the worst case, which is when the inputs are consecutive Fibonacci numbers. Finck's analysis was refined by Gabriel Lamé in 1844 (Lamé, G., "Note sur la limite du nombre des divisions dans la recherche du plus grand commun diviseur entre deux nombres entiers", Comptes Rendus Acad. Sci., 19:867–870, 1844), who showed that the number of steps required for completion is never more than five times the number h of base-10 digits of the smaller number b (Grossman, H., "On the Number of Divisions in Finding a G.C.D.", The American Mathematical Monthly, 31:443, 1924; Honsberger, R., Mathematical Gems II, The Mathematical Association of America, 1976, pp. 54–57).

In the uniform cost model (suitable for analyzing the complexity of gcd calculation on numbers that fit into a single machine word), each step of the algorithm takes constant time, and Lamé's analysis implies that the total running time is also O(h). However, in a model of computation suitable for computation with larger numbers, the computational expense of a single remainder computation in the algorithm can be as large as O(h2). In this case the total time for all of the steps of the algorithm can be analyzed using a telescoping series, showing that it is also O(h2). Modern algorithmic techniques based on the Schönhage–Strassen algorithm for fast integer multiplication can be used to speed this up, leading to quasilinear algorithms for the GCD.

Number of steps

The number of steps to calculate the GCD of two natural numbers, a and b, may be denoted by T(a, b).{{harvnb|Knuth|1997}}, p. 344 If g is the GCD of a and b, then a = mg and b = ng for two coprime numbers m and n. Then
{{math|1=T(a, b) = T(m, n)}}
as may be seen by dividing all the steps in the Euclidean algorithm by g.{{Harvnb|Ore|1948|p=45}} By the same argument, the number of steps remains the same if a and b are multiplied by a common factor w: T(a, b) = T(wa, wb). Therefore, the number of steps T may vary dramatically between neighboring pairs of numbers, such as T(a, b) and T(a, b + 1), depending on the size of the two GCDs.

The recursive nature of the Euclidean algorithm gives another equation
{{math|1=T(a, b) = 1 + T(b, r0) = 2 + T(r0, r1) = … = N + T(rN−2, rN−1) = N + 1}}
where T(x, 0) = 0 by assumption.
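To make T(a, b) concrete, here is a small sketch (Python, illustrative; count_steps is an invented name) that counts division steps and checks the scaling property T(a, b) = T(m, n) when a = mg and b = ng.

def count_steps(a, b):
    """Number of division steps T(a, b) taken by the Euclidean algorithm."""
    steps = 0
    while b:
        a, b = b, a % b
        steps += 1
    return steps

# T(a, b) is unchanged when both arguments are divided by their GCD g = 21.
assert count_steps(1071, 462) == count_steps(1071 // 21, 462 // 21) == 3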

Worst-case

If the Euclidean algorithm requires N steps for a pair of natural numbers a > b > 0, the smallest values of a and b for which this is true are the Fibonacci numbers FN+2 and FN+1, respectively.{{harvnb|Knuth|1997}}, p. 343 More precisely, if the Euclidean algorithm requires N steps for the pair a > b, then one has a ≥ FN+2 and b ≥ FN+1. This can be shown by induction.{{Harvnb|Mollin|2008|p=21}} If N = 1, b divides a with no remainder; the smallest natural numbers for which this is true are b = 1 and a = 2, which are F2 and F3, respectively. Now assume that the result holds for all values of N up to M − 1. The first step of the M-step algorithm is a = q0b + r0, and the Euclidean algorithm requires M − 1 steps for the pair b > r0. By induction hypothesis, one has b ≥ FM+1 and r0 ≥ FM. Therefore, a = q0b + r0 ≥ b + r0 ≥ FM+1 + FM = FM+2, which is the desired inequality. This proof, published by Gabriel Lamé in 1844, represents the beginning of computational complexity theory,{{Harvnb|LeVeque|1996|p=35}} and also the first practical application of the Fibonacci numbers.

This result suffices to show that the number of steps in Euclid's algorithm can never be more than five times the number of its digits (base 10).{{Harvnb|Mollin|2008|pp=21–22}} For if the algorithm requires N steps, then b is greater than or equal to FN+1, which in turn is greater than or equal to φN−1, where φ is the golden ratio. Since b ≥ φN−1, then N − 1 ≤ logφ b. Since log10 φ > 1/5, it follows that (N − 1)/5 < log10 φ · logφ b = log10 b < h, and therefore N ≤ 5h.
Average number of steps

The average number of steps taken by the Euclidean algorithm has been defined in several ways. A first average T(a) is the mean number of steps required to calculate the GCD of a given number a and a smaller natural number b chosen with equal probability from the integers 0 to a − 1:

T(a) = \frac{1}{a} \sum_{0 \leq b < a} T(a, b) \,.

A second average τ(a) is taken only over the numbers b that are coprime to a, of which there are φ(a), where φ is Euler's totient function:

\tau(a) = \frac{1}{\varphi(a)} \sum_{\begin{smallmatrix} 0 \leq b < a \\ \gcd(a, b) = 1 \end{smallmatrix}} T(a, b) \,.

The first average can be recovered from the second by summing over the divisors d of a:

T(a) = \frac{1}{a} \sum_{d \mid a} \varphi(d) \tau(d) \,;
it can be approximated by the formula (Norton, G. H., "On the Asymptotic Analysis of the Euclidean Algorithm", Journal of Symbolic Computation, 10:53–58, 1990)

T(a) \approx C + \frac{12}{\pi^2} \ln 2 \left( \ln a - \sum_{d \mid a} \frac{\Lambda(d)}{d} \right)

where Λ(d) is the Mangoldt function and C is a constant ({{harvnb|Knuth|1997}}, p. 355).

A third average Y(n) is defined as the mean number of steps required when both a and b are chosen randomly (with uniform distribution) from 1 to n:

Y(n) = \frac{1}{n^2} \sum_{a=1}^n \sum_{b=1}^n T(a, b) = \frac{1}{n} \sum_{a=1}^n T(a) \,.

Substituting the approximate formula for T(a) into this equation yields an estimate for Y(n) ({{harvnb|Knuth|1997}}, p. 356):

Y(n) \approx \frac{12}{\pi^2} \ln 2 \, \ln n + 0.06 \,.
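As a quick numerical check of this estimate (Python, illustrative; the helper names are invented here), one can average the step count T(a, b) directly over all pairs 1 ≤ a, b ≤ n and compare with (12 ln 2/π²) ln n + 0.06.

import math

def count_steps(a, b):
    """Number of division steps of the Euclidean algorithm on (a, b)."""
    steps = 0
    while b:
        a, b = b, a % b
        steps += 1
    return steps

def average_steps(n):
    """Empirical Y(n): mean number of steps over all pairs 1 <= a, b <= n."""
    total = sum(count_steps(a, b) for a in range(1, n + 1) for b in range(1, n + 1))
    return total / n**2

n = 200
predicted = (12 / math.pi**2) * math.log(2) * math.log(n) + 0.06
print(average_steps(n), predicted)   # the two values should be close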

Computational expense per step

In each step k of the Euclidean algorithm, the quotient qk and remainder rk are computed for a given pair of integers rk−2 and rk−1:
{{math|1=rk−2 = qk rk−1 + rk.}}
The computational expense per step is associated chiefly with finding qk, since the remainder rk can be calculated quickly from rk−2, rk−1, and qk:
{{math|1=rk = rk−2 − qk rk−1.}}
The computational expense of dividing h-bit numbers scales as O(h(ℓ+1)), where ℓ is the length of the quotient ({{harvnb|Knuth|1997}}, pp. 257–261).

For comparison, Euclid's original subtraction-based algorithm can be much slower. A single integer division is equivalent to the quotient q number of subtractions. If the ratio of a and b is very large, the quotient is large and many subtractions will be required. On the other hand, it has been shown that the quotients are very likely to be small integers. The probability of a given quotient q is approximately log2|u/(u − 1)| where u = (q + 1)2 ({{harvnb|Knuth|1997}}, p. 352). For illustration, the probability of a quotient of 1, 2, 3, or 4 is roughly 41.5%, 17.0%, 9.3%, and 5.9%, respectively. Since the operation of subtraction is faster than division, particularly for large numbers (Wagon, S., Mathematica in Action, Springer-Verlag, 1999, pp. 335–336), the subtraction-based Euclid's algorithm is competitive with the division-based version.{{Harvnb|Cohen|1993|p=14}} This is exploited in the binary version of Euclid's algorithm.{{Harvnb|Cohen|1993|pp=14–15, 17–18}}

Combining the estimated number of steps with the estimated computational expense per step shows that Euclid's algorithm grows quadratically (h2) with the average number of digits h in the initial two numbers a and b. Let h0, h1, ..., hN−1 represent the number of digits in the successive remainders r0, r1, ..., rN−1. Since the number of steps N grows linearly with h, the running time is bounded by
O\Big(\sum_{i<N} h_i (h_i - h_{i+1} + 2)\Big) \subseteq O\Big(h \sum_{i<N} (h_i - h_{i+1} + 2)\Big) \subseteq O(h (h_0 + 2 N)) \subseteq O(h^2) \,.

- content above as imported from Wikipedia
- "Euclidean algorithm" does not exist on GetWiki (yet)
- time: 4:27am EDT - Sun, Sep 22 2019
[ this remote article is provided by Wikipedia ]