# GetWiki

*Euclidean algorithm*


[ temporary import ]

**please note:**

- the content below is a remote copy from Wikipedia

- it has been imported raw for GetWiki

The **Euclidean algorithm**, or **Euclid's algorithm**, is an efficient method for computing the greatest common divisor (GCD) of two numbers, the largest number that divides both of them without leaving a remainder. (Some widely used textbooks, such as I. N. Herstein's *Topics in Algebra* and Serge Lang's *Algebra*, use the term "Euclidean algorithm" to refer to Euclidean division.) It is named after the ancient Greek mathematician Euclid, who first described it in his *Elements* (c. 300 BC). It is an example of an *algorithm*, a step-by-step procedure for performing a calculation according to well-defined rules, and is one of the oldest algorithms in common use. It can be used to reduce fractions to their simplest form, and is a part of many other number-theoretic and cryptographic calculations.

The Euclidean algorithm is based on the principle that the greatest common divisor of two numbers does not change if the larger number is replaced by its difference with the smaller number. For example, 21 is the GCD of 252 and 105 (as 252 = 21 × 12 and 105 = 21 × 5), and the same number 21 is also the GCD of 105 and 252 − 105 = 147. Since this replacement reduces the larger of the two numbers, repeating this process gives successively smaller pairs of numbers until the two numbers become equal. When that occurs, they are the GCD of the original two numbers. By reversing the steps, the GCD can be expressed as a sum of the two original numbers each multiplied by a positive or negative integer, e.g., {{nowrap begin}}21 = 5 × 105 + (−2) × 252.{{nowrap end}} The fact that the GCD can always be expressed in this way is known as Bézout's identity.

The version of the Euclidean algorithm described above (and by Euclid) can take many subtraction steps to find the GCD when one of the given numbers is much bigger than the other. A more efficient version of the algorithm shortcuts these steps, instead replacing the larger of the two numbers by its remainder when divided by the smaller of the two (with this version, the algorithm stops when reaching a zero remainder). With this improvement, the algorithm never requires more steps than five times the number of digits (base 10) of the smaller integer. This was proven by Gabriel Lamé in 1844, and marks the beginning of computational complexity theory. Additional methods for improving the algorithm's efficiency were developed in the 20th century.

The Euclidean algorithm has many theoretical and practical applications. It is used for reducing fractions to their simplest form and for performing division in modular arithmetic. Computations using this algorithm form part of the cryptographic protocols that are used to secure internet communications, and in methods for breaking these cryptosystems by factoring large composite numbers. The Euclidean algorithm may be used to solve Diophantine equations, such as finding numbers that satisfy multiple congruences according to the Chinese remainder theorem, to construct continued fractions, and to find accurate rational approximations to real numbers. Finally, it can be used as a basic tool for proving theorems in number theory such as Lagrange's four-square theorem and the uniqueness of prime factorizations. The original algorithm was described only for natural numbers and geometric lengths (real numbers), but the algorithm was generalized in the 19th century to other types of numbers, such as Gaussian integers and polynomials of one variable. This led to modern abstract algebraic notions such as Euclidean domains.
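The replacement principle described above can be tried directly; a minimal sketch in Python, using the 252 and 105 example from the text (the function name is illustrative):

```python
def gcd_by_subtraction(a, b):
    # Replace the larger number by its difference with the smaller
    # until the two numbers are equal; that common value is the GCD.
    while a != b:
        if a > b:
            a -= b
        else:
            b -= a
    return a

print(gcd_by_subtraction(252, 105))  # 21
print(gcd_by_subtraction(147, 105))  # 21, since 147 = 252 - 105
```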

## Background: greatest common divisor

The Euclidean algorithm calculates the greatest common divisor (GCD) of two natural numbers *a* and *b*. The greatest common divisor *g* is the largest natural number that divides both *a* and *b* without leaving a remainder. Synonyms for the GCD include the *greatest common factor* (GCF), the *highest common factor* (HCF), the *highest common divisor* (HCD), and the *greatest common measure* (GCM). The greatest common divisor is often written as gcd(*a*, *b*) or, more simply, as (*a*, *b*),{{Harvnb|Stark|1978|p=16}} although the latter notation is ambiguous, also used for concepts such as an ideal in the ring of integers, which is closely related to GCD.

If gcd(*a*, *b*) = 1, then *a* and *b* are said to be coprime (or relatively prime).{{Harvnb|Stark|1978|p=21}} This property does not imply that *a* or *b* are themselves prime numbers.{{Harvnb|LeVeque|1996|p=32}} For example, neither 6 nor 35 is a prime number, since they both have two prime factors: 6 = 2 × 3 and 35 = 5 × 7. Nevertheless, 6 and 35 are coprime. No natural number other than 1 divides both 6 and 35, since they have no prime factors in common.

(File:24x60.svg|thumb|upright|alt="Tall, slender rectangle divided into a grid of squares. The rectangle is two squares wide and five squares tall."|A 24-by-60 rectangle is covered with ten 12-by-12 square tiles, where 12 is the GCD of 24 and 60. More generally, an *a*-by-*b* rectangle can be covered with square tiles of side-length *c* only if *c* is a common divisor of *a* and *b*.)

Let *g* = gcd(*a*, *b*). Since *a* and *b* are both multiples of *g*, they can be written *a* = *mg* and *b* = *ng*, and there is no larger number *G* > *g* for which this is true. The natural numbers *m* and *n* must be coprime, since any common factor could be factored out of *m* and *n* to make *g* greater. Thus, any other number *c* that divides both *a* and *b* must also divide *g*. The greatest common divisor *g* of *a* and *b* is the unique (positive) common divisor of *a* and *b* that is divisible by any other common divisor *c*.{{Harvnb|LeVeque|1996|p=31}}

The GCD can be visualized as follows.(Grossman, J. W., *Discrete Mathematics*, Macmillan, New York, 1990, p. 213) Consider a rectangular area *a* by *b*, and any common divisor *c* that divides both *a* and *b* exactly. The sides of the rectangle can be divided into segments of length *c*, which divides the rectangle into a grid of squares of side length *c*. The greatest common divisor *g* is the largest value of *c* for which this is possible. For illustration, a 24-by-60 rectangular area can be divided into a grid of: 1-by-1 squares, 2-by-2 squares, 3-by-3 squares, 4-by-4 squares, 6-by-6 squares or 12-by-12 squares. Therefore, 12 is the greatest common divisor of 24 and 60. A 24-by-60 rectangular area can be divided into a grid of 12-by-12 squares, with two squares along one edge (24/12 = 2) and five squares along the other (60/12 = 5).

The GCD of two numbers *a* and *b* is the product of the prime factors shared by the two numbers, where the same prime factor can be used multiple times, but only as long as the product of these factors divides both *a* and *b*.{{Harvnb|Schroeder|2005|pp=21–22}} For example, since 1386 can be factored into 2 × 3 × 3 × 7 × 11, and 3213 can be factored into 3 × 3 × 3 × 7 × 17, the greatest common divisor of 1386 and 3213 equals 63 = 3 × 3 × 7, the product of their shared prime factors. If two numbers have no prime factors in common, their greatest common divisor is 1 (obtained here as an instance of the empty product); in other words, they are coprime. A key advantage of the Euclidean algorithm is that it can find the GCD efficiently without having to compute the prime factors.{{Harvnb|Schroeder|2005|p=19}}(Ogilvy, C. S.; Anderson, J. T., *Excursions in Number Theory*, Oxford University Press, New York, 1966, pp. 27–29) Factorization of large integers is believed to be a computationally very difficult problem, and the security of many widely used cryptographic protocols is based upon its infeasibility.{{Harvnb|Schroeder|2005|pp=216–219}}

Another definition of the GCD is helpful in advanced mathematics, particularly ring theory. The greatest common divisor *g* of two nonzero numbers *a* and *b* is also their smallest positive integral linear combination, that is, the smallest positive number of the form *ua* + *vb* where *u* and *v* are integers. The set of all integral linear combinations of *a* and *b* is actually the same as the set of all multiples of *g* (*mg*, where *m* is an integer). In modern mathematical language, the ideal generated by *a* and *b* is the ideal generated by *g* alone (an ideal generated by a single element is called a principal ideal, and all ideals of the integers are principal ideals). Some properties of the GCD are in fact easier to see with this description, for instance the fact that any common divisor of *a* and *b* also divides the GCD (it divides both terms of *ua* + *vb*). The equivalence of this GCD definition with the other definitions is described below.

The GCD of three or more numbers equals the product of the prime factors common to all the numbers,{{Harvnb|Stark|1978|p=25}} but it can also be calculated by repeatedly taking the GCDs of pairs of numbers.{{Harvnb|Ore|1948|pp=47–48}} For example,

{{math|1=gcd(*a*, *b*, *c*) = gcd(*a*, gcd(*b*, *c*)) = gcd(gcd(*a*, *b*), *c*) = gcd(gcd(*a*, *c*), *b*).}}

Thus, Euclid's algorithm, which computes the GCD of two integers, suffices to calculate the GCD of arbitrarily many integers.

## Description

### Procedure

The Euclidean algorithm proceeds in a series of steps such that the output of each step is used as an input for the next one. Let *k* be an integer that counts the steps of the algorithm, starting with zero. Thus, the initial step corresponds to *k* = 0, the next step corresponds to *k* = 1, and so on.

Each step begins with two nonnegative remainders *r*<sub>*k*−1</sub> and *r*<sub>*k*−2</sub>. Since the algorithm ensures that the remainders decrease steadily with every step, *r*<sub>*k*−1</sub> is less than its predecessor *r*<sub>*k*−2</sub>. The goal of the *k*th step is to find a quotient *q*<sub>*k*</sub> and remainder *r*<sub>*k*</sub> that satisfy the equation

r_{k-2} = q_k r_{k-1} + r_k

and that have *r*<sub>*k*</sub> < *r*<sub>*k*−1</sub>.

### Implementations

In the subtraction-based version, which is the form Euclid originally described, the remainder computation is replaced by repeated subtraction:

**function** gcd(a, b)
    **while** a ≠ b
        **if** a > b
            a := a − b;
        **else**
            b := b − a;
    **return** a;

The variables *a* and *b* alternate holding the previous remainders *r*<sub>*k*−1</sub> and *r*<sub>*k*−2</sub>. Assume that *a* is larger than *b* at the beginning of an iteration; then *a* equals *r*<sub>*k*−2</sub>, since *r*<sub>*k*−2</sub> > *r*<sub>*k*−1</sub>. During the loop iteration, *a* is reduced by multiples of the previous remainder *b* until *a* is smaller than *b*. Then *a* is the next remainder *r*<sub>*k*</sub>. Then *b* is reduced by multiples of *a* until it is again smaller than *a*, giving the next remainder *r*<sub>*k*+1</sub>, and so on.

The recursive version{{Harvnb|Stillwell|1997|p=14}} is based on the equality of the GCDs of successive remainders and the stopping condition gcd(*r*<sub>*N*−1</sub>, 0) = *r*<sub>*N*−1</sub>.

**function** gcd(a, b)
    **if** b = 0
        **return** a;
    **else**
        **return** gcd(b, a **mod** b);
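The recursive pseudocode above translates directly into Python; a minimal sketch, with an equivalent loop form alongside (function names are illustrative):

```python
def gcd_recursive(a, b):
    # gcd(a, b) = gcd(b, a mod b), stopping when the remainder is zero.
    if b == 0:
        return a
    return gcd_recursive(b, a % b)

def gcd_iterative(a, b):
    # The same recurrence written as a loop.
    while b != 0:
        a, b = b, a % b
    return a

print(gcd_recursive(1071, 462), gcd_iterative(1071, 462))  # 21 21
```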

### Method of least absolute remainders

In another version of Euclid's algorithm, the quotient at each step is increased by one if the resulting negative remainder is smaller in magnitude than the typical positive remainder.{{Harvnb|Ore|1948|p=43}}(Stewart, B. M., *Theory of Numbers*, 2nd ed., Macmillan, New York, 1964, pp. 43–44) Previously, the equation

{{math|1=*r*<sub>*k*−2</sub> = *q*<sub>*k*</sub> *r*<sub>*k*−1</sub> + *r*<sub>*k*</sub>}}

assumed that {{math|1={{!}}*r*<sub>*k*−1</sub>{{!}} > *r*<sub>*k*</sub> > 0}}. However, an alternative negative remainder {{math|1=*e*<sub>*k*</sub>}} can be computed:

{{math|1=*r*<sub>*k*−2</sub> = (*q*<sub>*k*</sub> + 1) *r*<sub>*k*−1</sub> + *e*<sub>*k*</sub>}} if {{math|1=*r*<sub>*k*−1</sub> > 0}} or

{{math|1=*r*<sub>*k*−2</sub> = (*q*<sub>*k*</sub> − 1) *r*<sub>*k*−1</sub> + *e*<sub>*k*</sub>}} if {{math|1=*r*<sub>*k*−1</sub> < 0}}.

### Matrix method

Let **M** represent the product of all the quotient matrices. To express the GCD *g* as a linear sum of *a* and *b*, both sides of this equation can be multiplied by the inverse of the matrix **M**.(Bach, E.; Shallit, J., *Algorithmic Number Theory*, MIT Press, Cambridge, MA, 1996, pp. 70–73) The determinant of **M** equals (−1)<sup>*N*+1</sup>, since it equals the product of the determinants of the quotient matrices, each of which is negative one. Since the determinant of **M** is never zero, the vector of the final remainders can be solved using the inverse of **M**. This yields

{{math|1=*g* = (−1)<sup>*N*+1</sup> (*m*<sub>22</sub>*a* − *m*<sub>12</sub>*b*),}}

and the two integers of Bézout's identity are *s* = (−1)<sup>*N*+1</sup> *m*<sub>22</sub> and *t* = (−1)<sup>*N*</sup> *m*<sub>12</sub>. The matrix method is as efficient as the equivalent recursion, with two multiplications and two additions per step of the Euclidean algorithm.
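The Bézout coefficients can also be accumulated alongside the remainders rather than recovered from **M** afterwards; a sketch of the standard extended Euclidean algorithm, with the same cost of two multiplications and two additions per step:

```python
def extended_gcd(a, b):
    # Maintain the invariant old_s*a + old_t*b == old_r at every step.
    old_r, r = a, b
    old_s, s = 1, 0
    old_t, t = 0, 1
    while r != 0:
        q = old_r // r
        old_r, r = r, old_r - q * r
        old_s, s = s, old_s - q * s
        old_t, t = t, old_t - q * t
    return old_r, old_s, old_t  # (g, s, t) with s*a + t*b == g

g, s, t = extended_gcd(252, 105)
print(g, s, t)  # 21 -2 5, i.e. 21 = (-2)*252 + 5*105
```

This matches the identity 21 = 5 × 105 + (−2) × 252 quoted in the introduction.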

### Euclid's lemma and unique factorization

Bézout's identity is essential to many applications of Euclid's algorithm, such as demonstrating the unique factorization of numbers into prime factors.{{Harvnb|Stark|1978|pp=26–36}} To illustrate this, suppose that a number *L* can be written as a product of two factors *u* and *v*, that is, *L* = *uv*. If another number *w* also divides *L* but is coprime with *u*, then *w* must divide *v*, by the following argument: If the greatest common divisor of *u* and *w* is 1, then integers *s* and *t* can be found such that

{{math|1=1 = *su* + *tw*}}

by Bézout's identity. Multiplying both sides by *v* gives the relation

{{math|1=*v* = *suv* + *twv* = *sL* + *twv*.}}

Since *w* divides both terms on the right-hand side, it must also divide the left-hand side, *v*. This result is known as Euclid's lemma.{{Harvnb|Ore|1948|p=44}} Specifically, if a prime number divides *L*, then it must divide at least one factor of *L*. Conversely, if a number *w* is coprime to each of a series of numbers *a*<sub>1</sub>, *a*<sub>2</sub>, ..., *a*<sub>*n*</sub>, then *w* is also coprime to their product, *a*<sub>1</sub> × *a*<sub>2</sub> × ... × *a*<sub>*n*</sub>.

Euclid's lemma suffices to prove that every number has a unique factorization into prime numbers.{{Harvnb|Stark|1978|pp=281–292}} To see this, assume the contrary, that there are two independent factorizations of *L* into *m* and *n* prime factors, respectively

{{math|1=*L* = *p*<sub>1</sub>*p*<sub>2</sub>…*p*<sub>*m*</sub> = *q*<sub>1</sub>*q*<sub>2</sub>…*q*<sub>*n*</sub>.}}

Since each prime *p* divides *L* by assumption, it must also divide one of the *q* factors; since each *q* is prime as well, it must be that *p* = *q*. Iteratively dividing by the *p* factors shows that each *p* has an equal counterpart *q*; the two prime factorizations are identical except for their order. The unique factorization of numbers into primes has many applications in mathematical proofs, as shown below.
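Euclid's lemma can be checked on concrete numbers; a small sketch (the values 84 = 4 × 21 and 7 are chosen here for illustration, not taken from the text):

```python
from math import gcd

L, u, w = 84, 4, 7
v = L // u            # v = 21, so L = u * v
assert L % w == 0     # w divides L
assert gcd(u, w) == 1 # w is coprime with u
# Euclid's lemma then guarantees that w divides the other factor v.
print(v % w == 0)     # True
```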

### Linear Diophantine equations

(File:Diophante Bezout.svg|thumb|alt="A diagonal line running from the upper left corner to the lower right. Fifteen circles are spaced at regular intervals along the line. Perpendicular x-y coordinate axes have their origin in the lower left corner; the line crosses the y-axis at the upper left and crosses the x-axis at the lower right."|Plot of a linear Diophantine equation)

Diophantine equations are equations in which the solutions are restricted to integers; they are named after the 3rd-century Alexandrian mathematician Diophantus.{{Harvnb|Rosen|2000|pp=119–125}} A typical *linear* Diophantine equation seeks integers *x* and *y* such that{{Harvnb|Schroeder|2005|pp=106–107}}

{{math|1=*ax* + *by* = *c*}}

where *a*, *b* and *c* are given integers. This can be written as an equation for *x* in modular arithmetic:

{{math|1=*ax* ≡ *c* mod *b*.}}

Let *g* be the greatest common divisor of *a* and *b*. Both terms in *ax* + *by* are divisible by *g*; therefore, *c* must also be divisible by *g*, or the equation has no solutions. By dividing both sides by *c*/*g*, the equation can be reduced to Bézout's identity

{{math|1=*sa* + *tb* = *g*}}

where *s* and *t* can be found by the extended Euclidean algorithm.{{Harvnb|Schroeder|2005|pp=108–109}} This provides one solution to the Diophantine equation, *x*<sub>1</sub> = *s*(*c*/*g*) and *y*<sub>1</sub> = *t*(*c*/*g*).

In general, a linear Diophantine equation has no solutions, or an infinite number of solutions.{{Harvnb|Rosen|2000|pp=120–121}} To find the latter, consider two solutions, (*x*<sub>1</sub>, *y*<sub>1</sub>) and (*x*<sub>2</sub>, *y*<sub>2</sub>), where

{{math|1=*ax*<sub>1</sub> + *by*<sub>1</sub> = *c* = *ax*<sub>2</sub> + *by*<sub>2</sub>}}

or equivalently

{{math|1=*a*(*x*<sub>1</sub> − *x*<sub>2</sub>) = *b*(*y*<sub>2</sub> − *y*<sub>1</sub>).}}

Therefore, the smallest difference between two *x* solutions is *b*/*g*, whereas the smallest difference between two *y* solutions is *a*/*g*. Thus, the solutions may be expressed as

{{math|1=*x* = *x*<sub>1</sub> − *bu*/*g*}} {{math|1=*y* = *y*<sub>1</sub> + *au*/*g*}}.

By allowing *u* to vary over all possible integers, an infinite family of solutions can be generated from a single solution (*x*<sub>1</sub>, *y*<sub>1</sub>). If the solutions are required to be *positive* integers (*x* > 0, *y* > 0), only a finite number of solutions may be possible. This restriction on the acceptable solutions allows some systems of Diophantine equations with more unknowns than equations to have a finite number of solutions;{{Harvnb|Stark|1978|p=47}} this is impossible for a system of linear equations when the solutions can be any real number (see Underdetermined system).
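The recipe above can be sketched in Python: solve for one solution via the extended Euclidean algorithm, then generate the family of solutions (the coefficients 258, 147, 369 are an illustrative choice, not from the text):

```python
def extended_gcd(a, b):
    # Returns (g, s, t) with s*a + t*b == g == gcd(a, b).
    if b == 0:
        return a, 1, 0
    g, s, t = extended_gcd(b, a % b)
    return g, t, s - (a // b) * t

def solve_diophantine(a, b, c):
    # One integer solution of a*x + b*y = c, or None if g does not divide c.
    g, s, t = extended_gcd(a, b)
    if c % g:
        return None
    return s * (c // g), t * (c // g)

a, b, c = 258, 147, 369
x1, y1 = solve_diophantine(a, b, c)
assert a * x1 + b * y1 == c
# Every other solution is x1 - (b/g)*u, y1 + (a/g)*u for some integer u.
g = extended_gcd(a, b)[0]
for u in range(-2, 3):
    assert a * (x1 - b // g * u) + b * (y1 + a // g * u) == c
```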

### Multiplicative inverses and the RSA algorithm

A finite field is a set of numbers with four generalized operations. The operations are called addition, subtraction, multiplication and division and have their usual properties, such as commutativity, associativity and distributivity. An example of a finite field is the set of 13 numbers {0, 1, 2, ..., 12} using modular arithmetic. In this field, the result of any mathematical operation (addition, subtraction, multiplication, or division) is reduced modulo 13; that is, multiples of 13 are added or subtracted until the result is brought within the range 0–12. For example, the result of 5 × 7 = 35 mod 13 = 9. Such finite fields can be defined for any prime *p*; using more sophisticated definitions, they can also be defined for any power *p*<sup>*m*</sup> of a prime *p*. Finite fields are often called Galois fields, and are abbreviated as GF(*p*) or GF(*p*<sup>*m*</sup>).

In such a field with *m* numbers, every nonzero element *a* has a unique modular multiplicative inverse, *a*<sup>−1</sup>, such that {{nowrap|1=*aa*<sup>−1</sup> = *a*<sup>−1</sup>*a* ≡ 1 mod *m*.}} This inverse can be found by solving the congruence equation *ax* ≡ 1 mod *m*,{{Harvnb|Schroeder|2005|pp=107–109}} or the equivalent linear Diophantine equation{{Harvnb|Stillwell|1997|pp=186–187}}

{{math|1=*ax* + *my* = 1.}}

This equation can be solved by the Euclidean algorithm, as described above. Finding multiplicative inverses is an essential step in the RSA algorithm, which is widely used in electronic commerce; specifically, the equation determines the integer used to decrypt the message.{{Harvnb|Schroeder|2005|p=134}} Although the RSA algorithm uses rings rather than fields, the Euclidean algorithm can still be used to find a multiplicative inverse where one exists. The Euclidean algorithm also has other applications in error-correcting codes; for example, it can be used as an alternative to the Berlekamp–Massey algorithm for decoding BCH and Reed–Solomon codes, which are based on Galois fields.(Moon, T. K., *Error Correction Coding: Mathematical Methods and Algorithms*, John Wiley and Sons, 2005, p. 266)

### Chinese remainder theorem

Euclid's algorithm can also be used to solve multiple linear Diophantine equations.{{Harvnb|Rosen|2000|pp=143–170}} Such equations arise in the Chinese remainder theorem, which describes a novel method to represent an integer *x*. Instead of representing an integer by its digits, it may be represented by its remainders *x*<sub>*i*</sub> modulo a set of *N* coprime numbers *m*<sub>*i*</sub>.{{Harvnb|Schroeder|2005|pp=194–195}} The goal is to determine *x* from its *N* remainders *x*<sub>*i*</sub>. The solution is to combine the multiple equations into a single linear Diophantine equation with a much larger modulus *M* that is the product of all the individual moduli *m*<sub>*i*</sub>, and define *M*<sub>*i*</sub> as

M_i = \frac{M}{m_i}.

Thus, each *M*<sub>*i*</sub> is the product of all the moduli *except* *m*<sub>*i*</sub>. The solution depends on finding *N* new numbers *h*<sub>*i*</sub> such that

M_i h_i \equiv 1 \pmod{m_i}.

With these numbers *h*<sub>*i*</sub>, any integer *x* can be reconstructed from its remainders *x*<sub>*i*</sub> by the equation

x \equiv (x_1 M_1 h_1 + x_2 M_2 h_2 + \cdots + x_N M_N h_N) \pmod{M}.

Since these numbers *h*<sub>*i*</sub> are the multiplicative inverses of the *M*<sub>*i*</sub>, they may be found using Euclid's algorithm as described in the previous subsection.
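The reconstruction formula can be exercised directly; a small sketch (the moduli 3, 5, 7 and the value 2019 are chosen here for illustration):

```python
from math import prod

def crt(remainders, moduli):
    # Rebuild x (mod M) from remainders x_i modulo pairwise-coprime m_i.
    M = prod(moduli)
    total = 0
    for x_i, m_i in zip(remainders, moduli):
        M_i = M // m_i              # product of all moduli except m_i
        h_i = pow(M_i, -1, m_i)     # inverse of M_i mod m_i (Python 3.8+)
        total += x_i * M_i * h_i
    return total % M

# 2019 leaves remainders 0, 4, 3 modulo 3, 5, 7; recover 2019 mod 105.
print(crt([0, 4, 3], [3, 5, 7]))  # 24, which equals 2019 mod 105
```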

### Stern–Brocot tree

The Euclidean algorithm can be used to arrange the set of all positive rational numbers into an infinite binary search tree, called the Stern–Brocot tree. The number 1 (expressed as a fraction 1/1) is placed at the root of the tree, and the location of any other number *a*/*b* can be found by computing gcd(*a*, *b*) using the original form of the Euclidean algorithm, in which each step replaces the larger of the two given numbers by its difference with the smaller number (not its remainder), stopping when two equal numbers are reached. A step of the Euclidean algorithm that replaces the first of the two numbers corresponds to a step in the tree from a node to its right child, and a step that replaces the second of the two numbers corresponds to a step in the tree from a node to its left child. The sequence of steps constructed in this way does not depend on whether *a*/*b* is given in lowest terms, and forms a path from the root to a node containing the number *a*/*b*.(Graham, R.; Knuth, D. E.; Patashnik, O., *Concrete Mathematics*, Addison-Wesley, 1989, p. 123) This fact can be used to prove that each positive rational number appears exactly once in this tree.

(thumb|400px|The Stern–Brocot tree, and the Stern–Brocot sequences of order *i* for *i* = 1, 2, 3, 4)

For example, 3/4 can be found by starting at the root, going to the left once, then to the right twice:

\begin{align}
& \gcd(3,4) & \leftarrow \\
& \gcd(3,1) & \rightarrow \\
& \gcd(2,1) & \rightarrow \\
& \gcd(1,1).
\end{align}

The Euclidean algorithm has almost the same relationship to another binary tree on the rational numbers called the Calkin–Wilf tree. The difference is that the path is reversed: instead of producing a path from the root of the tree to a target, it produces a path from the target to the root.

### Continued fractions

The Euclidean algorithm has a close relationship with continued fractions.(Vinogradov, I. M., *Elements of Number Theory*, Dover, New York, 1954, pp. 3–13) The sequence of equations can be written in the form

\frac{a}{b} = q_0 + \cfrac{1}{q_1 + \cfrac{r_1}{r_0}}.

The third equation may be used to substitute the denominator term *r*<sub>1</sub>/*r*<sub>0</sub>, yielding

\frac{a}{b} = q_0 + \cfrac{1}{q_1 + \cfrac{1}{q_2 + \cfrac{r_2}{r_1}}}.

The final ratio of remainders *r*<sub>*k*</sub>/*r*<sub>*k*−1</sub> can always be replaced using the next equation in the series, up to the final equation. The result is a continued fraction

\frac{a}{b} = q_0 + \cfrac{1}{q_1 + \cfrac{1}{q_2 + \cfrac{1}{\ddots + \cfrac{1}{q_N}}}} = [q_0; q_1, q_2, \ldots, q_N].

In the worked example above, the gcd(1071, 462) was calculated, and the quotients *q*<sub>*k*</sub> were 2, 3 and 7, respectively. Therefore, the fraction 1071/462 may be written

\frac{1071}{462} = 2 + \cfrac{1}{3 + \cfrac{1}{7}} = [2; 3, 7]

as can be confirmed by calculation.

### Factorization algorithms

Calculating a greatest common divisor is an essential step in several integer factorization algorithms,{{harvnb|Crandall|Pomerance|2001|pp=225–349}} such as Pollard's rho algorithm,{{harvnb|Knuth|1997|pp=369–371}} Shor's algorithm,(Shor, P. W., "Polynomial-Time Algorithms for Prime Factorization and Discrete Logarithms on a Quantum Computer", *SIAM Journal on Scientific and Statistical Computing*, 26:1484, 1997, doi:10.1137/s0097539795293172) Dixon's factorization method(Dixon, J. D., "Asymptotically fast factorization of integers", *Math. Comput.*, 36:255–260, 1981, doi:10.2307/2007743) and the Lenstra elliptic curve factorization.(Lenstra, H. W., Jr., "Factoring integers with elliptic curves", *Annals of Mathematics*, 126(3):649–673, 1987, doi:10.2307/1971363) The Euclidean algorithm may be used to find this GCD efficiently. Continued fraction factorization uses continued fractions, which are determined using Euclid's algorithm.{{harvnb|Knuth|1997|pp=380–384}}

## Algorithmic efficiency

(Image:Euclidean Algorithm Running Time.svg|thumb|alt="A set of colored lines radiating outwards from the origin of an x-y coordinate system. Each line corresponds to a set of number pairs requiring the same number of steps in the Euclidean algorithm."|Number of steps in the Euclidean algorithm for gcd(*x*, *y*). Lighter (red and yellow) points indicate relatively few steps, whereas darker (violet and blue) points indicate more steps. The largest dark area follows the line *y* = Φ*x*, where Φ is the golden ratio.)

The computational efficiency of Euclid's algorithm has been studied thoroughly.{{harvnb|Knuth|1997|pp=339–364}} This efficiency can be described by the number of division steps the algorithm requires, multiplied by the computational expense of each step. The first known analysis of Euclid's algorithm is due to A. A. L. Reynaud in 1811,(Reynaud, A.-A.-L., *Traité d'arithmétique à l'usage des élèves qui se destinent à l'École Polytechnique*, 6th ed., Courcier, Paris, 1811, Note 60, p. 34; as cited by {{harvtxt|Shallit|1994}}) who showed that the number of division steps on input (*u*, *v*) is bounded by *v*; later he improved this to *v*/2 + 2. Later, in 1841, P. J. E. Finck showed(Finck, P.-J.-E., *Traité élémentaire d'arithmétique à l'usage des candidats aux écoles spéciales*, Derivaux, 1841) that the number of division steps is at most 2 log<sub>2</sub> *v* + 1, and hence Euclid's algorithm runs in time polynomial in the size of the input.(Shallit, J., "Origins of the analysis of the Euclidean algorithm", *Historia Math.*, 21:401–419, 1994, doi:10.1006/hmat.1994.1031) Émile Léger, in 1837, studied the worst case, which is when the inputs are consecutive Fibonacci numbers. Finck's analysis was refined by Gabriel Lamé in 1844,(Lamé, G., "Note sur la limite du nombre des divisions dans la recherche du plus grand commun diviseur entre deux nombres entiers", *Comptes Rendus Acad. Sci.*, 19:867–870, 1844) who showed that the number of steps required for completion is never more than five times the number *h* of base-10 digits of the smaller number *b*.(Grossman, H., "On the Number of Divisions in Finding a G.C.D.", *The American Mathematical Monthly*, 31(9):443, 1924, doi:10.2307/2298146; Honsberger, R., *Mathematical Gems II*, The Mathematical Association of America, 1976, pp. 54–57)

In the uniform cost model (suitable for analyzing the complexity of gcd calculation on numbers that fit into a single machine word), each step of the algorithm takes constant time, and Lamé's analysis implies that the total running time is also *O*(*h*). However, in a model of computation suitable for computation with larger numbers, the computational expense of a single remainder computation in the algorithm can be as large as *O*(*h*<sup>2</sup>). In this case the total time for all of the steps of the algorithm can be analyzed using a telescoping series, showing that it is also *O*(*h*<sup>2</sup>). Modern algorithmic techniques based on the Schönhage–Strassen algorithm for fast integer multiplication can be used to speed this up, leading to quasilinear algorithms for the GCD.

### Number of steps

The number of steps to calculate the GCD of two natural numbers, *a* and *b*, may be denoted by *T*(*a*, *b*).{{harvnb|Knuth|1997|p=344}} If *g* is the GCD of *a* and *b*, then *a* = *mg* and *b* = *ng* for two coprime numbers *m* and *n*. Then

{{math|1=*T*(*a*, *b*) = *T*(*m*, *n*)}}

as may be seen by dividing all the steps in the Euclidean algorithm by *g*.{{Harvnb|Ore|1948|p=45}} By the same argument, the number of steps remains the same if *a* and *b* are multiplied by a common factor *w*: *T*(*a*, *b*) = *T*(*wa*, *wb*). Therefore, the number of steps *T* may vary dramatically between neighboring pairs of numbers, such as *T*(*a*, *b*) and *T*(*a*, *b* + 1), depending on the size of the two GCDs.

The recursive nature of the Euclidean algorithm gives another equation

{{math|1=*T*(*a*, *b*) = 1 + *T*(*b*, *r*<sub>0</sub>) = 2 + *T*(*r*<sub>0</sub>, *r*<sub>1</sub>) = … = *N* + *T*(*r*<sub>*N*−2</sub>, *r*<sub>*N*−1</sub>) = *N* + 1}}

where *T*(*x*, 0) = 0 by assumption.
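The invariance *T*(*a*, *b*) = *T*(*wa*, *wb*) is easy to confirm with a step counter; a quick sketch (the pair 12, 5 and the factor 7 are illustrative choices):

```python
def euclid_steps(a, b):
    # Count the division steps T(a, b) of the remainder-based algorithm.
    steps = 0
    while b != 0:
        a, b = b, a % b
        steps += 1
    return steps

# Scaling both arguments by a common factor leaves the count unchanged.
print(euclid_steps(12, 5), euclid_steps(12 * 7, 5 * 7))  # 3 3
```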

#### Worst-case

If the Euclidean algorithm requires*N*steps for a pair of natural numbers

*a*>

*b*> 0, the smallest values of

*a*and

*b*for which this is true are the Fibonacci numbers

*F*

**'N****+2 and**

*F**'N*+1, respectively.{{harvnb|Knuth|1997}}, p. 343 More precisely, if the Euclidean algorithm requires

*N*steps for the pair

*a*>

*b*, then one has

*a*â‰¥

*F*

**'N****+2 and**

*b*â‰¥*F**'N*+1. This can be shown by induction.{{Harvnb|Mollin|2008|p=21}} If

*N*= 1,

*b*divides

*a*with no remainder; the smallest natural numbers for which this is true is

*b*= 1 and

*a*= 2, which are

*F*2 and

*F*3, respectively. Now assume that the result holds for all values of

*N*up to

*M*âˆ’ 1. The first step of the

*M*-step algorithm is

*a*=

*q*0

*b*+

*r*0, and the Euclidean algorithm requires

*M*âˆ’ 1 steps for the pair

*b*>

*r*0. By induction hypothesis, one has

*b*â‰¥

*F*

**'M****+1 and**

*r*0 â‰¥*F**'M*. Therefore,

*a*=

*q*0

*b*+

*r*0 â‰¥

*b*+

*r*0 â‰¥

*F*

**'M****+1 +**

*F**'M*=

*F*

*M*+2,which is the desired inequality.This proof, published by Gabriel LamÃ© in 1844, represents the beginning of computational complexity theory,{{Harvnb|LeVeque|1996|p=35}} and also the first practical application of the Fibonacci numbers.This result suffices to show that the number of steps in Euclid's algorithm can never be more than five times the number of its digits (base 10).{{Harvnb|Mollin|2008|pp=21â€“22}} For if the algorithm requires

*N*steps, then

*b*is greater than or equal to

*F*

**'N****+1 which in turn is greater than or equal to**

*Ï†**'N*−1, where

*Ï†*is the golden ratio. Since

*b*â‰¥

*Ï†*

**'N****−1, then**

*N*− 1 â‰¤ log*Ï†**'b*. Since log10

*Ï†*> 1/5, (

*N*− 1)/5

#### Average number of steps

The average number of steps *T*(*a*) for a fixed first argument *a*, with the second argument *b* averaged over all values from 0 to *a* − 1, is

T(a) = \frac{1}{a} \sum_{0 \le b < a} T(a, b).

A second average τ(*a*) restricts *b* to the values coprime with *a*:

\tau(a) = \frac{1}{\varphi(a)} \sum_{\substack{0 \le b < a \\ \gcd(a, b) = 1}} T(a, b).

The two averages are related by

T(a) = \frac{1}{a} \sum_{d \mid a} \varphi(d) \tau(d),

and *T*(*a*) can be approximated by the formula(Norton, G. H., "On the Asymptotic Analysis of the Euclidean Algorithm", *Journal of Symbolic Computation*, 10:53–58, 1990, doi:10.1016/S0747-7171(08)80036-3)

T(a) \approx C + \frac{12}{\pi^2} \ln 2 \left( \ln a - \sum_{d \mid a} \frac{\Lambda(d)}{d} \right)

where Λ(*d*) is the Mangoldt function.{{harvnb|Knuth|1997|p=355}}

A third average *Y*(*n*) is defined as the mean number of steps required when both *a* and *b* are chosen randomly (with uniform distribution) from 1 to *n*:

Y(n) = \frac{1}{n^2} \sum_{a=1}^{n} \sum_{b=1}^{n} T(a, b) = \frac{1}{n} \sum_{a=1}^{n} T(a).

Substituting the approximate formula for *T*(*a*) into this equation yields an estimate for *Y*(*n*){{harvnb|Knuth|1997|p=356}}

Y(n) \approx \frac{12}{\pi^2} \ln 2 \ln n + 0.06.
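The estimate for *Y*(*n*) can be compared against a direct simulation; a rough sketch (the cutoff *n* = 200 is an arbitrary choice, and agreement is only approximate at small *n*):

```python
import math

def euclid_steps(a, b):
    # Count the division steps of the remainder-based algorithm.
    steps = 0
    while b != 0:
        a, b = b, a % b
        steps += 1
    return steps

n = 200
average = sum(euclid_steps(a, b) for a in range(1, n + 1)
              for b in range(1, n + 1)) / n**2
estimate = (12 / math.pi**2) * math.log(2) * math.log(n) + 0.06
print(round(average, 2), round(estimate, 2))  # the two values are close
```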

### Computational expense per step

In each step*k*of the Euclidean algorithm, the quotient

*q*

**'k****and remainder**

*r**'k*are computed for a given pair of integers

*r*

**'k****âˆ’2 and**

*r**'k*âˆ’1

{{math|1=

The computational expense per step is associated chiefly with finding *r***'k****âˆ’2 =***q**'k**r***'k****âˆ’1 +***r**'k*.}}*q*

**'k****, since the remainder**

*r**'k*can be calculated quickly from

*r*

**'k****âˆ’2,**

*r**'k*âˆ’1, and

*q*

*k*

{{math|1=

The computational expense of dividing *r***'k****=***r**'k*âˆ’2 âˆ’*q***'k***r**'k*âˆ’1.}}*h*-bit numbers scales as

*O*(

*h*(

*â„“*+1)), where

*â„“*is the length of the quotient.{{harvnb|Knuth|1997}}, pp. 257â€“261For comparison, Euclid's original subtraction-based algorithm can be much slower. A single integer division is equivalent to the quotient

*q*number of subtractions. If the ratio of

*a*and

*b*is very large, the quotient is large and many subtractions will be required. On the other hand, it has been shown that the quotients are very likely to be small integers. The probability of a given quotient

*q*is approximately ln|

*u*/(

*u*âˆ’ 1)| where

*u*= (

*q*+ 1)2.{{harvnb|Knuth|1997}}, p. 352 For illustration, the probability of a quotient of 1, 2, 3, or 4 is roughly 41.5%, 17.0%, 9.3%, and 5.9%, respectively. Since the operation of subtraction is faster than division, particularly for large numbers,BOOK, Wagon, S., Stan Wagon, 1999, Mathematica in Action, Springer-Verlag, New York, 0-387-98252-3, 335â€“336, the subtraction-based Euclid's algorithm is competitive with the division-based version.{{Harvnb|Cohen|1993|p=14}} This is exploited in the binary version of Euclid's algorithm.{{Harvnb|Cohen|1993|pp=14â€“15, 17â€“18}}Combining the estimated number of steps with the estimated computational expense per step shows that the Euclid's algorithm grows quadratically (

*h*2) with the average number of digits

*h*in the initial two numbers

*a*and

*b*. Let

*h*0,

*h*1, ...,

*h*

**'N****âˆ’1 represent the number of digits in the successive remainders**

*r*0,*r*1, ...,*r**'N*âˆ’1. Since the number of steps

*N*grows linearly with

*h*, the running time is bounded by

**- content above as imported from Wikipedia**

- time: 4:27am EDT - Sun, Sep 22 2019

- "__Euclidean algorithm__" does not exist on GetWiki (yet)

[ this remote article is provided by Wikipedia ]


© 2019 M.R.M. PARROTT | ALL RIGHTS RESERVED