# Linear algebra

**Linear algebra** is the branch of mathematics concerning linear equations such as

a_1 x_1 + \cdots + a_n x_n = b,

linear functions such as

(x_1, \ldots, x_n) \mapsto a_1 x_1 + \cdots + a_n x_n,

and their representations through matrices and vector spaces (Banerjee & Roy 2014; Strang 2005; Weisstein, Eric, "Linear Algebra", *MathWorld*, Wolfram).

Linear algebra is central to almost all areas of mathematics. For instance, linear algebra is fundamental in modern presentations of geometry, including for defining basic objects such as lines, planes and rotations. Also, functional analysis may be basically viewed as the application of linear algebra to spaces of functions. Linear algebra is also used in most sciences and engineering areas, because it allows modeling many natural phenomena, and computing efficiently with such models. For nonlinear systems, which cannot be modeled with linear algebra, linear algebra is often used as a first-order approximation.

## History

The procedure for solving simultaneous linear equations now called Gaussian elimination appears in Chapter Eight, *Rectangular Arrays*, of the ancient Chinese mathematical text *The Nine Chapters on the Mathematical Art*. Its use is illustrated in eighteen problems, with two to five equations (Hart, Roger, *The Chinese Roots of Linear Algebra*, JHU Press, 2010).

Systems of linear equations arose in Europe with the introduction in 1637 by René Descartes of coordinates in geometry. In fact, in this new geometry, now called Cartesian geometry, lines and planes are represented by linear equations, and computing their intersections amounts to solving systems of linear equations.

The first systematic methods for solving linear systems used determinants, first considered by Leibniz in 1693. In 1750, Gabriel Cramer used them for giving explicit solutions of linear systems, now called Cramer's rule. Later, Gauss further described the method of elimination, which was initially listed as an advancement in geodesy (Vitulli, Marie, "A Brief History of Linear Algebra and Matrix Theory", Department of Mathematics, University of Oregon).

In 1844 Hermann Grassmann published his "Theory of Extension", which included foundational new topics of what is today called linear algebra. In 1848, James Joseph Sylvester introduced the term *matrix*, which is Latin for *womb*.

Linear algebra grew with ideas noted in the complex plane. For instance, two numbers *w* and *z* in ℂ have a difference *w* − *z*, and the line segments \overline{wz} and \overline{0(w-z)} are of the same length and direction; the segments are equipollent. The four-dimensional system ℍ of quaternions was started in 1843. The term *vector* was introduced as *v* = *x*i + *y*j + *z*k, representing a point in space. The quaternion difference *p* − *q* also produces a segment equipollent to \overline{pq}. Other hypercomplex number systems also used the idea of a linear space with a basis.

Arthur Cayley introduced matrix multiplication and the inverse matrix in 1856, making possible the general linear group. The mechanism of group representation became available for describing complex and hypercomplex numbers. Crucially, Cayley used a single letter to denote a matrix, thus treating a matrix as an aggregate object. He also realized the connection between matrices and determinants, and wrote "There would be many things to say about this theory of matrices which should, it seems to me, precede the theory of determinants".

Benjamin Peirce published his *Linear Associative Algebra* (1872), and his son Charles Sanders Peirce extended the work later (Benjamin Peirce (1872) *Linear Associative Algebra*, lithograph; new edition with corrections, notes, and an added 1875 paper by Peirce, plus notes by his son Charles Sanders Peirce, published in the *American Journal of Mathematics* v. 4, 1881, Johns Hopkins University, pp. 221–226).

## Vector spaces

Until the 19th century, linear algebra was introduced through systems of linear equations and matrices. In modern mathematics, the presentation through *vector spaces* is generally preferred, since it is more synthetic, more general (not limited to the finite-dimensional case), and conceptually simpler, although more abstract.

A vector space over a field *F* (often the field of the real numbers) is a set *V* equipped with two binary operations satisfying the following axioms. Elements of *V* are called *vectors*, and elements of *F* are called *scalars*. The first operation, *vector addition*, takes any two vectors *v* and *w* and outputs a third vector *v* + *w*. The second operation, *scalar multiplication*, takes any scalar *a* and any vector *v* and outputs a new vector *av*. The axioms that addition and scalar multiplication must satisfy are the following (in the list below, *u*, *v* and *w* are arbitrary elements of *V*, and *a* and *b* are arbitrary scalars in the field *F*) (Roman 2005, ch. 1, p. 27).

| Axiom | Signification |
|---|---|
| Associativity of addition | *u* + (*v* + *w*) = (*u* + *v*) + *w* |
| Commutativity of addition | *u* + *v* = *v* + *u* |
| Identity element of addition | There exists an element **0** in *V*, called the zero vector (or simply zero), such that *v* + **0** = *v* for all *v* in *V*. |
| Inverse elements of addition | For every *v* in *V*, there exists an element −*v* in *V*, called the additive inverse of *v*, such that *v* + (−*v*) = **0**. |
| Distributivity of scalar multiplication with respect to vector addition | *a*(*u* + *v*) = *au* + *av* |
| Distributivity of scalar multiplication with respect to field addition | (*a* + *b*)*v* = *av* + *bv* |
| Compatibility of scalar multiplication with field multiplication | *a*(*bv*) = (*ab*)*v*. This axiom does not assert the associativity of an operation, since there are two operations in question: scalar multiplication, *bv*, and field multiplication, *ab*. |
| Identity element of scalar multiplication | 1*v* = *v*, where 1 denotes the multiplicative identity of *F*. |

The first four axioms mean that *V* is an abelian group under addition.

Elements of a vector space may have various natures; for example, they can be sequences, functions, polynomials or matrices. Linear algebra is concerned with properties common to all vector spaces.
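For instance (a minimal sketch added here for illustration, not part of the original article; `numpy` is assumed), tuples of real numbers with componentwise operations form a vector space, and several of the axioms can be spot-checked numerically:

```python
import numpy as np

rng = np.random.default_rng(0)
u, v, w = rng.standard_normal((3, 4))   # three random vectors in R^4
a, b = 2.0, -3.0                        # two scalars

# A few of the axioms, checked componentwise (up to rounding):
assert np.allclose(u + (v + w), (u + v) + w)    # associativity of addition
assert np.allclose(u + v, v + u)                # commutativity of addition
assert np.allclose(a * (u + v), a * u + a * v)  # distributivity over vector addition
assert np.allclose((a + b) * v, a * v + b * v)  # distributivity over field addition
assert np.allclose(a * (b * v), (a * b) * v)    # compatibility of multiplications
```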

### Linear maps

**Linear maps** are mappings between vector spaces that preserve the vector-space structure. Given two vector spaces *V* and *W* over a field *F*, a linear map (also called, in some contexts, linear transformation, linear mapping or linear operator) is a map

T: V \to W

that is compatible with addition and scalar multiplication, that is,

T(u+v) = T(u) + T(v), \quad T(av) = aT(v)

for any vectors *u*, *v* in *V* and scalar *a* in *F*. This implies that for any vectors *u*, *v* in *V* and scalars *a*, *b* in *F*, one has

T(au+bv) = T(au) + T(bv) = aT(u) + bT(v).

When a bijective linear map exists between two vector spaces (that is, every vector from the second space is associated with exactly one in the first), the two spaces are isomorphic. Because an isomorphism preserves linear structure, two isomorphic vector spaces are "essentially the same" from the linear algebra point of view, in the sense that they cannot be distinguished by using vector space properties. An essential question in linear algebra is testing whether a linear map is an isomorphism or not, and, if it is not an isomorphism, finding its range (or image) and the set of elements that are mapped to the zero vector, called the kernel of the map. All these questions can be solved by using Gaussian elimination or some variant of this algorithm, as in the sketch below.
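As an illustration of how Gaussian elimination answers these questions, here is a minimal sketch (added here, not from the original article) using the `sympy` computer algebra system:

```python
# Sketch: the linear map T(x) = A x from R^3 to R^2,
# analyzed by Gaussian elimination as implemented in sympy.
from sympy import Matrix

A = Matrix([[1, 2, 3],
            [2, 4, 6]])

print(A.nullspace())    # basis of the kernel: two vectors, so T is not injective
print(A.columnspace())  # basis of the image: one vector, so T is not surjective
# Since the kernel is nonzero, T is not an isomorphism.
```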

### Subspaces, span, and basis

The study of subsets of vector spaces that are themselves vector spaces for the induced operations is fundamental, similarly as for many mathematical structures. These subsets are called linear subspaces. More precisely, a linear subspace of a vector space *V* over a field *F* is a subset *W* of *V* such that *u* + *v* and *au* are in *W*, for every *u*, *v* in *W*, and every *a* in *F*. (These conditions suffice for implying that *W* is a vector space.)

For example, the image of a linear map, and the inverse image of 0 by a linear map (called kernel or null space), are linear subspaces.

Another important way of forming a subspace is to consider linear combinations of a set *S* of vectors: the set of all sums

a_1 v_1 + a_2 v_2 + \cdots + a_k v_k,

where *v*₁, *v*₂, ..., *v_k* are in *S*, and *a*₁, *a*₂, ..., *a_k* are in *F*, form a linear subspace called the span of *S*. The span of *S* is also the intersection of all linear subspaces containing *S*. In other words, it is the smallest (for the inclusion relation) linear subspace containing *S*.

A set of vectors is linearly independent if none is in the span of the others. Equivalently, a set *S* of vectors is linearly independent if the only way to express the zero vector as a linear combination of elements of *S* is to take zero for every coefficient a_i.

A set of vectors that spans a vector space is called a spanning set or generating set. If a spanning set *S* is *linearly dependent* (that is, not linearly independent), then some element *w* of *S* is in the span of the other elements of *S*, and the span would remain the same if one removed *w* from *S*. One may continue to remove elements of *S* until getting a *linearly independent spanning set*. Such a linearly independent set that spans a vector space *V* is called a basis of *V*. The importance of bases lies in the fact that they are simultaneously minimal generating sets and maximal independent sets. More precisely, if *S* is a linearly independent set, and *T* is a spanning set such that S \subseteq T, then there is a basis *B* such that S \subseteq B \subseteq T.

Any two bases of a vector space *V* have the same cardinality, which is called the dimension of *V*; this is the dimension theorem for vector spaces. Moreover, two vector spaces over the same field *F* are isomorphic if and only if they have the same dimension (Axler 2004, p. 55).

If any basis of *V* (and therefore every basis) has a finite number of elements, *V* is a *finite-dimensional vector space*. If *U* is a subspace of *V*, then dim *U* ≤ dim *V*. In the case where *V* is finite-dimensional, the equality of the dimensions implies *U* = *V*.

If *U*₁ and *U*₂ are subspaces of *V*, then

\dim(U_1 + U_2) = \dim U_1 + \dim U_2 - \dim(U_1 \cap U_2),

where U_1 + U_2 denotes the span of U_1 \cup U_2 (Axler 2004, p. 33).
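As a quick numeric illustration (a sketch added here, not from the original article; `numpy` assumed), the dimension formula can be checked with ranks, since the dimension of a span is the rank of the matrix whose columns are the generators:

```python
# Checking dim(U1 + U2) = dim U1 + dim U2 - dim(U1 ∩ U2)
# for two planes in R^3 that intersect in a line.
import numpy as np

# Columns are generators: U1 = span{e1, e2}, U2 = span{e2, e3}.
U1 = np.array([[1., 0.], [0., 1.], [0., 0.]])
U2 = np.array([[0., 0.], [1., 0.], [0., 1.]])

dim_U1 = np.linalg.matrix_rank(U1)                    # 2
dim_U2 = np.linalg.matrix_rank(U2)                    # 2
dim_sum = np.linalg.matrix_rank(np.hstack([U1, U2]))  # dim(U1 + U2) = 3

# The formula gives dim(U1 ∩ U2) = 2 + 2 - 3 = 1: the planes meet in a line.
print(dim_U1 + dim_U2 - dim_sum)  # 1
```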

## Matrices

Matrices allow explicit manipulation of finite-dimensional vector spaces and linear maps. Their theory is thus an essential part of linear algebra.

Let *V* be a finite-dimensional vector space over a field *F*, and (*v*₁, *v*₂, ..., *v_m*) be a basis of *V* (thus *m* is the dimension of *V*). By definition of a basis, the map

\begin{align} F^m &\to V \\ (a_1, \ldots, a_m) &\mapsto a_1 v_1 + \cdots + a_m v_m \end{align}

is a bijection from F^m, the set of the sequences of *m* elements of *F*, onto *V*. This is an isomorphism of vector spaces, if F^m is equipped with its standard structure of vector space, where vector addition and scalar multiplication are done component by component. This isomorphism allows representing a vector by its inverse image under this isomorphism, that is, by the coordinate vector (a_1, \ldots, a_m) or by the column matrix

\begin{bmatrix} a_1 \\ \vdots \\ a_m \end{bmatrix}.
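For instance (an illustrative sketch, not from the original article; `numpy` assumed), the coordinates of a vector with respect to a basis are obtained by solving a linear system whose columns are the basis vectors:

```python
import numpy as np

# Basis of R^2: v1 = (1, 1), v2 = (1, -1), written as columns.
B = np.array([[1., 1.],
              [1., -1.]])
v = np.array([3., 1.])

# Coordinates (a1, a2) such that a1*v1 + a2*v2 = v.
a = np.linalg.solve(B, v)
print(a)  # [2. 1.], since 2*(1,1) + 1*(1,-1) = (3,1)
```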

If *W* is another finite-dimensional vector space (possibly the same), with a basis (w_1, \ldots, w_n), a linear map *f* from *W* to *V* is well defined by its values on the basis elements, that is, (f(w_1), \ldots, f(w_n)). Thus, *f* is well represented by the list of the corresponding column matrices. That is, if

f(w_j) = a_{1,j} v_1 + \cdots + a_{m,j} v_m,

for *j* = 1, ..., *n*, then *f* is represented by the matrix

\begin{bmatrix} a_{1,1} & \ldots & a_{1,n} \\ \vdots & \ddots & \vdots \\ a_{m,1} & \ldots & a_{m,n} \end{bmatrix},

with *m* rows and *n* columns.

Matrix multiplication is defined in such a way that the product of two matrices is the matrix of the composition of the corresponding linear maps, and the product of a matrix and a column matrix is the column matrix representing the result of applying the represented linear map to the represented vector. It follows that the theory of finite-dimensional vector spaces and the theory of matrices are two different languages for expressing exactly the same concepts.

Two matrices that encode the same linear map, with respect to different choices of bases, are called equivalent. Two matrices are equivalent if and only if one can be transformed into the other by elementary row and column operations. For a matrix representing a linear map from *W* to *V*, the row operations correspond to changes of bases in *V* and the column operations correspond to changes of bases in *W*. Every matrix is equivalent to an identity matrix possibly bordered by zero rows and zero columns. In terms of vector spaces, this means that, for any linear map from *W* to *V*, there are bases such that a part of the basis of *W* is mapped bijectively onto a part of the basis of *V*, and that the remaining basis elements of *W*, if any, are mapped to zero. (This is a way of expressing the fundamental theorem of linear algebra.) Gaussian elimination is the basic algorithm for finding these elementary operations, and proving this theorem.
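The correspondence between composition of maps and multiplication of matrices can be checked numerically (a sketch added here, not from the original article; `numpy` assumed):

```python
import numpy as np

A = np.array([[1., 2.], [3., 4.]])   # matrix of a map g: R^2 -> R^2
B = np.array([[0., 1.], [1., 0.]])   # matrix of a map f: R^2 -> R^2
v = np.array([1., -1.])

# Applying f, then g, to v agrees with multiplying by the product A @ B.
print(A @ (B @ v))   # g(f(v))
print((A @ B) @ v)   # the same vector
```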

## Linear systems

Systems of linear equations form a fundamental part of linear algebra. Historically, linear algebra and matrix theory have been developed for solving such systems. In the modern presentation of linear algebra through vector spaces and matrices, many problems may be interpreted in terms of linear systems.

For example, let

\begin{alignat}{4} 2x &\;+\;& y &\;-\;& z &\;=\;& 8 \\ -3x &\;-\;& y &\;+\;& 2z &\;=\;& -11 \\ -2x &\;+\;& y &\;+\;& 2z &\;=\;& -3 \end{alignat} \qquad \text{(S)}

be a linear system. To such a system, one may associate its matrix

M = \left[\begin{array}{rrr} 2 & 1 & -1 \\ -3 & -1 & 2 \\ -2 & 1 & 2 \end{array}\right]

and its right-hand side vector

v = \begin{bmatrix} 8 \\ -11 \\ -3 \end{bmatrix}.

Let *T* be the linear transformation associated to the matrix *M*. A solution of the system (S) is a vector

X = \begin{bmatrix} x \\ y \\ z \end{bmatrix}

such that

T(X) = v,

that is, an element of the preimage of *v* by *T*.

Let (S') be the associated homogeneous system, where the right-hand sides of the equations are put to zero. The solutions of (S') are exactly the elements of the kernel of *T* or, equivalently, of *M*.

Gaussian elimination consists of performing elementary row operations on the augmented matrix

\left[\begin{array}{rrr|r} 2 & 1 & -1 & 8 \\ -3 & -1 & 2 & -11 \\ -2 & 1 & 2 & -3 \end{array}\right]

for putting it in reduced row echelon form. These row operations do not change the set of solutions of the system of equations. In the example, the reduced echelon form is

\left[\begin{array}{rrr|r} 1 & 0 & 0 & 2 \\ 0 & 1 & 0 & 3 \\ 0 & 0 & 1 & -1 \end{array}\right],

showing that the system (S) has the unique solution

x = 2, \quad y = 3, \quad z = -1.
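A quick numeric check (an added sketch, not from the original article; `numpy` assumed):

```python
import numpy as np

M = np.array([[ 2.,  1., -1.],
              [-3., -1.,  2.],
              [-2.,  1.,  2.]])
v = np.array([8., -11., -3.])

# numpy solves the system via an LU factorization,
# a variant of Gaussian elimination.
print(np.linalg.solve(M, v))  # [ 2.  3. -1.]
```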

It follows from this matrix interpretation of linear systems that the same methods can be applied for solving linear systems and for many operations on matrices and linear transformations, including the computation of ranks, kernels, and matrix inverses.

## Endomorphisms and square matrices

A linear endomorphism is a linear map that maps a vector space *V* to itself. If *V* has a basis of *n* elements, such an endomorphism is represented by a square matrix of size *n*.

In comparison with general linear maps, linear endomorphisms and square matrices have some specific properties that make their study an important part of linear algebra, which is used in many parts of mathematics, including geometric transformations, coordinate changes, and quadratic forms.

### Determinant

The *determinant* of a square matrix is a polynomial function of the entries of the matrix, such that the matrix is invertible if and only if the determinant is not zero. This results from the fact that the determinant of a product of matrices is the product of the determinants, and thus that a matrix is invertible if and only if its determinant is invertible.

Cramer's rule is a closed-form expression, in terms of determinants, of the solution of a system of *n* linear equations in *n* unknowns. Cramer's rule is useful for reasoning about the solution, but, except for *n* = 2 or 3, it is rarely used for computing a solution, since Gaussian elimination is a faster algorithm.

The *determinant of an endomorphism* is the determinant of the matrix representing the endomorphism in terms of some ordered basis. This definition makes sense, since this determinant is independent of the choice of the basis.
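As an added sketch (not from the original article; `numpy` assumed), Cramer's rule for the system (S) above replaces each column of *M* by *v* in turn:

```python
import numpy as np

M = np.array([[ 2.,  1., -1.],
              [-3., -1.,  2.],
              [-2.,  1.,  2.]])
v = np.array([8., -11., -3.])

det_M = np.linalg.det(M)  # nonzero, so (S) has a unique solution
for i in range(3):
    Mi = M.copy()
    Mi[:, i] = v          # replace the i-th column by the right-hand side
    print(np.linalg.det(Mi) / det_M)  # 2.0, 3.0, -1.0
```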

### Eigenvalues and eigenvectors

If *f* is a linear endomorphism of a vector space *V* over a field *F*, an **eigenvector** of *f* is a nonzero vector *v* of *V* such that *f*(*v*) = *av* for some scalar *a* in *F*. This scalar *a* is an **eigenvalue** of *f*.

If the dimension of *V* is finite, and a basis has been chosen, *f* and *v* may be represented, respectively, by a square matrix *M* and a column matrix *z*; the equation defining eigenvectors and eigenvalues becomes

Mz=az.

Using the identity matrix *I*, all of whose entries are zero except those of the main diagonal, which are equal to one, this may be rewritten
(M-aI)z=0.

As *z* is supposed to be nonzero, this means that *M* − *aI* is a singular matrix, and thus that its determinant \det(M - aI) equals zero. The eigenvalues are thus the roots of the polynomial

\det(xI - M).

If *V* is of dimension *n*, this is a monic polynomial of degree *n*, called the characteristic polynomial of the matrix (or of the endomorphism), and there are, at most, *n* eigenvalues.

If a basis exists that consists only of eigenvectors, the matrix of *f* on this basis has a very simple structure: it is a diagonal matrix whose entries on the main diagonal are eigenvalues, and whose other entries are zero. In this case, the endomorphism and the matrix are said to be diagonalizable. More generally, an endomorphism and a matrix are also said to be diagonalizable if they become diagonalizable after extending the field of scalars. In this extended sense, if the characteristic polynomial is square-free, then the matrix is diagonalizable.

A symmetric matrix is always diagonalizable. There are non-diagonalizable matrices, the simplest being
\begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix}

(it cannot be diagonalizable, since its square is the zero matrix, while the square of a nonzero diagonal matrix is never zero).

When an endomorphism is not diagonalizable, there are bases on which it has a simple form, although not as simple as the diagonal form. The Frobenius normal form does not need any extension of the field of scalars and makes the characteristic polynomial immediately readable on the matrix. The Jordan normal form requires extending the field of scalars so that it contains all eigenvalues, and differs from the diagonal form only by some entries, just above the main diagonal, that are equal to 1.
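To illustrate (an added sketch, not from the original article; `numpy` assumed), the eigenvalues of the nilpotent matrix above are both 0, yet the matrix is not the zero matrix, so it cannot be diagonalizable; a symmetric matrix, by contrast, diagonalizes with an orthonormal eigenbasis:

```python
import numpy as np

N = np.array([[0., 1.],
              [0., 0.]])
print(np.linalg.eigvals(N))  # [0. 0.]: if N were diagonalizable it would be 0

S = np.array([[2., 1.],
              [1., 2.]])
w, Q = np.linalg.eigh(S)     # eigh: eigendecomposition for symmetric matrices
print(w)                     # eigenvalues [1. 3.]
print(Q @ np.diag(w) @ Q.T)  # reconstructs S, i.e. S = Q diag(w) Q^T
```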

## Duality

A linear form is a linear map from a vector space *V* over a field *F* to the field of scalars *F*, viewed as a vector space over itself. Equipped with pointwise addition and multiplication by a scalar, the linear forms form a vector space, called the **dual space** of *V*, and usually denoted V^*.

If v_1, \ldots, v_n is a basis of *V* (this implies that *V* is finite-dimensional), then one can define, for *i* = 1, ..., *n*, a linear map v_i^* such that v_i^*(v_i) = 1 and v_i^*(v_j) = 0 if *j* ≠ *i*. These linear maps form a basis of V^*, called the dual basis of v_1, \ldots, v_n. (If *V* is not finite-dimensional, the v_i^* may be defined similarly; they are linearly independent, but do not form a basis.)

For *v* in *V*, the map

f \to f(v)

is a linear form on V^*. This defines the canonical linear map from *V* into V^{**}, the dual of V^*, called the **bidual** of *V*. This canonical map is an isomorphism if *V* is finite-dimensional, and this allows identifying *V* with its bidual. (In the infinite-dimensional case, the canonical map is injective, but not surjective.)

There is thus a complete symmetry between a finite-dimensional vector space and its dual. This motivates the frequent use, in this context, of the bra–ket notation

\langle f, x \rangle

for denoting *f*(*x*).

### Dual map

Let

f: V \to W

be a linear map. For every linear form *h* on *W*, the composite function *h* ∘ *f* is a linear form on *V*. This defines a linear map

f^*: W^* \to V^*

between the dual spaces, which is called the **dual** or the **transpose** of *f*.

If *V* and *W* are finite-dimensional, and *M* is the matrix of *f* in terms of some ordered bases, then the matrix of f^* over the dual bases is the transpose M^\mathsf{T} of *M*, obtained by exchanging rows and columns.

If elements of vector spaces and their duals are represented by column vectors, this duality may be expressed in bra–ket notation by

\langle h^\mathsf{T}, Mv \rangle = \langle h^\mathsf{T} M, v \rangle.

For highlighting this symmetry, the two members of this equality are sometimes written

\langle h^\mathsf{T} \mid M \mid v \rangle.
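Numerically (an added sketch, not from the original article; `numpy` assumed), the identity says that pairing *h* with *Mv* equals pairing the transposed image M^\mathsf{T} h with *v*:

```python
import numpy as np

M = np.array([[1., 2.],
              [3., 4.]])
h = np.array([1., -1.])  # a linear form, as its vector of coefficients
v = np.array([2., 5.])

print(h @ (M @ v))       # <h, Mv>
print((M.T @ h) @ v)     # <M^T h, v>: the same scalar
```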

### Inner-product spaces

Besides these basic concepts, linear algebra also studies vector spaces with additional structure, such as an inner product. The inner product is an example of a bilinear form, and it gives the vector space a geometric structure by allowing for the definition of length and angles. Formally, an *inner product* is a map

\langle \cdot, \cdot \rangle : V \times V \to F

that satisfies the following three axioms for all vectors *u*, *v*, *w* in *V* and all scalars *a* in *F* (P. K. Jain and Khalil Ahmad, *Functional Analysis*, 2nd ed., New Age International, 1995, §5.1, p. 203; Eduard Prugovečki, *Quantum Mechanics in Hilbert Space*, 2nd ed., Academic Press, 1981, Definition 2.1, p. 18 ff.):

- Conjugate symmetry:

\langle u, v \rangle = \overline{\langle v, u \rangle}.

Over **R**, the inner product is symmetric.

- Linearity in the first argument:

\langle au, v \rangle = a \langle u, v \rangle,
\qquad
\langle u+v, w \rangle = \langle u, w \rangle + \langle v, w \rangle.

- Positive-definiteness:

\langle v, v \rangle \geq 0,

with equality only for *v* = 0.

We can define the length of a vector *v* in *V* by

\|v\|^2 = \langle v, v \rangle,

and we can prove the Cauchy–Schwarz inequality:

|\langle u, v \rangle| \leq \|u\| \cdot \|v\|.

In particular, the quantity

\frac{|\langle u, v \rangle|}{\|u\| \cdot \|v\|} \leq 1,

and so we can call this quantity the cosine of the angle between the two vectors.

Two vectors are orthogonal if \langle u, v \rangle = 0. An orthonormal basis is a basis in which all basis vectors have length 1 and are orthogonal to each other. Given any finite-dimensional vector space, an orthonormal basis can be found by the Gram–Schmidt procedure. Orthonormal bases are particularly easy to deal with, since if v = a_1 v_1 + \cdots + a_n v_n, then a_i = \langle v, v_i \rangle.

The inner product facilitates the construction of many useful concepts. For instance, given a transform *T*, we can define its Hermitian conjugate T^* as the linear transform satisfying

\langle Tu, v \rangle = \langle u, T^* v \rangle.

If *T* satisfies TT^* = T^*T, we call *T* normal. It turns out that normal matrices are precisely the matrices that have an orthonormal system of eigenvectors that span *V*.
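A minimal sketch of the Gram–Schmidt procedure (added here for illustration, not part of the original article; `numpy` and real scalars assumed):

```python
import numpy as np

def gram_schmidt(vectors):
    """Turn a list of linearly independent vectors into an orthonormal basis."""
    basis = []
    for v in vectors:
        # Subtract the components along the already-built orthonormal vectors.
        for q in basis:
            v = v - (q @ v) * q
        basis.append(v / np.linalg.norm(v))
    return basis

vs = [np.array([1., 1., 0.]), np.array([1., 0., 1.])]
q1, q2 = gram_schmidt(vs)
print(q1 @ q2)             # ~0: orthogonal
print(np.linalg.norm(q1))  # 1.0: unit length
```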

## Relationship with geometry

There is a strong relationship between linear algebra and geometry, which started with the introduction by René Descartes, in 1637, of Cartesian coordinates. In this new (at that time) geometry, now called Cartesian geometry, points are represented by Cartesian coordinates, which are sequences of three real numbers (in the case of the usual three-dimensional space). The basic objects of geometry, which are lines and planes, are represented by linear equations. Thus, computing intersections of lines and planes amounts to solving systems of linear equations. This was one of the main motivations for developing linear algebra.

Most geometric transformations, such as translations, rotations, reflections, rigid motions, isometries, and projections, transform lines into lines. It follows that they can be defined, specified and studied in terms of linear maps. This is also the case of homographies and Möbius transformations, when considered as transformations of a projective space.

Until the end of the 19th century, geometric spaces were defined by axioms relating points, lines and planes (synthetic geometry). Around this date, it appeared that one may also define geometric spaces by constructions involving vector spaces (see, for example, Projective space and Affine space). It has been shown that the two approaches are essentially equivalent (Emil Artin (1957), *Geometric Algebra*, Interscience Publishers). In classical geometry, the involved vector spaces are vector spaces over the reals, but the constructions may be extended to vector spaces over any field, allowing consideration of geometry over arbitrary fields, including finite fields. Presently, most textbooks introduce geometric spaces from linear algebra, and geometry is often presented, at an elementary level, as a subfield of linear algebra.

## Usage and applications{{anchor|Applications}}

Linear algebra is used in almost all areas of mathematics, and therefore in almost all scientific domains that use mathematics. These applications may be divided into several wide categories.

### Geometry of our ambient space

The modeling of our ambient space is based on geometry. Sciences concerned with this space use geometry widely. This is the case with mechanics and robotics, for describing rigid body dynamics; geodesy, for describing the shape of the Earth; perspectivity, computer vision, and computer graphics, for describing the relationship between a scene and its plane representation; and many other scientific domains.

In all these applications, synthetic geometry is often used for general descriptions and a qualitative approach, but for the study of explicit situations, one must compute with coordinates. This requires the heavy use of linear algebra.

### Functional analysis

Functional analysis studies function spaces. These are vector spaces with additional structure, such as Hilbert spaces. Linear algebra is thus a fundamental part of functional analysis and its applications, which include, in particular, quantum mechanics (wave functions).

### Study of complex systems

Most physical phenomena are modeled by partial differential equations. To solve them, one usually decomposes the space in which the solutions are sought into small, mutually interacting cells. For linear systems this interaction involves linear functions. For nonlinear systems, this interaction is often approximated by linear functions. This may have the consequence that some physically interesting solutions are omitted. In both cases, very large matrices are generally involved. Weather forecasting is a typical example, where the whole Earth's atmosphere is divided into cells of, say, 100 km of width and 100 m of height.

### Scientific computation

Nearly all scientific computations involve linear algebra. Consequently, linear algebra algorithms have been highly optimized. BLAS and LAPACK are the best-known implementations. For improving efficiency, some of them configure the algorithms automatically, at run time, for adapting them to the specificities of the computer (cache size, number of available cores, ...). Some processors, typically graphics processing units (GPUs), are designed with a matrix structure, for optimizing the operations of linear algebra.

## Extensions and generalizations

This section presents several related topics that do not appear generally in elementary textbooks on linear algebra, but are commonly considered, in advanced mathematics, as parts of linear algebra.

### Module theory

The existence of multiplicative inverses in fields is not involved in the axioms defining a vector space. One may thus replace the field of scalars by a ring *R*, and this gives a structure called a **module** over *R*, or *R*-module.

The concepts of linear independence, span, basis, and linear maps (also called module homomorphisms) are defined for modules exactly as for vector spaces, with the essential difference that, if *R* is not a field, there are modules that do not have any basis. The modules that have a basis are the free modules, and those that are spanned by a finite set are the finitely generated modules. Module homomorphisms between finitely generated free modules may be represented by matrices. The theory of matrices over a ring is similar to that of matrices over a field, except that determinants exist only if the ring is commutative, and that a square matrix over a commutative ring is invertible only if its determinant has a multiplicative inverse in the ring.

Vector spaces are completely characterized by their dimension (up to an isomorphism). In general, there is no such complete classification for modules, even if one restricts oneself to finitely generated modules. However, every module is a cokernel of a homomorphism of free modules.

Modules over the integers can be identified with abelian groups, since multiplication by an integer may be identified with repeated addition. Most of the theory of abelian groups may be extended to modules over a principal ideal domain. In particular, over a principal ideal domain, every submodule of a free module is free, and the fundamental theorem of finitely generated abelian groups may be extended straightforwardly to finitely generated modules over a principal ring.

There are many rings for which there are algorithms for solving linear equations and systems of linear equations. However, these algorithms generally have a computational complexity that is much higher than that of the similar algorithms over a field. For more details, see Linear equation over a ring.

### Multilinear algebra and tensors

In multilinear algebra, one considers multivariable linear transformations, that is, mappings that are linear in each of a number of different variables. This line of inquiry naturally leads to the idea of the dual space, the vector space V^* consisting of linear maps f: V \to F, where *F* is the field of scalars. Multilinear maps T: V^n \to F can be described via tensor products of elements of V^*.

If, in addition to vector addition and scalar multiplication, there is a bilinear vector product V \times V \to V, the vector space is called an algebra; for instance, associative algebras are algebras with an associative vector product (like the algebra of square matrices, or the algebra of polynomials).

### Topological vector spaces

Functional analysis mixes the methods of linear algebra with those of mathematical analysis and studies various function spaces, such as the L^p spaces.

### Homological algebra

## See also

- Linear equation over a ring
- Fundamental matrix in computer vision
- Linear regression, a statistical estimation method
- List of linear algebra topics
- Numerical linear algebra
- Linear programming
- Transformation matrix

## Further reading

### History

- Fearnley-Sander, Desmond, "Hermann Grassmann and the Creation of Linear Algebra", *American Mathematical Monthly* **86** (1979), pp. 809–817.
- {{Citation|last=Grassmann|first=Hermann|authorlink=Hermann Grassmann|title=Die lineale Ausdehnungslehre ein neuer Zweig der Mathematik: dargestellt und durch Anwendungen auf die übrigen Zweige der Mathematik, wie auch auf die Statik, Mechanik, die Lehre vom Magnetismus und die Krystallonomie erläutert|publisher=O. Wigand|location=Leipzig|year=1844}}

### Introductory textbooks

- {{Citation|last=Anton|first=Howard|year=2005|title=Elementary Linear Algebra (Applications Version)|publisher=Wiley International|edition=9th}}
- {{Citation | last = Banerjee | first = Sudipto | last2 = Roy | first2 = Anindya | date = 2014 | title = Linear Algebra and Matrix Analysis for Statistics | series = Texts in Statistical Science | publisher = Chapman and Hall/CRC | edition = 1st | isbn = 978-1420095388}}
- {{Citation|last=Bretscher|first=Otto|year=2004|title=Linear Algebra with Applications|publisher=Prentice Hall|edition=3rd|isbn=978-0-13-145334-0}}
- {{Citation|last=Farin|first=Gerald|last2=Hansford|first2=Dianne|title=Practical Linear Algebra: A Geometry Toolbox|publisher=A K Peters}}

- {{Citation|last=Hefferon|first=Jim|year=2008|title=Linear Algebra|url=http://joshua.smcvt.edu/linearalgebra/}}
- {{Citation|last=Kolman|first=Bernard|last2=Hill|first2=David R.|year=2007|title=Elementary Linear Algebra with Applications|publisher=Prentice Hall|edition=9th|isbn=978-0-13-229654-0}}
- {{Citation|last=Lay|first=David C.|year=2005|title=Linear Algebra and Its Applications|publisher=Addison Wesley|edition=3rd|isbn=978-0-321-28713-7}}
- {{Citation|last=Leon|first=Steven J.|year=2006|title=Linear Algebra With Applications|publisher=Pearson Prentice Hall|edition=7th|isbn=978-0-13-185785-8}}
- Murty, Katta G. (2014), *Computational and Algorithmic Linear Algebra and n-Dimensional Geometry*, World Scientific Publishing, {{isbn|978-981-4366-62-5}}. *Chapter 1: Systems of Simultaneous Linear Equations*.
- {{Citation|last=Poole|first=David|year=2010|title=Linear Algebra: A Modern Introduction|publisher=Cengage – Brooks/Cole|edition=3rd|isbn=978-0-538-73545-2}}
- {{Citation|last=Ricardo|first=Henry|year=2010|title=A Modern Introduction To Linear Algebra|publisher=CRC Press|edition=1st|isbn=978-1-4398-0040-9}}
- {{Citation|last=Sadun|first=Lorenzo|year=2008|title=Applied Linear Algebra: the decoupling principle|publisher=AMS|edition=2nd|isbn=978-0-8218-4441-0}}
- {{Citation|last=Strang|first=Gilbert|authorlink=Gilbert Strang|year=2016|title=Introduction to Linear Algebra|publisher=Wellesley-Cambridge Press|edition=5th|isbn=978-09802327-7-6}}
- The Manga Guide to Linear Algebra (2012), by Shin Takahashi, Iroha Inoue and Trend-Pro Co., Ltd., {{isbn| 978-1-59327-413-9}}

### Advanced textbooks

- {{Citation|last=Axler|first=Sheldon|authorlink=Sheldon Axler|date=February 26, 2004|title=Linear Algebra Done Right|publisher=Springer|edition=2nd|isbn=978-0-387-98258-8}}
- {{Citation|last=Bhatia|first=Rajendra|date=November 15, 1996|title=Matrix Analysis|series=Graduate Texts in Mathematics|publisher=Springer|isbn=978-0-387-94846-1}}
- {{Citation|last=Demmel|first=James W.|authorlink=James Demmel|date=August 1, 1997|title=Applied Numerical Linear Algebra|publisher=SIAM|isbn=978-0-89871-389-3}}
- {{Citation|last=Dym|first=Harry|year=2007|title=Linear Algebra in Action|publisher=AMS|isbn=978-0-8218-3813-6}}
- {{Citation|last=Gantmacher|first=Felix R.|authorlink = Felix Gantmacher|date=2005|title=Applications of the Theory of Matrices|publisher=Dover Publications|isbn=978-0-486-44554-0}}
- {{Citation|last=Gantmacher|first=Felix R.|year=1990|title=Matrix Theory Vol. 1|publisher=American Mathematical Society|edition=2nd|isbn=978-0-8218-1376-8}}
- {{Citation|last=Gantmacher|first=Felix R.|year=2000|title=Matrix Theory Vol. 2|publisher=American Mathematical Society|edition=2nd|isbn=978-0-8218-2664-5}}
- {{Citation|last=Gelfand|first=Israel M.|authorlink = Israel Gelfand|year=1989|title=Lectures on Linear Algebra|publisher=Dover Publications|isbn=978-0-486-66082-0}}
- {{Citation|last=Glazman|first=I. M.|last2=Ljubic|first2=Ju. I.|year=2006|title=Finite-Dimensional Linear Analysis|publisher=Dover Publications|isbn= 978-0-486-45332-3}}
- {{Citation|last=Golan|first=Johnathan S.|date=January 2007|title=The Linear Algebra a Beginning Graduate Student Ought to Know|publisher=Springer|edition=2nd|isbn=978-1-4020-5494-5}}
- {{Citation|last=Golan|first=Johnathan S.|date=August 1995|title=Foundations of Linear Algebra|publisher=Kluwer |isbn=0-7923-3614-3}}
- {{Citation|last=Golub|first=Gene H.|last2=Van Loan|first2=Charles F.|date=October 15, 1996|title=Matrix Computations|series=Johns Hopkins Studies in Mathematical Sciences|publisher=The Johns Hopkins University Press|edition=3rd|isbn=978-0-8018-5414-9}}
- {{Citation|last=Greub|first=Werner H.|date=October 16, 1981|title=Linear Algebra|series=Graduate Texts in Mathematics|publisher=Springer|edition=4th|isbn=978-0-8018-5414-9}}
- {{citation|last1=Hoffman|first1=Kenneth|last2=Kunze|first2=Ray|author2-link=Ray Kunze|edition=2nd|location=Englewood Cliffs, N.J.|mr=0276251|publisher=Prentice-Hall, Inc.|title=Linear algebra|year=1971}}

- {{Citation|last=Halmos|first=Paul R.|authorlink = Paul Halmos|date=August 20, 1993|title=Finite-Dimensional Vector Spaces|series=Undergraduate Texts in Mathematics|publisher=Springer|isbn=978-0-387-90093-3}}
- {{Citation|last=Friedberg|first=Stephen H.|last2=Insel|first2=Arnold J.|last3=Spence|first3=Lawrence E.|date=November 11, 2002|title=Linear Algebra|publisher=Prentice Hall|edition=4th|isbn=978-0-13-008451-4}}
- {{Citation|last=Horn|first=Roger A.|last2=Johnson|first2=Charles R.|date=February 23, 1990|title=Matrix Analysis|publisher=Cambridge University Press|isbn=978-0-521-38632-6}}
- {{Citation|last1=Horn|first1=Roger A.|last2=Johnson|first2=Charles R.|date=June 24, 1994|title=Topics in Matrix Analysis|publisher=Cambridge University Press|isbn=978-0-521-46713-1}}
- {{Citation|last=Lang|first=Serge|date=March 9, 2004|title=Linear Algebra|series=Undergraduate Texts in Mathematics|edition=3rd|publisher=Springer|isbn=978-0-387-96412-6}}
- {{Citation|last1=Marcus|first1=Marvin|last2=Minc|first2=Henryk|year=2010|title=A Survey of Matrix Theory and Matrix Inequalities|publisher=Dover Publications|isbn=978-0-486-67102-4}}
- {{Citation|last=Meyer|first=Carl D.|date=February 15, 2001|title=Matrix Analysis and Applied Linear Algebra|publisher=Society for Industrial and Applied Mathematics (SIAM)|isbn=978-0-89871-454-8|url=http://www.matrixanalysis.com/DownloadChapters.html}}
- {{Citation|last1=Mirsky|first1=L.|authorlink=Leon Mirsky|year=1990|title=An Introduction to Linear Algebra|publisher= Dover Publications|isbn=978-0-486-66434-7}}
- {{Citation|last=Roman|first=Steven|date=March 22, 2005|title=Advanced Linear Algebra|edition=2nd|series=Graduate Texts in Mathematics|publisher=Springer|isbn=978-0-387-24766-3}}
- {{Citation|last1=Shafarevich|first1=I. R.|authorlink1=Igor Shafarevich|last2=Remizov|first2=A. O.|year=2012|title=Linear Algebra and Geometry|publisher=Springer|isbn=978-3-642-30993-9}}
- {{Citation|last=Shilov|first=Georgi E.|authorlink = Georgiy Shilov|date=June 1, 1977|publisher=Dover Publications|isbn=978-0-486-63518-7|title=Linear algebra}}
- {{Citation|last=Shores|first=Thomas S.|date=December 6, 2006|title=Applied Linear Algebra and Matrix Analysis|series=Undergraduate Texts in Mathematics|publisher=Springer|isbn=978-0-387-33194-2}}
- {{Citation|last=Smith|first=Larry|date=May 28, 1998|title=Linear Algebra|series=Undergraduate Texts in Mathematics|publisher=Springer|isbn=978-0-387-98455-1}}
- {{Citation|last=Trefethen|first=Lloyd N.|last2=Bau|first2=David|date=1997|title=Numerical Linear Algebra|publisher=SIAM|isbn=978-0-898-71361-9}}

### Study guides and outlines

- {{Citation|last=Leduc|first=Steven A.|date=May 1, 1996|title=Linear Algebra (Cliffs Quick Review)|publisher=Cliffs Notes|isbn=978-0-8220-5331-6}}
- {{Citation|last=Lipschutz|first=Seymour|last2=Lipson|first2=Marc|date=December 6, 2000|title=Schaum's Outline of Linear Algebra|publisher=McGraw-Hill|edition=3rd|isbn=978-0-07-136200-9}}
- {{Citation|last=Lipschutz|first=Seymour|date=January 1, 1989|title=3,000 Solved Problems in Linear Algebra|publisher=McGraw–Hill|isbn=978-0-07-038023-3}}
- {{Citation|last=McMahon|first=David|date=October 28, 2005|title=Linear Algebra Demystified|publisher=McGraw–Hill Professional|isbn=978-0-07-146579-3}}
- {{Citation|last=Zhang|first=Fuzhen|date=April 7, 2009|title=Linear Algebra: Challenging Problems for Students|publisher=The Johns Hopkins University Press|isbn=978-0-8018-9125-0}}

## External links

### Online Resources

- MIT Linear Algebra Video Lectures, a series of 34 recorded lectures by professor Gilbert Strang (Spring 2010)
- International Linear Algebra Society
- {{springer|title=Linear algebra|id=p/l059040}}
- Linear Algebra on MathWorld.
- Matrix and Linear Algebra Terms on Earliest Known Uses of Some of the Words of Mathematics
- Earliest Uses of Symbols for Matrices and Vectors on Earliest Uses of Various Mathematical Symbols
- Essence of linear algebra, a video presentation of the basics of linear algebra, with emphasis on the relationship between the geometric, the matrix and the abstract points of view

### Online books

- Beezer, Rob, *A First Course in Linear Algebra*
- Connell, Edwin H., *Elements of Abstract and Linear Algebra*
- Hefferon, Jim, *Linear Algebra*
- Matthews, Keith, *Elementary Linear Algebra*
- Sharipov, Ruslan, *Course of linear algebra and multidimensional geometry*
- Treil, Sergei, *Linear Algebra Done Wrong*
