Linear algebra
{{Distinguish|Elementary algebra}}
[Image: Linear subspaces with shading: linear subspaces in the three-dimensional Euclidean space]
Linear algebra is the branch of mathematics concerning linear equations such as
a_1x_1+\cdots +a_nx_n=b,
linear functions such as
(x_1, \ldots, x_n) \mapsto a_1x_1+\cdots +a_nx_n,
and their representations through matrices and vector spaces.{{Citation | last = Banerjee | first = Sudipto | last2 = Roy | first2 = Anindya | date = 2014 | title = Linear Algebra and Matrix Analysis for Statistics | series = Texts in Statistical Science | publisher = Chapman and Hall/CRC | edition = 1st | isbn = 978-1420095388}}{{Citation|last=Strang|first=Gilbert|date=July 19, 2005|title=Linear Algebra and Its Applications|publisher=Brooks Cole|edition=4th|isbn=978-0-03-010567-8}}{{Citation|last=Weisstein|first=Eric|title=Linear Algebra|work=MathWorld: A Wolfram Web Resource|publisher=Wolfram|accessdate=16 April 2012}} Linear algebra is central to almost all areas of mathematics. For instance, linear algebra is fundamental in modern presentations of geometry, including for defining basic objects such as lines, planes and rotations. Also, functional analysis may basically be viewed as the application of linear algebra to spaces of functions. Linear algebra is also used in most sciences and engineering areas, because it allows modeling many natural phenomena, and efficiently computing with such models. For nonlinear systems, which cannot be modeled with linear algebra, linear algebra is often used as a first-order approximation.

History

From the study of determinants and matrices to modern linear algebra

The study of linear algebra first emerged from the introduction of determinants, used for solving systems of linear equations. Determinants were considered by Leibniz in 1693, and subsequently, in 1750, Gabriel Cramer used them for giving explicit solutions of linear systems, now called Cramer's rule. Later, Gauss further developed the theory of solving linear systems by using Gaussian elimination, which was initially listed as an advancement in geodesy.{{Citation|last=Vitulli|first=Marie A.|title=A Brief History of Linear Algebra and Matrix Theory|publisher=Department of Mathematics, University of Oregon|archivedate=2012-09-10|accessdate=2014-07-08}}

The study of matrix algebra first emerged in England in the mid-1800s. In 1844 Hermann Grassmann published his "Theory of Extension", which included foundational new topics of what is today called linear algebra. In 1848, James Joseph Sylvester introduced the term matrix, which is Latin for "womb". While studying compositions of linear transformations, Arthur Cayley was led to define matrix multiplication and inverses. Crucially, Cayley used a single letter to denote a matrix, thus treating a matrix as an aggregate object. He also realized the connection between matrices and determinants, and wrote "There would be many things to say about this theory of matrices which should, it seems to me, precede the theory of determinants". In 1882, Hüseyin Tevfik Pasha wrote the book titled "Linear Algebra".{{Citation|last=Tevfik|first=Hussein|title=Linear Algebra|year=1882|publisher=A.H. Boyajian|via=Internet Archive}}

The first modern and more precise definition of a vector space was introduced by Peano in 1888; by 1900, a theory of linear transformations of finite-dimensional vector spaces had emerged. Linear algebra took its modern form in the first half of the twentieth century, when many ideas and methods of previous centuries were generalized as abstract algebra. The use of matrices in quantum mechanics, special relativity, and statistics helped spread the subject of linear algebra beyond pure mathematics. The development of computers led to increased research in efficient algorithms for Gaussian elimination and matrix decompositions, and linear algebra became an essential tool for modelling and simulations. The origin of many of these ideas is discussed in the articles on determinants and Gaussian elimination.

Educational history

Linear algebra first appeared in American graduate textbooks in the 1940s and in undergraduate textbooks in the 1950s.{{Citation|last=Tucker|first=Alan|title=The Growing Importance of Linear Algebra in Undergraduate Mathematics|journal=College Mathematics Journal|year=1993|volume=24|issue=1|pages=3–9|doi=10.2307/2686426}} Following work by the School Mathematics Study Group, U.S. high schools asked 12th grade students to do "matrix algebra, formerly reserved for college" in the 1960s.{{Citation|last=Goodlad|first=John I.|last2=von Stoephasius|first2=Reneta|last3=Klein|first3=M. Frances|title=The changing school curriculum|publisher=U.S. Department of Health, Education, and Welfare: Office of Education|year=1966}} In France during the 1960s, educators attempted to teach linear algebra through finite-dimensional vector spaces in the first year of secondary school. This was met with a backlash in the 1980s that removed linear algebra from the curriculum.{{Citation|last=Dorier|first=Jean-Luc|last2=Robert|first2=Aline|last3=Robinet|first3=Jacqueline|last4=Rogalski|first4=Marc|editor-last=Dorier|editor-first=Jean-Luc|title=The Obstacle of Formalism in Linear Algebra|year=2000|publisher=Springer|isbn=978-0-7923-6539-6|pages=85–124}} In 1993, the U.S.-based Linear Algebra Curriculum Study Group recommended that undergraduate linear algebra courses be given an application-based "matrix orientation" as opposed to a theoretical orientation.{{Citation|last=Carlson|first=David|last2=Johnson|first2=Charles R.|last3=Lay|first3=David C.|last4=Porter|first4=A. Duane|title=The Linear Algebra Curriculum Study Group Recommendations for the First Course in Linear Algebra|journal=The College Mathematics Journal|year=1993|volume=24|issue=1|pages=41–46|doi=10.2307/2686430}} Reviews of the teaching of linear algebra call for stress on visualization and geometric interpretation of theoretical ideas,{{Citation|last=Schumacher|first=Carol S.|last2=Siegel|first2=Martha J.|last3=Zorn|first3=Paul|year=2015|title=2015 CUPM Curriculum Guide to Majors in the Mathematical Sciences|publisher=The Mathematical Association of America}} and for including the jewel in the crown of linear algebra, the singular-value decomposition (SVD), because "so many other disciplines use it".{{Citation|last=Turner|first=Peter R.|year=2015|title=Modeling across the Curriculum II. Report on the second SIAM-NSF Workshop, Alexandria, VA|publisher=SIAM}} To better suit 21st-century applications, such as data mining and uncertainty analysis, linear algebra can be based upon the SVD instead of Gaussian elimination.{{Citation|title=Professor SVD|website=au.mathworks.com|publisher=MathWorks|accessdate=21 April 2018}}{{Citation|last=Roberts|first=A. J.|year=2017|title=Linear Algebra Reformed for 21st-C Application}}

Scope of study

Vector spaces

The main structures of linear algebra are vector spaces. A vector space over a field F (often the field of the real numbers) is a set V equipped with two binary operations satisfying the following axioms. Elements of V are called vectors, and elements of F are called scalars. The first operation, vector addition, takes any two vectors v and w and outputs a third vector {{nowrap|v + w}}. The second operation, scalar multiplication, takes any scalar a and any vector v and outputs a new {{nowrap|vector av}}. The operations of addition and multiplication in a vector space must satisfy the following axioms.{{Harvard citations|last=Roman|year=2005|nb=yes|loc=ch. 1, p. 27}} In the list below, let u, v and w be arbitrary vectors in V, and a and b scalars in F.
{| border="0" style="width:100%;"
! Axiom !! Signification
|-
| Associativity of addition || u + (v + w) = (u + v) + w
|- style="background:#F8F4FF;"
| Commutativity of addition || u + v = v + u
|-
| Identity element of addition || There exists an element 0 ∈ V, called the zero vector, such that v + 0 = v for all v ∈ V.
|- style="background:#F8F4FF;"
| Inverse elements of addition || For every v ∈ V, there exists an element −v ∈ V, called the additive inverse of v, such that v + (−v) = 0.
|-
| Distributivity of scalar multiplication with respect to vector addition || a(u + v) = au + av
|- style="background:#F8F4FF;"
| Distributivity of scalar multiplication with respect to field addition || (a + b)v = av + bv
|-
| Compatibility of scalar multiplication with field multiplication || a(bv) = (ab)v (This axiom is not asserting the associativity of an operation, since there are two operations in question: scalar multiplication, bv, and field multiplication, ab.)
|- style="background:#F8F4FF;"
| Identity element of scalar multiplication || 1v = v, where 1 denotes the multiplicative identity in F.
|}
The first four axioms are those of V being an abelian group under vector addition. Elements of a vector space may have various natures; for example, they can be sequences, functions, polynomials or matrices. Linear algebra is concerned with properties common to all vector spaces.
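To make the axioms concrete, here is a minimal numerical sketch (not from the original article; it assumes Python with NumPy, and the random vectors and scalars are invented for the illustration) that spot-checks all eight axioms for vectors in R^3:
<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(0)
u, v, w = rng.standard_normal((3, 3))  # three arbitrary vectors in R^3
a, b = rng.standard_normal(2)          # two arbitrary scalars

assert np.allclose(u + (v + w), (u + v) + w)    # associativity of addition
assert np.allclose(u + v, v + u)                # commutativity of addition
assert np.allclose(v + np.zeros(3), v)          # identity element of addition
assert np.allclose(v + (-v), np.zeros(3))       # additive inverses
assert np.allclose(a * (u + v), a * u + a * v)  # distributivity over vector addition
assert np.allclose((a + b) * v, a * v + b * v)  # distributivity over field addition
assert np.allclose(a * (b * v), (a * b) * v)    # compatibility of multiplications
assert np.allclose(1 * v, v)                    # identity of scalar multiplication
print("all eight axioms hold on this sample")
</syntaxhighlight>
Of course, a numerical check on sample vectors is not a proof; the axioms hold for R^3 because they can be verified symbolically from the arithmetic of real numbers.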

Linear transformations

As in the theory of other algebraic structures, linear algebra studies mappings between vector spaces that preserve the vector-space structure. Given two vector spaces V and W over a field F, a linear transformation (also called linear map, linear mapping or linear operator) is a map
T:V\to W
that is compatible with addition and scalar multiplication:
T(u+v)=T(u)+T(v), \quad T(av)=aT(v)
for any vectors u, v in V and any scalar a in F. This implies that for any vectors u, v in V and scalars a, b in F, one has
T(au+bv)=T(au)+T(bv)=aT(u)+bT(v)
When a bijective linear mapping exists between two vector spaces (that is, a linear map under which every vector of the second space corresponds to exactly one vector of the first), we say that the two spaces are isomorphic. Because an isomorphism preserves linear structure, two isomorphic vector spaces are "essentially the same" from the linear algebra point of view. An essential question in linear algebra is testing whether a mapping is an isomorphism or not, and, if it is not an isomorphism, finding its range (or image) and the set of elements that get mapped to zero, called the kernel of the mapping. All these questions can be solved by using Gaussian elimination or some variant of this algorithm.
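As an illustration of computing the range and kernel in practice, the following hedged sketch (assuming NumPy; the matrix A is invented, and the SVD is used here in place of Gaussian elimination) finds the rank and a kernel basis of the map T(x) = Ax:
<syntaxhighlight lang="python">
import numpy as np

# The map T(x) = A x for a deliberately rank-deficient matrix A.
A = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0]])

U, s, Vt = np.linalg.svd(A)
tol = max(A.shape) * np.finfo(float).eps * s.max()
rank = int((s > tol).sum())      # dimension of the range (image) of T
kernel = Vt[rank:].T             # columns form a basis of the kernel of T

print(rank)                          # 1: the second row is twice the first
print(np.allclose(A @ kernel, 0.0))  # True: kernel vectors map to zero
</syntaxhighlight>
Here T is not an isomorphism: its rank is smaller than the dimensions of both the domain and codomain, so the kernel is nontrivial.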

Subspaces, span, and basis

Again, in analogy with theories of other algebraic objects, linear algebra is interested in subsets of vector spaces that are themselves vector spaces; these subsets are called linear subspaces. For example, both the range and kernel of a linear mapping are subspaces, and are thus often called the range space and the nullspace; these are important examples of subspaces. Another important way of forming a subspace is to take a linear combination of a set of vectors v1, v2, ..., vk:
a_1 v_1 + a_2 v_2 + \cdots + a_k v_k,
where a1, a2, ..., ak are scalars. The set of all linear combinations of vectors v1, v2, ..., vk is called their span, which forms a subspace.

A linear combination of any system of vectors with all zero coefficients is the zero vector of V. If this is the only way to express the zero vector as a linear combination of v1, v2, ..., vk, then these vectors are linearly independent. Given a set of vectors that span a space, if any vector w is a linear combination of the other vectors (and so the set is not linearly independent), then the span would remain the same if we removed w from the set. Thus, a set of linearly dependent vectors is redundant in the sense that there will be a linearly independent subset which spans the same subspace. Therefore, we are mostly interested in a linearly independent set of vectors that spans a vector space V, which we call a basis of V. Any set of vectors that spans V contains a basis, and any linearly independent set of vectors in V can be extended to a basis (Axler (2004), pp. 28–29). It turns out that if we accept the axiom of choice, every vector space has a basis (the existence of a basis is straightforward for countably generated vector spaces, and for well-ordered vector spaces, but in full generality it is logically equivalent to the axiom of choice); nevertheless, this basis may be unnatural, and indeed, may not even be constructible. For instance, there exists a basis for the real numbers, considered as a vector space over the rationals, but no explicit basis has been constructed.

Any two bases of a vector space V have the same cardinality, which is called the dimension of V. The dimension of a vector space is well-defined by the dimension theorem for vector spaces. If a basis of V has a finite number of elements, V is called a finite-dimensional vector space. If V is finite-dimensional and U is a subspace of V, then dim U ≤ dim V. If U1 and U2 are subspaces of V, then
\dim(U_1 + U_2) = \dim U_1 + \dim U_2 - \dim(U_1 \cap U_2) (Axler (2004), p. 33).
One often restricts consideration to finite-dimensional vector spaces. A fundamental theorem of linear algebra states that all vector spaces of the same dimension are isomorphic (Axler (2004), p. 55), giving an easy way of characterizing isomorphism.
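The dimension formula above can be spot-checked numerically. In this illustrative sketch (assumptions: NumPy; subspaces are represented by spanning columns, and dimensions are computed as matrix ranks), two 3-dimensional subspaces of R^5 are built to share exactly one direction:
<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(1)
B1 = rng.standard_normal((5, 3))   # columns span U1 (dimension 3, generically)
shared = B1[:, :1]                 # force one direction shared with U1
B2 = np.hstack([shared, rng.standard_normal((5, 2))])  # columns span U2

dim = np.linalg.matrix_rank
dim_sum = dim(np.hstack([B1, B2]))   # dim(U1 + U2): rank of the combined spans
print(dim(B1), dim(B2), dim_sum)     # 3 3 5
print(dim(B1) + dim(B2) - dim_sum)   # 1 = dim(U1 ∩ U2), by the formula
</syntaxhighlight>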

Matrix theory

A particular basis {v1, v2, ..., vn} of V allows one to construct a coordinate system in V: the vector with coordinates (a1, a2, ..., an) is the linear combination
a_1 v_1 + a_2 v_2 + \cdots + a_n v_n.
The condition that v1, v2, ..., vn span V guarantees that each vector v can be assigned coordinates, whereas the linear independence of v1, v2, ..., vn assures that these coordinates are unique (i.e. there is only one linear combination of the basis vectors that is equal to v). In this way, once a basis of a vector space V over F has been chosen, V may be identified with the coordinate n-space Fn. Under this identification, addition and scalar multiplication of vectors in V correspond to addition and scalar multiplication of their coordinate vectors in Fn. Furthermore, if V and W are an n-dimensional and m-dimensional vector space over F, and a basis of V and a basis of W have been fixed, then any linear transformation T: V → W may be encoded by an m × n matrix A with entries in the field F, called the matrix of T with respect to these bases. Two matrices that encode the same linear transformation in different bases are called similar. Matrix theory replaces the study of linear transformations, which were defined axiomatically, by the study of matrices, which are more concrete objects. This major technique distinguishes linear algebra from theories of other algebraic structures, which usually cannot be parameterized so explicitly.

There is an important distinction between the coordinate n-space Rn and a general finite-dimensional vector space V. While Rn has a standard basis {e1, e2, ..., en}, a vector space V typically does not come equipped with such a basis, and many different bases exist (although they all consist of the same number of elements, equal to the dimension of V).

Matrix theory includes the theory of determinants, a fundamental concept in linear algebra. While determinants could be defined in a basis-free manner, they are usually introduced via a specific representation of the mapping; the value of the determinant does not depend on the specific basis. It turns out that a mapping has an inverse if and only if the determinant has an inverse: every non-zero real or complex number has an inverse, whereas, if we restrict to integers, only 1 and −1 have an inverse, so the inverse of an integer matrix is an integer matrix if and only if the determinant is 1 or −1. If the determinant is zero, then the nullspace is nontrivial. Determinants have other applications, including a systematic way of seeing whether a set of vectors is linearly independent: when the vectors are written as the columns of a matrix, they are independent if and only if the determinant of this matrix is nonzero. Determinants could also be used to solve systems of linear equations (see Cramer's rule), but in practice, Gaussian elimination is a faster method.
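The identification of vectors with coordinate columns, and of similar matrices with one transformation in two bases, can be illustrated as follows (a sketch assuming NumPy; the basis P and the matrix A are invented for the example):
<syntaxhighlight lang="python">
import numpy as np

P = np.array([[1.0, 1.0],
              [0.0, 1.0]])        # columns are a basis {v1, v2} of R^2
v = np.array([3.0, 2.0])
coords = np.linalg.solve(P, v)    # unique coordinates: v = c1*v1 + c2*v2
assert np.allclose(P @ coords, v)

A = np.array([[2.0, 1.0],
              [0.0, 3.0]])        # matrix of a map T in the standard basis
B = np.linalg.inv(P) @ A @ P      # similar matrix: the same T in basis {v1, v2}
# Similar matrices share basis-independent data such as the determinant:
assert np.isclose(np.linalg.det(A), np.linalg.det(B))
</syntaxhighlight>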

Linear systems

Systems of linear equations form a fundamental part of linear algebra. Historically, linear algebra and matrix theory were developed for solving such systems. In the modern presentation of linear algebra through vector spaces and matrices, many problems may be interpreted in terms of linear systems. For example, let
\begin{alignat}{7}
2x &&\; + \;&& y &&\; - \;&& z &&\; = \;&& 8 \\
-3x &&\; - \;&& y &&\; + \;&& 2z &&\; = \;&& -11 \\
-2x &&\; + \;&& y &&\; + \;&& 2z &&\; = \;&& -3
\end{alignat}\qquad \text{(S)}
be a linear system. To such a system, one may associate its matrix
M=\left[\begin{array}{rrr}
2 & 1 & -1 \\
-3 & -1 & 2 \\
-2 & 1 & 2
\end{array}\right]
and the vector of its right-hand sides
v=\begin{bmatrix} 8 \\ -11 \\ -3 \end{bmatrix}.
Let {{mvar|T}} be the linear transformation associated to the matrix {{mvar|M}}. A solution of the system {{math|(S)}} is a vector
X=\begin{bmatrix} x \\ y \\ z \end{bmatrix}
such that
T(X)=v,
that is, an element of the preimage of {{mvar|v}} by {{mvar|T}}.

Let {{math|(S')}} be the associated homogeneous system, where the right-hand sides of the equations are put to zero. The solutions of {{math|(S')}} are exactly the elements of the kernel of {{math|T}} or, equivalently, {{mvar|M}}.

Gaussian elimination consists of performing elementary row operations on the augmented matrix
\left[\begin{array}{rrr|r}
2 & 1 & -1 & 8 \\
-3 & -1 & 2 & -11 \\
-2 & 1 & 2 & -3
\end{array}\right]
to put it in reduced row echelon form. These row operations do not change the set of solutions of the system of equations. In the example, the reduced echelon form is
\left[\begin{array}{rrr|r}
1 & 0 & 0 & 2 \\
0 & 1 & 0 & 3 \\
0 & 0 & 1 & -1
\end{array}\right],
showing that the system {{math|(S)}} has the unique solution
x=2, \quad y=3, \quad z=-1.
It follows from this matrix interpretation of linear systems that the same methods can be applied for solving linear systems and for many operations on matrices and linear transformations, including the computation of ranks, kernels, and matrix inverses.
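For readers who want to verify the worked example, this short sketch (assuming NumPy, whose dense solver is based on an elimination-style LU factorization) solves (S) directly:
<syntaxhighlight lang="python">
import numpy as np

M = np.array([[ 2.0,  1.0, -1.0],
              [-3.0, -1.0,  2.0],
              [-2.0,  1.0,  2.0]])
v = np.array([8.0, -11.0, -3.0])

X = np.linalg.solve(M, v)     # internally an LU (Gaussian-elimination) factorization
print(X)                      # [ 2.  3. -1.]
assert np.allclose(M @ X, v)  # the solution satisfies the system
</syntaxhighlight>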

Eigenvalues and eigenvectors

In general, the action of a linear transformation may be quite complex. Attention to low-dimensional examples gives an indication of the variety of their types. One strategy for a general n-dimensional transformation T is to find "characteristic lines" that are invariant sets under T. If v is a non-zero vector such that Tv is a scalar multiple of v, then the line through 0 and v is an invariant set under T, and v is called a characteristic vector or eigenvector. The scalar λ such that Tv = λv is called a characteristic value or eigenvalue of T. To find an eigenvector or an eigenvalue, one can note that
Tv-\lambda v=(T-\lambda \,\text{I})v=0,
where I is the identity matrix. Thus, there are nontrivial solutions to this equation if and only if the determinant det(T − λI) is zero. As this determinant is a polynomial, the eigenvalues are not guaranteed to exist in the field R of real numbers. Thus, eigenvalues are generally sought in the complex numbers or in an algebraically closed field. The fundamental theorem of algebra and the definition of an algebraically closed field ensure the existence of such eigenvalues. A linear transformation T from a vector space V into itself is particularly nice if there is a basis for V consisting of eigenvectors. In this case, the linear transformation is said to be diagonalizable. Over such a basis, the action of the transformation is particularly simple: if v1, v2, ..., vn is a basis of eigenvectors of the linear transformation T, with respective eigenvalues (not necessarily distinct) λ1, λ2, ..., λn, then, for every vector {{nowrap|1=v = a1v1 + ... + an vn}}, one has
T(a_1 v_1 + \cdots + a_n v_n)=a_1 \lambda_1 v_1 + \cdots + a_n \lambda_n v_n.
Such a transformation is called diagonalizable, since, over the basis of eigenvectors, it is represented by a diagonal matrix. Because operations like matrix multiplication, matrix inversion, and determinant calculation are simple on diagonal matrices, computations involving matrices are much simpler if we can bring the matrix to a diagonal form. Not all matrices are diagonalizable (even over an algebraically closed field); a simple example is the matrix
\begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix},
whose only eigenvectors are the scalar multiples of the vector {{math|(1, 0)}}.
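A brief numerical illustration of diagonalization, together with the defective matrix from the text (a sketch assuming NumPy; the symmetric matrix A is invented for the example):
<syntaxhighlight lang="python">
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])
lam, V = np.linalg.eig(A)            # eigenvalues and eigenvector columns
assert np.allclose(A @ V, V * lam)   # A v_i = lambda_i v_i, column by column
D = np.linalg.inv(V) @ A @ V         # A is diagonal in the eigenvector basis
assert np.allclose(D, np.diag(lam))

N = np.array([[0.0, 1.0],
              [0.0, 0.0]])           # the non-diagonalizable example above
lam_n, V_n = np.linalg.eig(N)
print(lam_n)                         # [0. 0.]: a repeated eigenvalue
print(V_n)                           # both returned columns are parallel to (1, 0)
</syntaxhighlight>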

Inner-product spaces

Besides these basic concepts, linear algebra also studies vector spaces with additional structure, such as an inner product. The inner product is an example of a bilinear form, and it gives the vector space a geometric structure by allowing for the definition of length and angles. Formally, an inner product is a map
\langle \cdot, \cdot \rangle : V \times V \rightarrow F
that satisfies the following three axioms for all vectors u, v, w in V and all scalars a in F:{{Citation|last=Jain|first=P. K.|last2=Ahmad|first2=Khalil|title=Functional analysis|chapter=5.1 Definitions and basic properties of inner product spaces and Hilbert spaces|page=203|edition=2nd|publisher=New Age International|year=1995|isbn=81-224-0801-X}}{{Citation|last=Prugovec̆ki|first=Eduard|title=Quantum mechanics in Hilbert space|at=Definition 2.1, p. 18 ff|edition=2nd|publisher=Academic Press|year=1981|isbn=0-12-566060-X}}
  • Conjugate symmetry:
\langle u,v\rangle =\overline{\langle v,u\rangle}.
Note that in R, it is symmetric.
  • Linearity in the first argument:
\langle au,v\rangle= a \langle u,v\rangle, \quad \langle u+v,w\rangle= \langle u,w\rangle+ \langle v,w\rangle.
  • Positive-definiteness:
\langle v,v\rangle \geq 0, with equality only for v = 0.
We can define the length of a vector v in V by
\|v\|^2=\langle v,v\rangle,
and we can prove the Cauchy–Schwarz inequality:
|\langle u,v\rangle| \leq \|u\| \cdot \|v\|.
In particular, the quantity
\frac{|\langle u,v\rangle|}{\|u\| \cdot \|v\|} \leq 1,
and so we can call this quantity the cosine of the angle between the two vectors.

Two vectors are orthogonal if \langle u, v\rangle =0. An orthonormal basis is a basis in which all basis vectors have length 1 and are orthogonal to each other. Given any finite-dimensional vector space, an orthonormal basis can be found by the Gram–Schmidt procedure. Orthonormal bases are particularly convenient to deal with, since if v = a1 v1 + ... + an vn, then a_i = \langle v,v_i \rangle.

The inner product facilitates the construction of many useful concepts. For instance, given a transform T, we can define its Hermitian conjugate T* as the linear transform satisfying
\langle T u, v \rangle = \langle u, T^* v\rangle.
If T satisfies TT* = T*T, we call T normal. It turns out that normal matrices are precisely the matrices that have an orthonormal system of eigenvectors that span V.
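As one concrete realization of these ideas, here is a minimal Gram–Schmidt sketch (an illustration under stated assumptions, not the article's own code: real vectors, the standard dot product, and a linearly independent input list are assumed):
<syntaxhighlight lang="python">
import numpy as np

def gram_schmidt(vectors):
    """Orthonormalize linearly independent vectors w.r.t. the dot product."""
    basis = []
    for v in vectors:
        w = v - sum(np.dot(v, b) * b for b in basis)  # subtract projections
        basis.append(w / np.linalg.norm(w))           # normalize to length 1
    return basis

vs = [np.array([1.0, 1.0, 0.0]),
      np.array([1.0, 0.0, 1.0]),
      np.array([0.0, 1.0, 1.0])]
onb = gram_schmidt(vs)
G = np.array([[np.dot(a, b) for b in onb] for a in onb])
assert np.allclose(G, np.eye(3))        # orthonormal: Gram matrix is the identity

v = np.array([2.0, -1.0, 3.0])
coeffs = [np.dot(v, b) for b in onb]    # a_i = <v, v_i>, as stated above
assert np.allclose(sum(c * b for c, b in zip(coeffs, onb)), v)
</syntaxhighlight>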

Some useful theorems

  • A matrix is invertible, or non-singular, if and only if the linear map represented by the matrix is an isomorphism.
  • Any vector space over a field F of dimension n is isomorphic to Fn as a vector space over F.
  • Corollary: Any two vector spaces over F of the same finite dimension are isomorphic to each other.
  • A linear map between two vector spaces of the same finite dimension is an isomorphism if and only if its determinant is nonzero.

Applications

Because of the ubiquity of vector spaces, linear algebra is used in many fields of mathematics, natural sciences, computer science, and social science. Below are just some examples of applications of linear algebra.

Least-squares best-fit line

The least squares method is used to determine the best-fit line for a set of data.{{Citation|last=Miller|first=Steven|title=The Method of Least Squares|publisher=Brown University|accessdate=1 May 2013}} This line will minimize the sum of the squares of the residuals.
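A minimal sketch of such a fit (assuming NumPy; the data points are invented for the illustration):
<syntaxhighlight lang="python">
import numpy as np

x = np.array([0.0, 1.0, 2.0, 3.0])
y = np.array([1.1, 1.9, 3.2, 3.8])

A = np.column_stack([x, np.ones_like(x)])       # design matrix [x | 1]
(m, c), *_ = np.linalg.lstsq(A, y, rcond=None)  # minimize ||A [m, c] - y||^2
print(m, c)   # slope and intercept minimizing the sum of squared residuals
</syntaxhighlight>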

Fourier series expansion

Fourier series are a representation of a function f: [−π, π] → R as a trigonometric series:
f(x)=\frac{a_0}{2} + \sum_{n=1}^\infty \, [a_n \cos(nx) + b_n \sin(nx)].
This series expansion is extremely useful in solving partial differential equations. In this article, we will not be concerned with convergence issues; it is worth noting that all Lipschitz-continuous functions have a converging Fourier series expansion, and nice enough discontinuous functions have a Fourier series that converges to the function value at most points.

The space of all functions that can be represented by a Fourier series forms a vector space (technically speaking, we call functions that have the same Fourier series expansion the "same" function, since two different discontinuous functions might have the same Fourier series). Moreover, this space is also an inner product space with the inner product
\langle f,g \rangle= \frac{1}{\pi} \int_{-\pi}^\pi f(x) g(x) \, dx.
The functions gn(x) = sin(nx) for n > 0 and hn(x) = cos(nx) for n ≥ 0 are an orthonormal basis for the space of Fourier-expandable functions. We can thus use the tools of linear algebra to find the expansion of any function in this space in terms of these basis functions. For instance, to find the coefficient ak, we take the inner product with hk:
\langle f,h_k \rangle=\frac{a_0}{2}\langle h_0,h_k \rangle + \sum_{n=1}^\infty \, [a_n \langle h_n,h_k\rangle + b_n \langle g_n,h_k \rangle],
and by orthonormality, \langle f,h_k\rangle=a_k; that is,
a_k = \frac{1}{\pi} \int_{-\pi}^\pi f(x) \cos(kx) \, dx.
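As a numerical check of these coefficient formulas (an illustration assuming NumPy; the grid-based trapezoidal integration is only an approximation), one can compute the coefficients of f(x) = x, for which the exact values are a_k = 0 and b_k = 2(−1)^{k+1}/k:
<syntaxhighlight lang="python">
import numpy as np

x = np.linspace(-np.pi, np.pi, 20001)
f = x                                   # f(x) = x, an odd function

def a(k):                               # (1/pi) * integral of f(x) cos(kx) dx
    return np.trapz(f * np.cos(k * x), x) / np.pi

def b(k):                               # (1/pi) * integral of f(x) sin(kx) dx
    return np.trapz(f * np.sin(k * x), x) / np.pi

for k in range(1, 4):
    # numerical a_k, numerical b_k, exact b_k
    print(k, round(a(k), 6), round(b(k), 6), 2 * (-1) ** (k + 1) / k)
</syntaxhighlight>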

Quantum mechanics

Quantum mechanics is highly inspired by notions in linear algebra. In quantum mechanics, the physical state of a particle is represented by a vector, and observables (such as momentum, energy, and angular momentum) are represented by linear operators on the underlying vector space. More concretely, the wave function of a particle describes its physical state and lies in the vector space L2 (the functions φ: R3 → C such that \int_{-\infty}^\infty \int_{-\infty}^\infty \int_{-\infty}^{\infty} |\phi|^2 \, dx\, dy\, dz is finite), and it evolves according to the Schrödinger equation. Energy is represented as the operator H=-\frac{\hbar^2}{2m} \nabla^2 + V(x,y,z), where V is the potential energy. H is also known as the Hamiltonian operator. The eigenvalues of H represent the possible energies that can be observed. Given a particle in some state φ, we can expand φ into a linear combination of eigenstates of H. The component of φ in each eigenstate determines the probability of measuring the corresponding eigenvalue, and the measurement forces the particle to assume that eigenstate (wave function collapse).
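Although real quantum-mechanical problems live in infinite-dimensional spaces, a standard finite-difference discretization turns the energy eigenvalue problem into ordinary matrix linear algebra. The following sketch is illustrative only (assumptions: NumPy, one spatial dimension, units with ħ = m = 1, a harmonic-oscillator potential, and a truncated grid):
<syntaxhighlight lang="python">
import numpy as np

n, L = 1000, 10.0
x = np.linspace(-L / 2, L / 2, n)
h = x[1] - x[0]
V = 0.5 * x**2                          # harmonic-oscillator potential

# Central-difference approximation of the second derivative d^2/dx^2:
D2 = (np.diag(np.full(n - 1, 1.0), -1)
      - 2.0 * np.eye(n)
      + np.diag(np.full(n - 1, 1.0), 1)) / h**2
H = -0.5 * D2 + np.diag(V)              # discretized Hamiltonian matrix

print(np.linalg.eigvalsh(H)[:4])        # approx. 0.5, 1.5, 2.5, 3.5
</syntaxhighlight>
The lowest eigenvalues of the matrix approximate the exact harmonic-oscillator spectrum E_n = n + 1/2, illustrating how eigenvalues of H correspond to observable energies.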

Geometric introduction

{{cleanup|section|reason=This section uses nonstandard notation, repeats things that appear earlier in the article, and says almost nothing about what should be its main subject, namely the relationship between linear algebra and geometry|date=August 2018}}
Many of the principles and techniques of linear algebra can be seen in the geometry of lines in a real two-dimensional plane E. When formulated using vectors and matrices, the geometry of points and lines in the plane can be extended to the geometry of points and hyperplanes in high-dimensional spaces.

Point coordinates in the plane E are ordered pairs of real numbers, (x, y), and a line is defined as the set of points (x, y) that satisfy the linear equation{{Citation|last=Strang|first=Gilbert|date=July 19, 2005|title=Linear Algebra and Its Applications|publisher=Brooks Cole|edition=4th|isbn=978-0-03-010567-8}}
\lambda: ax+by + c =0,
where a and b are not both zero. Then,
\lambda: \begin{bmatrix} a & b & c\end{bmatrix} \begin{bmatrix} x \\ y \\ 1\end{bmatrix} = 0,
or
A\mathbf{x}=0,
where x = (x, y, 1) is the 3 × 1 column vector of homogeneous coordinates associated with the point (x, y).{{Citation|last=Semple|first=J. G.|last2=Kneebone|first2=G. T.|title=Algebraic Projective Geometry|publisher=Clarendon Press|location=London|year=1952}}

Homogeneous coordinates identify the plane E with the z = 1 plane in three-dimensional space. The x–y coordinates in E are obtained from homogeneous coordinates y = (y1, y2, y3) by dividing by the third component (if it is nonzero) to obtain y = (y1/y3, y2/y3, 1).

The linear equation, λ, has the important property that if x1 and x2 are homogeneous coordinates of points on the line, then the point αx1 + βx2 is also on the line, for any real α and β.

Now consider the equations of the two lines λ1 and λ2,
\lambda_1: a_1 x+b_1 y + c_1 =0,\quad \lambda_2: a_2 x+b_2 y + c_2 =0,
which form a system of linear equations. The intersection of these two lines is defined by the vector x = (x, y, 1) that satisfies the matrix equation
\lambda_{1,2}: \begin{bmatrix} a_1 & b_1 & c_1 \\ a_2 & b_2 & c_2 \end{bmatrix} \begin{bmatrix} x \\ y \\ 1\end{bmatrix} = \begin{bmatrix}0 \\ 0\end{bmatrix},
or, using homogeneous coordinates,
B\mathbf{x}=0.
The point of intersection of these two lines is the unique non-zero solution of these equations. In homogeneous coordinates, the solutions are multiples of the following solution:
x_1 = \begin{vmatrix} b_1 & c_1 \\ b_2 & c_2\end{vmatrix}, \quad x_2 = -\begin{vmatrix} a_1 & c_1 \\ a_2 & c_2\end{vmatrix}, \quad x_3 = \begin{vmatrix} a_1 & b_1 \\ a_2 & b_2\end{vmatrix},
if the rows of B are linearly independent (i.e., λ1 and λ2 represent distinct lines). Dividing through by x3 gives Cramer's rule for the solution of a set of two linear equations in two unknowns.{{Citation|last=Nering|first=E. D.|title=Linear Algebra and Matrix Theory|publisher=John Wiley|location=New York, NY|year=1963}} Notice that this yields a point in the z = 1 plane only when the 2 × 2 submatrix associated with x3 has a non-zero determinant.

It is interesting to consider the case of three lines, λ1, λ2 and λ3, which yield the matrix equation
\lambda_{1,2,3}: \begin{bmatrix} a_1 & b_1 & c_1 \\ a_2 & b_2 & c_2 \\ a_3 & b_3 & c_3\end{bmatrix} \begin{bmatrix} x \\ y \\ 1\end{bmatrix} = \begin{bmatrix}0 \\ 0 \\ 0\end{bmatrix},
which in homogeneous form yields
C\mathbf{x}=0.
Clearly, this equation has the solution x = (0, 0, 0), which is not a point on the z = 1 plane E. For a solution to exist in the plane E, the coefficient matrix C must have rank 2, which means its determinant must be zero. Another way to say this is that the columns of the matrix must be linearly dependent.
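In homogeneous coordinates, the three determinant formulas above are exactly the components of the cross product of the two coefficient rows, so intersecting two lines reduces to a single vector operation. A small sketch (assuming NumPy; the two lines are invented for the example):
<syntaxhighlight lang="python">
import numpy as np

l1 = np.array([1.0, -1.0, 0.0])   # the line x - y = 0
l2 = np.array([1.0,  1.0, -2.0])  # the line x + y - 2 = 0

p = np.cross(l1, l2)              # homogeneous coordinates of the intersection
p = p / p[2]                      # scale the third coordinate to 1
print(p[:2])                      # [1. 1.]: the lines meet at (1, 1)
</syntaxhighlight>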

Generalizations and related topics

Since linear algebra is a successful theory, its methods have been developed and generalized in other parts of mathematics. In module theory, one replaces the field of scalars by a ring. The concepts of linear independence, span, basis, and dimension (which is called rank in module theory) still make sense. Nevertheless, many theorems from linear algebra become false in module theory. For instance, not all modules have a basis (those that do are called free modules), the rank of a free module is not necessarily unique, not every linearly independent subset of a module can be extended to form a basis, and not every subset of a module that spans the space contains a basis.

In multilinear algebra, one considers multivariable linear transformations, that is, mappings that are linear in each of a number of different variables. This line of inquiry naturally leads to the idea of the dual space, the vector space V∗ consisting of linear maps {{nowrap|f: V → F}} where F is the field of scalars. Multilinear maps {{nowrap|T: Vn → F}} can be described via tensor products of elements of V∗.

If, in addition to vector addition and scalar multiplication, there is a bilinear vector product {{nowrap|V × V → V}}, the vector space is called an algebra; for instance, associative algebras are algebras with an associative vector product (like the algebra of square matrices, or the algebra of polynomials).

Functional analysis mixes the methods of linear algebra with those of mathematical analysis and studies various function spaces, such as Lp spaces.

Representation theory studies the actions of algebraic objects on vector spaces by representing these objects as matrices. It is interested in all the ways in which this is possible, and it does so by finding subspaces invariant under all transformations of the algebra. The concept of eigenvalues and eigenvectors is especially important.

Algebraic geometry considers the solutions of systems of polynomial equations.

Several related topics in computer programming make use of many of the techniques and theorems that linear algebra encompasses.

See also

Notes

{{reflist|30em}}{{reflist|group=nb}}

Further reading

History

  • Fearnley-Sander, Desmond, "Hermann Grassmann and the Creation of Linear Algebra", American Mathematical Monthly 86 (1979), pp. 809–817.
  • {{Citation|last=Grassmann|first= Hermann|authorlink=Hermann Grassmann| title=Die lineale Ausdehnungslehre ein neuer Zweig der Mathematik: dargestellt und durch Anwendungen auf die übrigen Zweige der Mathematik, wie auch auf die Statik, Mechanik, die Lehre vom Magnetismus und die Krystallonomie erläutert|publisher= O. Wigand|location= Leipzig|year= 1844}}

Introductory textbooks

  • {{Citation | last = Banerjee | first = Sudipto | last2 = Roy | first2 = Anindya | date = 2014 | title = Linear Algebra and Matrix Analysis for Statistics | series = Texts in Statistical Science | publisher = Chapman and Hall/CRC | edition = 1st | isbn = 978-1420095388}}
  • {{Citation|last=Strang|first=Gilbert|authorlink=Gilbert Strang|date=May 2016|title=Introduction to Linear Algebra|publisher=Wellesley-Cambridge Press|edition=5th|isbn=978-09802327-7-6}}
  • Murty, Katta G. (2014) Computational and Algorithmic Linear Algebra and n-Dimensional Geometry, World Scientific Publishing, {{isbn|978-981-4366-62-5}}. Chapter 1: Systems of Simultaneous Linear Equations
  • {{Citation|last=Bretscher|first=Otto|date=June 28, 2004|title=Linear Algebra with Applications|publisher=Prentice Hall|edition=3rd|isbn=978-0-13-145334-0}}
  • {{Citation|last=Farin|first=Gerald|last2=Hansford|first2=Dianne|date=December 15, 2004|title=Practical Linear Algebra: A Geometry Toolbox|publisher=AK Peters|isbn=978-1-56881-234-2}}
  • {{Citation|last=Hefferon|first=Jim|year=2008|title=Linear Algebra|url=http://joshua.smcvt.edu/linearalgebra/}}
  • {{Citation|last=Anton|first=Howard|year=2005|title=Elementary Linear Algebra (Applications Version)|publisher=Wiley International|edition=9th}}
  • {{Citation|last=Lay|first=David C.|date=August 22, 2005|title=Linear Algebra and Its Applications|publisher=Addison Wesley|edition=3rd|isbn=978-0-321-28713-7}}
  • {{Citation|last=Kolman|first=Bernard|last2=Hill|first2=David R.|date=May 3, 2007|title=Elementary Linear Algebra with Applications|publisher=Prentice Hall|edition=9th|isbn=978-0-13-229654-0}}
  • {{Citation|last=Leon|first=Steven J.|year=2006|title=Linear Algebra With Applications|publisher=Pearson Prentice Hall|edition=7th|isbn=978-0-13-185785-8}}
  • {{Citation|last=Poole|first=David|year=2010|title=Linear Algebra: A Modern Introduction|publisher=Cengage – Brooks/Cole|edition=3rd|isbn=978-0-538-73545-2}}
  • {{Citation|last=Ricardo|first=Henry|year=2010|title=A Modern Introduction To Linear Algebra|publisher=CRC Press|edition=1st|isbn=978-1-4398-0040-9}}
  • {{Citation|last=Sadun|first=Lorenzo|year=2008|title=Applied Linear Algebra: the decoupling principle|publisher=AMS|edition=2nd|isbn=978-0-8218-4441-0}}

Advanced textbooks

  • {{Citation|last=Axler|first=Sheldon|authorlink=Sheldon Axler|date=February 26, 2004|title=Linear Algebra Done Right|publisher=Springer|edition=2nd|isbn=978-0-387-98258-8}}
  • {{Citation|last=Bhatia|first=Rajendra|date=November 15, 1996|title=Matrix Analysis|series=Graduate Texts in Mathematics|publisher=Springer|isbn=978-0-387-94846-1}}
  • {{Citation|last=Demmel|first=James W.|authorlink=James Demmel|date=August 1, 1997|title=Applied Numerical Linear Algebra|publisher=SIAM|isbn=978-0-89871-389-3}}
  • {{Citation|last=Dym|first=Harry|year=2007|title=Linear Algebra in Action|publisher=AMS|isbn=978-0-8218-3813-6}}
  • {{Citation|last=Gantmacher|first=Felix R.|authorlink = Felix Gantmacher|date=2005|title=Applications of the Theory of Matrices|publisher=Dover Publications|isbn=978-0-486-44554-0}}
  • {{Citation|last=Gantmacher|first=Felix R.|year=1990|title=Matrix Theory Vol. 1|publisher=American Mathematical Society|edition=2nd|isbn=978-0-8218-1376-8}}
  • {{Citation|last=Gantmacher|first=Felix R.|year=2000|title=Matrix Theory Vol. 2|publisher=American Mathematical Society|edition=2nd|isbn=978-0-8218-2664-5}}
  • {{Citation|last=Gelfand|first=Israel M.|authorlink = Israel Gelfand|year=1989|title=Lectures on Linear Algebra|publisher=Dover Publications|isbn=978-0-486-66082-0}}
  • {{Citation|last=Glazman|first=I. M.|last2=Ljubic|first2=Ju. I.|year=2006|title=Finite-Dimensional Linear Analysis|publisher=Dover Publications|isbn= 978-0-486-45332-3}}
  • {{Citation|last=Golan|first=Johnathan S.|date=January 2007|title=The Linear Algebra a Beginning Graduate Student Ought to Know|publisher=Springer|edition=2nd|isbn=978-1-4020-5494-5}}
  • {{Citation|last=Golan|first=Johnathan S.|date=August 1995|title=Foundations of Linear Algebra|publisher=Kluwer |isbn=0-7923-3614-3}}
  • {{Citation|last=Golub|first=Gene H.|last2=Van Loan|first2=Charles F.|date=October 15, 1996|title=Matrix Computations|series=Johns Hopkins Studies in Mathematical Sciences|publisher=The Johns Hopkins University Press|edition=3rd|isbn=978-0-8018-5414-9}}
  • {{Citation|last=Greub|first=Werner H.|date=October 16, 1981|title=Linear Algebra|series=Graduate Texts in Mathematics|publisher=Springer|edition=4th|isbn=978-0-8018-5414-9}}
  • {{Citation|last1=Hoffman|first1=Kenneth|last2=Kunze|first2=Ray|author2-link=Ray Kunze|edition=2nd|location=Englewood Cliffs, N.J.|mr=0276251|publisher=Prentice-Hall, Inc.|title=Linear algebra|year=1971}}
  • {{Citation|last=Halmos|first=Paul R.|authorlink = Paul Halmos|date=August 20, 1993|title=Finite-Dimensional Vector Spaces|series=Undergraduate Texts in Mathematics|publisher=Springer|isbn=978-0-387-90093-3}}
  • {{Citation|last=Friedberg|first=Stephen H.|last2=Insel|first2=Arnold J.|last3=Spence|first3=Lawrence E.|date=November 11, 2002|title=Linear Algebra|publisher=Prentice Hall|edition=4th|isbn=978-0-13-008451-4}}
  • {{Citation|last=Horn|first=Roger A.|last2=Johnson|first2=Charles R.|date=February 23, 1990|title=Matrix Analysis|publisher=Cambridge University Press|isbn=978-0-521-38632-6}}
  • {{Citation|last1=Horn|first1=Roger A.|last2=Johnson|first2=Charles R.|date=June 24, 1994|title=Topics in Matrix Analysis|publisher=Cambridge University Press|isbn=978-0-521-46713-1}}
  • {{Citation|last=Lang|first=Serge|date=March 9, 2004|title=Linear Algebra|series=Undergraduate Texts in Mathematics|edition=3rd|publisher=Springer|isbn=978-0-387-96412-6}}
  • {{Citation|last1=Marcus|first1=Marvin|last2=Minc|first2=Henryk|year=2010|title=A Survey of Matrix Theory and Matrix Inequalities|publisher=Dover Publications|isbn=978-0-486-67102-4}}
  • {{Citation|last=Meyer|first=Carl D.|date=February 15, 2001|title=Matrix Analysis and Applied Linear Algebra|publisher=Society for Industrial and Applied Mathematics (SIAM)|isbn=978-0-89871-454-8|url=http://www.matrixanalysis.com/DownloadChapters.html}}
  • {{Citation|last1=Mirsky|first1=L.|authorlink=Leon Mirsky|year=1990|title=An Introduction to Linear Algebra|publisher= Dover Publications|isbn=978-0-486-66434-7}}
  • {{Citation|last=Roman|first=Steven|date=March 22, 2005|title=Advanced Linear Algebra|edition=2nd|series=Graduate Texts in Mathematics|publisher=Springer|isbn=978-0-387-24766-3}}
  • {{Citation|last1=Shafarevich|first1=I. R.|authorlink1=Igor Shafarevich|first2=A. O.|last2=Remizov|title=Linear Algebra and Geometry|publisher=Springer|year=2012|isbn=978-3-642-30993-9}}
  • {{Citation|last=Shilov|first=Georgi E.|authorlink = Georgiy Shilov|date=June 1, 1977|publisher=Dover Publications|isbn=978-0-486-63518-7|title=Linear algebra}}
  • {{Citation|last=Shores|first=Thomas S.|date=December 6, 2006|title=Applied Linear Algebra and Matrix Analysis|series=Undergraduate Texts in Mathematics|publisher=Springer|isbn=978-0-387-33194-2}}
  • {{Citation|last=Smith|first=Larry|date=May 28, 1998|title=Linear Algebra|series=Undergraduate Texts in Mathematics|publisher=Springer|isbn=978-0-387-98455-1}}
  • {{Citation|last=Trefethen|first=Lloyd N.|last2=Bau|first2=David|date=1997|title=Numerical Linear Algebra|publisher=SIAM|isbn=978-0-898-71361-9}}

Study guides and outlines

  • {{Citation|last=Leduc|first=Steven A.|date=May 1, 1996|title=Linear Algebra (Cliffs Quick Review)|publisher=Cliffs Notes|isbn=978-0-8220-5331-6}}
  • {{Citation|last=Lipschutz|first=Seymour|last2=Lipson|first2=Marc|date=December 6, 2000|title=Schaum's Outline of Linear Algebra|publisher=McGraw-Hill|edition=3rd|isbn=978-0-07-136200-9}}
  • {{Citation|last=Lipschutz|first=Seymour|date=January 1, 1989|title=3,000 Solved Problems in Linear Algebra|publisher=McGraw–Hill|isbn=978-0-07-038023-3}}
  • {{Citation|last=McMahon|first=David|date=October 28, 2005|title=Linear Algebra Demystified|publisher=McGraw–Hill Professional|isbn=978-0-07-146579-3}}
  • {{Citation|last=Zhang|first=Fuzhen|date=April 7, 2009|title=Linear Algebra: Challenging Problems for Students|publisher=The Johns Hopkins University Press|isbn=978-0-8018-9125-0}}

External links


{{Linear algebra}}{{Areas of mathematics}}{{Authority control}}
