Linear Algebra List of Theorems and Definitions

Course: Lineaire algebra (2DBN00), academic year 2017/2018

Linear Algebra: Theorems, Concepts and Definitions

1 – Systems of Linear Equations

Definition Two systems of equations involving the same variables are said to be equivalent if they have the same solution set.

Definition A system is said to be in strict triangular form if, in the kth equation, the coefficients of the first k − 1 variables are all zero and the coefficient of xk is nonzero (k = 1, ..., n).

Concept Elementary Row Operations
I. Interchange two rows.
II. Multiply a row by a nonzero real number.
III. Replace a row by its sum with a multiple of another row.

Chapter 1
1 – Systems of Linear Equations: exercises 1ab, 2ab, 3, 4d, 5b, 6cfh, 7, 8, 9, 10, 11
2 – Row Echelon Form: exercises 1, 2abcd, 3abcd, 5hijk, 6, 7, 8, 9, 10, 12
3 – Matrix Arithmetic: exercises 1efgh, 2abc, 4c, 7b, 8b, 9, 10, 11, 12, 13, 16, 18
4 – Matrix Algebra: exercises 1, 2, 3, 7, 8, 9, 10abcd, 11abcd, 12, 13a, 14, 16, 19, 20, 21, 28, 30
5 – Elementary Matrices until Triangular Factorization: exercises 1, 2, 3, 4, 6, 8ac, 10gh, 12, 15, 18

Chapter 2
1 – The Determinant of a Matrix: exercises 3efgh, 4
2 – Properties of Determinants: exercises 1, 2, 3f, 4, 5, 6, 7, 9, 11, 12, 13

Chapter 3
1 – Definition and Examples: exercises 3, 7, 8, 9
2 – Subspaces: exercises 1abc, 2, 3defg, 4cd, 5, 6abc, 8, 9, 12abc, 13, 14, 19, 24, 25
3 – Linear Independence: exercises 2abc, 3abc, 4, 6, 7, 8, 9, 10, 12, 14, 15, 16, 17, 19, 20
4 – Basis and Dimension until The Vector Space C(n-1)[a,b]: exercises 5, 7, 8, 9, 11, 13, 14, 15
5 – Change of Basis until Example 5: exercises 1bc, 4, 5, 7, 10, 11
6 – Row Space and Column Space: exercises 1ab, 2ab, 3, 4def, 6, 7, 9, 10, 11, 13, 14, 15, 16, 17, 18, 19, 20, 22, 27, 28

Chapter 5
1 – The Scalar Product in Rn: exercises 1, 2, 3d, 5, 6, 8ab, 9, 10, 11, 13, 15
2 – Orthogonal Subspaces: exercises 1cd, 2, 3, 5, 6, 7, 9, 13, 15, 16
3 – Least Squares Problems: exercises 1bc, 2bc, 3, 4, 5, 6, 7, 9, 11, 14
4 – Inner Product Spaces: exercises 2, 4, 7, 8, 13, 14, 15a, 17, 18, 20, 23, 25, 28, 29, 30, 33
5 – Orthonormal Sets until Approximation of Functions: exercises 1cd, 2, 5, 6, 7, 8, 11, 12, 14, 15, 19, 21, 22, 23, 26, 28, 30
6 – The Gram-Schmidt Orthogonalization Procedure: exercises 1, 2, 3, 4, 5, 8, 9, 12

Chapter 6
1 – Eigenvalues and Eigenvectors without The Product and Sum of the Eigenvalues: exercises 1ghij, 2, 3, 4, 6, 9, 11, 12, 13, 14, 16, 18, 21, 26, 29, 30, 32, 33
3 – Diagonalization until Application 1: Markov Chains: exercises 1def, 2def, 3def, 6, 8aceg, 9, 11, 17, 18, 31, 32ab, 33, 35
4 – Hermitian Matrices until Normal Matrices: exercises 1b, 2, 3, 4ef, 5defg, 7, 12, 13, 16, 17, 22, 26, 27

1 – Row Echelon Form

Concept Lead variables are the variables corresponding to the first nonzero elements in each row of the reduced matrix. Free variables are the remaining variables.

Definition A matrix is said to be in row echelon form if

(i) The first nonzero entry in each nonzero row is 1.
(ii) If row k does not consist entirely of zeroes, the number of leading zero entries in row k + 1 is greater than the number of leading zero entries in row k.
(iii) If there are rows whose entries are all zero, they are below the rows having nonzero entries.

Definition The process of using row operations I, II and III to transform a linear system into one whose augmented matrix is in row echelon form is called Gaussian elimination.
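Gaussian elimination as described above can be sketched numerically. Below is a minimal illustration, assuming NumPy is available; the function name and the use of partial pivoting (row operation I with the largest pivot) are my own choices for the sketch, not from the course.

```python
import numpy as np

def gaussian_elimination(A, b):
    """Solve Ax = b by reducing to strict triangular form, then back
    substitution. Assumes A is square and nonsingular."""
    A = A.astype(float)
    b = b.astype(float)
    n = len(b)
    for k in range(n - 1):
        # Row operation I: swap in the largest pivot for stability.
        p = k + np.argmax(np.abs(A[k:, k]))
        A[[k, p]] = A[[p, k]]
        b[[k, p]] = b[[p, k]]
        for i in range(k + 1, n):
            # Row operation III: subtract a multiple of row k from row i.
            m = A[i, k] / A[k, k]
            A[i, k:] -= m * A[k, k:]
            b[i] -= m * b[k]
    # Back substitution on the triangular system.
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):
        x[i] = (b[i] - A[i, i + 1:] @ x[i + 1:]) / A[i, i]
    return x
```

For example, `gaussian_elimination(np.array([[2., 1.], [1., 3.]]), np.array([3., 5.]))` solves the system 2x + y = 3, x + 3y = 5.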

Concept A system is said to be overdetermined if there are more equations than unknowns. These systems are usually (but not always) inconsistent. A system is said to be underdetermined if there are fewer equations than unknowns (m < n). These systems are usually (but not always) consistent with infinitely many solutions. These systems cannot have a single unique solution.

Definition A matrix is said to be in reduced row echelon form if

(i) The matrix is in row echelon form.
(ii) The first nonzero entry in each row is the only nonzero entry in its column.

Concept A system of linear equations is said to be homogeneous if the constants on the right-hand side are all zero. These systems are always consistent.

Theorem An m x n homogeneous system of linear equations has a nontrivial solution if n > m.

1 – Matrix Arithmetic

Concept The entries of a matrix are called scalars, and they can either be real or complex numbers.

Definition Two m x n matrices A and B are said to be equal if aij = bij for each i and j.

Definition If A is an m x n matrix and α is a scalar, then αA is the m x n matrix whose (i, j) entry is αaij.

Definition If A = (aij) and B = (bij) are both m x n matrices, then the sum A + B is the m x n matrix whose (i, j) entry is aij + bij for each ordered pair (i, j).

Definition An n x n matrix A is said to be nonsingular or invertible if there exists a matrix B such that AB = BA = I. The matrix B is said to be a multiplicative inverse of A. A matrix can have at most one multiplicative inverse. The multiplicative inverse of a matrix A is denoted A-1. Only square matrices can have a multiplicative inverse.

Definition An n x n matrix is said to be singular if it does not have a multiplicative inverse.

Theorem If A and B are nonsingular n x n matrices, then AB is also nonsingular and (AB)-1 = B-1A-1.

Concept Algebraic rules for transposes:

  1. (AT)T = A
  2. (αA)T = αAT
  3. (A + B)T = AT + BT
  4. (AB)T = BTAT
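These transpose rules are easy to check numerically. A small sketch, assuming NumPy is available; the matrix shapes are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 4))
B = rng.standard_normal((4, 2))
C = rng.standard_normal((3, 4))

# Rule 4: the transpose of a product reverses the order of the factors.
assert np.allclose((A @ B).T, B.T @ A.T)
# Rule 3: transposition distributes over sums of same-shaped matrices.
assert np.allclose((A + C).T, A.T + C.T)
```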

1 – Elementary Matrices

Concept If exactly one elementary row operation is applied to the identity matrix, the resulting matrix is called an elementary matrix. Three types of elementary matrices:

I An elementary matrix of type I is obtained by interchanging two rows of I.
II An elementary matrix of type II is obtained by multiplying a row of I by a nonzero constant.
III An elementary matrix of type III is obtained from I by adding a multiple of one row to another row.

If A is an n x r matrix and E is an n x n elementary matrix, premultiplying A by E has the effect of performing that same row operation on A. If B is an m x n matrix, postmultiplying B by E is equivalent to performing that same column operation on B.
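A quick numerical illustration of pre- and postmultiplication by a type III elementary matrix; a sketch assuming NumPy, with arbitrarily chosen matrices:

```python
import numpy as np

A = np.arange(12.0).reshape(3, 4)

# Type III elementary matrix: add 2 x (row 0) to row 2 of the identity.
E = np.eye(3)
E[2, 0] = 2.0

# Premultiplying A by E performs the same row operation on A:
# row 2 of E @ A equals (row 2 of A) + 2 x (row 0 of A).
assert np.allclose((E @ A)[2], A[2] + 2 * A[0])

# Postmultiplying B by E performs the corresponding column operation:
# column 0 of B @ E equals (column 0 of B) + 2 x (column 2 of B).
B = np.arange(6.0).reshape(2, 3)
assert np.allclose((B @ E)[:, 0], B[:, 0] + 2 * B[:, 2])
```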

Theorem If E is an elementary matrix, then E is nonsingular and E-1 is an elementary matrix of the same type.

Definition A matrix B is row equivalent to a matrix A if there exists a finite sequence E1, E2, ..., Ek of elementary matrices such that B = EkEk-1···E1A.

Theorem Let A be an n x n matrix. The following are equivalent:

(a) A is nonsingular
(b) Ax = 0 has only the trivial solution 0
(c) A is row equivalent to I

Corollary The system Ax = b of n linear equations in n unknowns has a unique solution if and only if A is nonsingular.

Concept An n x n matrix is said to be upper triangular if aij = 0 for i > j and lower triangular if aij = 0 for i < j. Also, A is said to be triangular if it is either upper or lower triangular. An n x n matrix is diagonal if aij = 0 whenever i ≠ j.

2 – The Determinant of a Matrix

Definition Let A = (aij) be an n x n matrix and let Mij denote the (n-1) x (n-1) matrix obtained from A by deleting the row and column containing aij. The determinant of Mij is called the minor of aij. We define the cofactor Aij of aij by

Aij = (-1)i + j det(Mij)

Definition The determinant of an n x n matrix A, denoted det(A), is a scalar associated with the matrix A that is defined inductively as

det(A) = a11 if n = 1

det(A) = a11A11 + a12A12 + ... + a1nA1n if n > 1

where

A1j = (-1)1+j det(M1j), j = 1, ..., n

are the cofactors associated with the entries in the first row of A.

Theorem If A is an n x n matrix with n ≥ 2, then det(A) can be expressed as a cofactor expansion using any row or column of A.

Theorem If A is an n x n matrix, then det(AT) = det(A).

Theorem If A is an n x n triangular matrix, then the determinant of A equals the product of the diagonal elements of A.
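The inductive cofactor definition and the triangular-determinant theorem can be illustrated with a small sketch, assuming NumPy is available; `det_cofactor` is a helper written here for illustration only (its cost is exponential, so it is useful only for tiny matrices):

```python
import numpy as np

def det_cofactor(A):
    """Determinant by cofactor expansion along the first row, following
    the inductive definition. For demonstration only (exponential cost)."""
    n = A.shape[0]
    if n == 1:
        return A[0, 0]
    total = 0.0
    for j in range(n):
        # Minor M_1j: delete row 0 and column j.
        M1j = np.delete(np.delete(A, 0, axis=0), j, axis=1)
        # With 0-based j, the cofactor sign (-1)^(1+j) becomes (-1)^j.
        total += (-1) ** j * A[0, j] * det_cofactor(M1j)
    return total

# For a triangular matrix, det(A) is the product of the diagonal entries.
T = np.triu(np.arange(1.0, 10.0).reshape(3, 3))  # diagonal 1, 5, 9
assert np.isclose(det_cofactor(T), 1 * 5 * 9)
assert np.isclose(det_cofactor(T), np.linalg.det(T))
```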

Theorem Let A be an n x n matrix.

(i) If A has a row or column consisting entirely of zeroes, then det(A) = 0.
(ii) If A has two identical rows or two identical columns, then det(A) = 0.

3 – Definition and Examples

Concept Vector space axioms A6–A8:
A6. (α + β)x = αx + βx for any scalars α and β and any x ∈ V
A7. (αβ)x = α(βx) for any scalars α and β and any x ∈ V
A8. 1x = x for all x ∈ V

Concept Let C[a, b] denote the set of all real-valued functions that are defined and continuous on the closed interval [a, b].

Concept Let Pn denote the set of all polynomials of degree less than n.

Theorem If V is a vector space and x is any element of V, then

(i) 0x = 0
(ii) x + y = 0 implies that y = -x (i.e., the additive inverse of x is unique)
(iii) (-1)x = -x

3 – Subspaces

Definition If S is a nonempty subset of a vector space V, and S satisfies the conditions

(i) αx ∈ S whenever x ∈ S for any scalar α (ii) x + y ∈ S whenever x ∈ S and y ∈ S

then S is said to be a subspace of V. A subspace of V, then, is a subset S that is closed under the operations of V. Every subspace of a vector space is a vector space in its own right.

In a vector space V, it can be readily verified that { 0 } and V are subspaces of V. All other subspaces are referred to as proper subspaces. The subspace { 0 } is referred to as the zero subspace. To show that a subset S of a vector space forms a subspace, it must be shown that S is nonempty and that the closure properties (i) and (ii) are satisfied. Since every subspace must contain the zero vector, we can verify that S is nonempty by showing that 0 ∈ S.

Definition Let A be an m x n matrix. Let N(A) denote the set of all solutions to the homogeneous system Ax = 0, thus:

N(A) = {x ∈ ℝn | Ax = 0}

The set of all solutions of the homogeneous system Ax = 0 forms a subspace of ℝn. The subspace N(A) is called the null space of A.
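One standard way to compute a basis for the null space numerically is via the singular value decomposition; a sketch assuming NumPy is available (the helper name `null_space` and the tolerance are illustrative choices):

```python
import numpy as np

def null_space(A, tol=1e-12):
    """Orthonormal basis for N(A): the right singular vectors whose
    singular values are (numerically) zero span the null space."""
    _, s, Vt = np.linalg.svd(A)
    rank = int(np.sum(s > tol))
    return Vt[rank:].T  # columns form a basis for N(A)

A = np.array([[1., 2., 3.],
              [2., 4., 6.]])   # rank 1, so N(A) has dimension 3 - 1 = 2
N = null_space(A)
assert N.shape == (3, 2)
assert np.allclose(A @ N, 0)   # every basis vector solves Ax = 0
```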

Definition Let v1, v2, ..., vn be vectors in a vector space V. A sum of the form α1v1 + α2v2 + ... + αnvn, where α1, ..., αn are scalars, is called a linear combination of v1, v2, ..., vn. The set of all linear combinations of v1, v2, ..., vn is called the span of v1, v2, ..., vn and will be denoted Span(v1, v2, ..., vn).

Theorem If v1, v2, ..., vn are elements of a vector space V, then Span(v1, v2, ..., vn) is a subspace of V.

Definition The set { v1, v2, ..., vn } is a spanning set for V if and only if every vector in V can be written as a linear combination of v1, v2, ..., vn.

Theorem If the linear system Ax = b is consistent and x0 is a particular solution, then a vector y will also be a solution if and only if y = x0 + z, where z ∈ N(A).

3 – Linear Independence

Concept
I If v1, v2, ..., vn span a vector space V and one of these vectors can be written as a linear combination of the other n – 1 vectors, then those n – 1 vectors span V.
II Given n vectors v1, v2, ..., vn, it is possible to write one of the vectors as a linear combination of the other n – 1 vectors if and only if there exist scalars c1, c2, ..., cn, not all zero, such that

c1v1 + c2v2 + ... + cnvn = 0

Definition The vectors v1, v2, ..., vn in a vector space V are said to be linearly independent if

c1v1 + c2v2 + ... + cnvn = 0

implies that all the scalars c1, c2, ..., cn must equal 0. The vectors v1, v2, ..., vn are said to be linearly dependent if there exist scalars c1, c2, ..., cn, not all zero, such that

c1v1 + c2v2 + ... + cnvn = 0

Theorem Let x1, x2, ..., xn be n vectors in ℝn and let X = (x1, x2, ..., xn). The vectors x1, x2, ..., xn will be linearly dependent if and only if X is singular.
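A numerical illustration of this equivalence, assuming NumPy is available; the vectors are arbitrary:

```python
import numpy as np

# x3 = x1 + x2, so the columns of X are linearly dependent and X is singular.
x1 = np.array([1., 0., 1.])
x2 = np.array([0., 1., 1.])
x3 = x1 + x2
X = np.column_stack([x1, x2, x3])
assert np.isclose(np.linalg.det(X), 0.0)

# Replacing x3 by a vector outside Span(x1, x2) makes X nonsingular.
X[:, 2] = [0., 0., 1.]
assert not np.isclose(np.linalg.det(X), 0.0)
```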

Theorem Let v1, v2, ..., vn be vectors in a vector space V. A vector v ∈ Span(v1, ..., vn) can be written uniquely as a linear combination of v1, ..., vn if and only if v1, ..., vn are linearly independent.

3 – Basis and Dimension

Definition The vectors v1, v2, ..., vn form a basis for a vector space V if and only if

(i) v1, ..., vn are linearly independent
(ii) v1, ..., vn span V

Theorem If { v1, v2, ..., vn } is a spanning set for a vector space V, then any collection of m vectors in V, where m > n, is linearly dependent.

Theorem Let A be an m x n matrix. The linear system Ax = b is consistent for every b ∈ ℝm if and only if the column vectors of A span ℝm. The system Ax = b has at most one solution for every b ∈ ℝm if and only if the column vectors of A are linearly independent.

Corollary An n x n matrix A is nonsingular if and only if the column vectors of A form a basis for ℝn.

Concept The dimension of the null space of a matrix is called the nullity of the matrix.

Theorem If A is an m x n matrix, then the rank of A plus the nullity of A equals n.
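The rank-nullity relation can be checked numerically; a sketch assuming NumPy is available, where singular values below a small tolerance are treated as zero:

```python
import numpy as np

A = np.array([[1., 2., 0., 1.],
              [0., 0., 1., 1.],
              [1., 2., 1., 2.]])   # row 3 = row 1 + row 2, so rank 2

rank = np.linalg.matrix_rank(A)
_, s, _ = np.linalg.svd(A)
nullity = A.shape[1] - int(np.sum(s > 1e-12))

# Rank-nullity: rank(A) + nullity(A) = n (the number of columns).
assert rank + nullity == A.shape[1]
```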

Theorem If A is an m x n matrix, the dimension of the row space of A equals the dimension of the column space of A.

5 – The Scalar Product in Rn

Definition Let x and y be vectors in either ℝ2 or ℝ3. The distance between x and y is defined to be the number ‖x − y‖.

Theorem If x and y are two nonzero vectors in either ℝ2 or ℝ3 and θ is the angle between them, then xTy = ‖x‖‖y‖ cos θ.

Corollary The Cauchy-Schwarz Inequality states that if x and y are vectors in either ℝ2 or ℝ3, then

|xTy| ≤ ‖x‖‖y‖

with equality holding if and only if one of the vectors is 0 or one vector is a multiple of the other.

Definition The vectors x and y in ℝ2 or ℝ3 are said to be orthogonal if xTy = 0.

Concept Scalar projection of x onto y:

α = xTy / ‖y‖

Vector projection of x onto y:

p = αu = α(y / ‖y‖) = (xTy / yTy) y
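A worked numerical example of these projection formulas, assuming NumPy is available; the vectors are arbitrary:

```python
import numpy as np

x = np.array([3., 4.])
y = np.array([1., 0.])

alpha = (x @ y) / np.linalg.norm(y)   # scalar projection of x onto y
p = (x @ y) / (y @ y) * y             # vector projection of x onto y

assert np.isclose(alpha, 3.0)
assert np.allclose(p, [3., 0.])
# The residual x - p is orthogonal to y.
assert np.isclose((x - p) @ y, 0.0)
```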

5 – Orthogonal Subspaces

Definition Two subspaces X and Y of ℝn are said to be orthogonal if xTy = 0 for every x ∈ X and every y ∈ Y. If X and Y are orthogonal, we write X ⊥ Y.

Definition Let Y be a subspace of ℝn. The set of all vectors in ℝn that are orthogonal to every vector in Y will be denoted Y⊥. Thus,

Y⊥ = {x ∈ ℝn | xTy = 0 for every y ∈ Y}

The set Y⊥ is called the orthogonal complement of Y. If X and Y are orthogonal subspaces of ℝn, then X ∩ Y = { 0 }. If Y is a subspace of ℝn, then Y⊥ is also a subspace of ℝn.

Theorem If A is an m x n matrix, then N(A) = R(AT)⊥ and N(AT) = R(A)⊥.

Theorem If S is a subspace of ℝn, then dim S + dim S⊥ = n. Furthermore, if { x1, ..., xr } is a basis for S and { xr+1, ..., xn } is a basis for S⊥, then { x1, ..., xr, xr+1, ..., xn } is a basis for ℝn.

Definition If U and V are subspaces of a vector space W and each w ∈ W can be written uniquely as a sum u + v, where u ∈ U and v ∈ V, then we say that W is a direct sum of U and V, and we write W = U ⊕ V.

Theorem If S is a subspace of ℝn, then

ℝn = S ⊕ S⊥

Theorem If S is a subspace of ℝn, then

(S⊥)⊥ = S

5 – Least Squares Problems

Theorem Let S be a subspace of ℝm. For each b ∈ ℝm, there is a unique element p of S that is closest to b; that is,

‖b − y‖ > ‖b − p‖

for any y ≠ p in S. Furthermore, a given vector p in S will be closest to a given vector b ∈ ℝm if and only if b − p ∈ S⊥.

Theorem If A is an m x n matrix of rank n, the normal equations ATAx = ATb have a unique solution x̂ = (ATA)-1ATb, and x̂ is the unique least squares solution of the system Ax = b.
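A small least squares example contrasting the normal equations with NumPy's built-in solver; the data points are made up for illustration:

```python
import numpy as np

# Overdetermined system: fit a line c0 + c1*t through four points.
t = np.array([0., 1., 2., 3.])
b = np.array([1., 2.1, 2.9, 4.2])
A = np.column_stack([np.ones_like(t), t])   # rank 2, so A^T A is invertible

x_hat = np.linalg.solve(A.T @ A, A.T @ b)   # solve the normal equations
x_np, *_ = np.linalg.lstsq(A, b, rcond=None)
assert np.allclose(x_hat, x_np)

# The residual b - A x_hat lies in N(A^T), i.e. is orthogonal to R(A).
assert np.allclose(A.T @ (b - A @ x_hat), 0)
```

Solving the normal equations directly is fine for well-conditioned problems; `lstsq` (based on an orthogonal factorization) is the numerically safer default.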

5 – Inner Product Spaces

Definition A vector space V is said to be a normed linear space if, to each vector v ∈ V, there is associated a real number ‖v‖, called the norm of v, satisfying

I ‖v‖ ≥ 0 with equality if and only if v = 0
II ‖αv‖ = |α|‖v‖ for any scalar α
III ‖v + w‖ ≤ ‖v‖ + ‖w‖ for all v, w ∈ V

Theorem If V is an inner product space, then the equation

‖v‖ = √⟨v, v⟩ for all v ∈ V

defines a norm on V.

Definition Let x and y be vectors in a normed linear space. The distance between x and y is defined to be the number ‖y − x‖.

5 – Orthonormal Sets

Definition Let v1, v2, ..., vn be nonzero vectors in an inner product space V. If ⟨vi, vj⟩ = 0 whenever i ≠ j, then { v1, v2, ..., vn } is said to be an orthogonal set of vectors.

Theorem If { v1, v2, ..., vn } is an orthogonal set of nonzero vectors in an inner product space V, then v1, v2, ..., vn are linearly independent.

Definition An orthonormal set of vectors is an orthogonal set of unit vectors.

Theorem Let { u1, u2, ..., un } be an orthonormal basis for an inner product space V. If v = c1u1 + c2u2 + ... + cnun, then ci = ⟨v, ui⟩.

Corollary Let { u1, u2, ..., un } be an orthonormal basis for an inner product space V. If u = a1u1 + ... + anun and v = b1u1 + ... + bnun, then ⟨u, v⟩ = a1b1 + a2b2 + ... + anbn.

Corollary If { u1, u2, ..., un } is an orthonormal basis for an inner product space V and v = c1u1 + ... + cnun, then

‖v‖² = c1² + c2² + ... + cn²

Definition An n x n matrix Q is said to be an orthogonal matrix if the column vectors of Q form an orthonormal set in ℝn.

Theorem An n x n matrix Q is orthogonal if and only if QTQ = I.

Concept If Q is an n x n orthogonal matrix, then

I The column vectors of Q form an orthonormal basis for ℝn
II QTQ = I
III QT = Q-1
IV ⟨Qx, Qy⟩ = ⟨x, y⟩
V ‖Qx‖ = ‖x‖

Concept A permutation matrix is a matrix formed from the identity matrix by reordering its columns. Permutation matrices are orthogonal matrices.
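Both claims are easy to verify numerically; a sketch assuming NumPy is available (the orthogonal matrix Q here comes from a QR factorization, an illustrative way to obtain one):

```python
import numpy as np

# A permutation matrix: the identity with its columns reordered.
P = np.eye(4)[:, [2, 0, 3, 1]]
assert np.allclose(P.T @ P, np.eye(4))   # orthogonal: Q^T Q = I

# Any Q from a QR factorization is orthogonal; it preserves scalar
# products and lengths, as properties IV and V above state.
rng = np.random.default_rng(1)
Q, _ = np.linalg.qr(rng.standard_normal((4, 4)))
x = rng.standard_normal(4)
y = rng.standard_normal(4)
assert np.allclose(Q.T @ Q, np.eye(4))
assert np.isclose((Q @ x) @ (Q @ y), x @ y)
assert np.isclose(np.linalg.norm(Q @ x), np.linalg.norm(x))
```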

Theorem If the column vectors of A form an orthonormal set of vectors in ℝm, then ATA = I and the solution to the least squares problem is x̂ = ATb.

Theorem Let S be a subspace of an inner product space V and let x ∈ V. Let { u1, u2, ..., un } be an orthonormal basis for S. If

p = c1u1 + c2u2 + ... + cnun

where

ci = ⟨x, ui⟩ for each i

then p − x ∈ S⊥.

Theorem Under the hypotheses of the preceding theorem, p is the element of S that is closest to x; that is,

‖y − x‖ > ‖p − x‖

for any y ≠ p in S.

Corollary Let S be a nonzero subspace of ℝm and let b ∈ ℝm. If { u1, u2, ..., uk } is an orthonormal basis for S and U = (u1, u2, ..., uk), then the projection p of b onto S is given by p = UUTb.
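A concrete instance of the projection formula p = UUTb, assuming NumPy is available; S is taken to be a coordinate plane for simplicity:

```python
import numpy as np

# Orthonormal basis for the plane S spanned by e1 and e2 in R^3.
U = np.array([[1., 0.],
              [0., 1.],
              [0., 0.]])
b = np.array([2., 3., 5.])

p = U @ U.T @ b                        # projection of b onto S
assert np.allclose(p, [2., 3., 0.])
assert np.allclose(U.T @ (b - p), 0)   # b - p is orthogonal to S
```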

6 – Eigenvalues and Eigenvectors

Concept Let A be an n x n matrix and λ be a scalar. The following statements are equivalent:

I λ is an eigenvalue of A
II (A − λI)x = 0 has a nontrivial solution
III N(A − λI) ≠ { 0 }
IV A − λI is singular
V det(A − λI) = 0

Theorem Let A and B be n x n matrices. If B is similar to A, then the two matrices have the same characteristic polynomial and, consequently, the same eigenvalues.

6 – Diagonalization

Theorem If λ1, λ2, ..., λk are distinct eigenvalues of an n x n matrix A with corresponding eigenvectors x1, x2, ..., xk, then x1, x2, ..., xk are linearly independent.

Definition An n x n matrix A is said to be diagonalizable if there exists a nonsingular matrix X and a diagonal matrix D such that X-1AX = D. We say that X diagonalizes A.

Theorem An n x n matrix A is diagonalizable if and only if A has n linearly independent eigenvectors. If A is diagonalizable, then the column vectors of the diagonalizing matrix X are eigenvectors of A and the diagonal elements of D are the corresponding eigenvalues of A. The diagonalizing matrix X is not unique: reordering the columns of a given X, or multiplying them by nonzero scalars, produces a new diagonalizing matrix. If A is n x n and A has n distinct eigenvalues, then A is diagonalizable. If the eigenvalues are not distinct, then A may or may not be diagonalizable, depending on whether A has n linearly independent eigenvectors. If A is diagonalizable, then A can be factored into a product XDX-1.
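A minimal diagonalization example, assuming NumPy is available; note that `np.linalg.eig` returns eigenvectors as the columns of X, matching the theorem:

```python
import numpy as np

A = np.array([[2., 1.],
              [1., 2.]])   # distinct eigenvalues 1 and 3, so diagonalizable

evals, X = np.linalg.eig(A)   # columns of X are eigenvectors of A
D = np.diag(evals)

assert np.allclose(np.linalg.inv(X) @ A @ X, D)   # X^{-1} A X = D
assert np.allclose(X @ D @ np.linalg.inv(X), A)   # A = X D X^{-1}
```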

6 – Hermitian Matrices

Concept For complex vectors, zH denotes the conjugate transpose z̄T, and ‖z‖ = √(zHz).

Definition Let V be a vector space over the complex numbers. An inner product on V is an operation that assigns to each pair of vectors z and w in V a complex number 〈끫뤶, 끫뤰〉 satisfying the following conditions.

I ⟨z, z⟩ ≥ 0, with equality if and only if z = 0
II ⟨z, w⟩ equals the complex conjugate of ⟨w, z⟩ for all z and w in V
III ⟨αz + βw, u⟩ = α⟨z, u⟩ + β⟨w, u⟩

Concept If A and B are elements of ℂm×n and C ∈ ℂn×r, then the following rules apply:

I (AH)H = A
II (αA + βB)H = α*AH + β*BH, where * denotes complex conjugation
III (AC)H = CHAH

Definition A matrix M is said to be Hermitian if M = MH.

Theorem The eigenvalues of a Hermitian matrix are all real. Furthermore, eigenvectors belonging to distinct eigenvalues are orthogonal.
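A numerical check of both claims for a small Hermitian matrix, assuming NumPy is available; `eigh` is NumPy's eigensolver specialized for Hermitian/symmetric matrices:

```python
import numpy as np

M = np.array([[2., 1 - 1j],
              [1 + 1j, 3.]])
assert np.allclose(M, M.conj().T)   # M is Hermitian: M = M^H

evals, V = np.linalg.eigh(M)        # eigh exploits Hermitian structure
assert np.allclose(evals.imag, 0)   # the eigenvalues are real
# Eigenvectors belonging to distinct eigenvalues are orthogonal.
assert np.isclose(abs(V[:, 0].conj() @ V[:, 1]), 0)
```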

Definition An n x n matrix U is said to be unitary if its column vectors form an orthonormal set in ℂ끫뢶. Thus, U is unitary if and only if UHU = I. If U is unitary, then, since the column vectors are orthonormal, U must have rank n. It follows that U-1 = UH.

Corollary If the eigenvalues of a Hermitian matrix A are distinct, then there exists a unitary matrix U that diagonalizes A.

Theorem According to Schur's theorem, for each n x n matrix A there exists a unitary matrix U such that UHAU is upper triangular. The factorization A = UTUH is often referred to as the Schur decomposition of A. In the case that A is Hermitian, T will be diagonal. If A is a real n x n matrix, then A = QTQT, where Q is an orthogonal matrix and T is a real matrix; T is referred to as the real Schur form of A.

Theorem If A is Hermitian, then there exists a unitary matrix U that diagonalizes A.

Definition A subspace S of ℝn is said to be invariant under a matrix A if, for each x ∈ S, Ax ∈ S.

Theorem If A is an n x n matrix with real entries, then A can be factored into a product QTQT, where Q is an orthogonal matrix and T is in the real Schur form.

Corollary If A is a real symmetric matrix, then there is an orthogonal matrix Q that diagonalizes A; that is, QTAQ = D, where D is diagonal.
