Ch. 2.2 The Inverse of a Matrix

Theorem 4 p.119

Let A = [[a b] [c d]]. If ad - bc ≠ 0, then A is invertible and A^-1 = 1/(ad - bc) [[d -b] [-c a]].

If ad - bc = 0, then A is not invertible.
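A minimal sketch of Theorem 4 in plain Python (the function name is just illustrative): compute ad - bc, bail out if it is zero, otherwise apply the formula.

```python
# Sketch of Theorem 4: invert a 2x2 matrix [[a, b], [c, d]] by the
# ad - bc formula. Pure Python; the function name is illustrative.

def inverse_2x2(a, b, c, d):
    """Return the inverse of [[a, b], [c, d]], or None if ad - bc = 0."""
    det = a * d - b * c
    if det == 0:
        return None  # ad - bc = 0, so the matrix is not invertible
    # A^-1 = 1/(ad - bc) * [[d, -b], [-c, a]]
    return [[d / det, -b / det], [-c / det, a / det]]

inv = inverse_2x2(4, 7, 2, 6)  # det = 4*6 - 7*2 = 10
print(inv)                     # [[0.6, -0.7], [-0.2, 0.4]]
```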


Ch. 4.1 Vector Spaces and Subspaces

Vector Space Definition p.217

A vector space is a nonempty set V of objects, called vectors, on which are defined two operations, called addition and multiplication by scalars (real numbers), subject to the ten axioms (or rules) listed below. The axioms must hold for all vectors u, v, and w in V and for all scalars c and d.

1. The sum of u and v, denoted by u + v, is in V.

2. u + v = v + u

3. (u + v) + w = u + (v + w)

4. There is a zero vector 0 in V such that u + 0 = u

5. For each u in V, there is a vector -u in V such that u + (-u) = 0.

6. The scalar multiple of u by c, denoted by cu, is in V.

7. c(u+v) = cu + cv.

8.(c+d)u = cu + du

9. c(du) = (cd)u.

10. 1u = u.

Subspace Definition p.220

A subspace of a vector space V is a subset H of V that has three properties:

a. The zero vector of V is in H.

b. H is closed under vector addition. That is, for each u and v in H, the sum u + v is in H.

c. H is closed under multiplication by scalars. That is, for each u in H and each scalar c, the vector cu is in H.

Theorem 1 If v1, …, vp are in a vector space V, then Span{v1, …, vp} is a subspace of V. p.221

 

Ch. 4.2 Null Spaces, Column Spaces, and Linear Transformations

Theorem 2 The null space of an m x n matrix A is a subspace of Rn. Equivalently, the set of all solutions to a system Ax = 0 of m homogeneous linear equations in n unknowns is a subspace of Rn. (The null space of A is the solution set of the equation Ax = 0.) p.227

Theorem 3 The column space of an m x n matrix A is a subspace of Rm. p. 229
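A quick numeric illustration of Theorem 2 (assuming NumPy is available; the matrix and vectors are made up for the example): vectors in Nul A map to zero, and so do their sums and scalar multiples, which is the closure a subspace requires.

```python
# Numeric illustration of Theorem 2: Nul A is closed under addition
# and scalar multiplication. Assumes NumPy; example values are made up.
import numpy as np

A = np.array([[1, 2, 3],
              [2, 4, 6]])          # 2x3, so Nul A is a subspace of R^3

u = np.array([-2, 1, 0])           # A @ u = 0, so u is in Nul A
v = np.array([-3, 0, 1])           # A @ v = 0, so v is in Nul A
print(A @ u, A @ v)                # both are the zero vector

# Closure: u + v and c*u are still solutions of Ax = 0.
print(A @ (u + v), A @ (5 * u))    # still zero vectors
```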

 

Ch. 4.3 Linearly Independent Sets; Bases

Basis Definition p.238.

Let H be a subspace of a vector space V. An indexed set of vectors B = {b1, …, bp} in V is a basis for H if

(i) B is a linearly independent set, and

(ii) the subspace spanned by B coincides with H; that is, H = Span{b1, …, bp}.

Theorem 4 An indexed set {v1, …, vp} of two or more vectors, with v1 ≠ 0, is linearly dependent if and only if some vj (with j > 1) is a linear combination of the preceding vectors, v1, …, vj-1. p.237

(With v1 ≠ 0, some vector among v2, …, vp is a linear combination of the vectors before it.)

Theorem 5 The Spanning Set Theorem p.239

Let S = {v1, …, vp} be a set in V, and let H = Span{v1, …, vp}.

a. If one of the vectors in S, say vk, is a linear combination of the remaining vectors in S, then the set formed from S by removing vk still spans H.

b. If H ≠ {0}, some subset of S is a basis for H.

(Kick out vk; the remaining vectors of v1, …, vp still span H.)

{A basis for Col A = the set of pivot columns of A, by this theorem}

{If a finite set S of nonzero vectors spans a vector space H, then some subset of S is a basis for H.}

Theorem 6 The pivot columns of a matrix A form a basis for Col A. p.241
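A hand-rolled sketch of how Theorem 6 is used in practice (assuming NumPy; the helper `pivot_columns` is written here for the example, not a library routine): row reduce A, record which columns get pivots, and take those columns of the original A as a basis for Col A.

```python
# Sketch: find pivot columns by Gaussian elimination, then read off the
# corresponding columns of the ORIGINAL A as a basis for Col A.
# Assumes NumPy; `pivot_columns` is a hypothetical helper for this note.
import numpy as np

def pivot_columns(A, tol=1e-10):
    """Return the indices of the pivot columns of A."""
    R = A.astype(float).copy()
    m, n = R.shape
    pivots, row = [], 0
    for col in range(n):
        if row >= m:
            break
        # pick the largest entry in this column at or below `row`
        p = row + np.argmax(np.abs(R[row:, col]))
        if abs(R[p, col]) < tol:
            continue                      # no pivot in this column
        R[[row, p]] = R[[p, row]]         # swap rows
        R[row] /= R[row, col]             # scale the pivot to 1
        for r in range(m):
            if r != row:
                R[r] -= R[r, col] * R[row]
        pivots.append(col)
        row += 1
    return pivots

A = np.array([[1, 2, 0],
              [2, 4, 1],
              [3, 6, 1]])   # column 1 is 2 * column 0
cols = pivot_columns(A)
print(cols)                 # [0, 2]
print(A[:, cols])           # these columns of A form a basis for Col A
```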

 

Ch. 4.4 Coordinate Systems

Coordinates Definition

Suppose B = {b1, …, bn} is a basis for V and x is in V. The coordinates of x relative to the basis B (or the B-coordinates of x) are the weights c1, …, cn such that x = c1b1 + … + cnbn.

Theorem 7 The Unique Representation Theorem p.246

Let B = {b1, …, bn} be a basis for a vector space V. Then for each x in V, there exists a unique set of scalars c1, …, cn such that

x = c1b1 + … + cnbn

Theorem 8

Let B = {b1, …, bn} be a basis for a vector space V. Then the coordinate mapping x |-> [x]B is a one-to-one linear transformation from V onto Rn.
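A quick numeric version of finding B-coordinates (assuming NumPy; the basis vectors are made up): if the basis vectors are the columns of a matrix P_B, then x = P_B [x]B, so the weights come from solving a linear system.

```python
# Finding B-coordinates numerically: stack the basis vectors as columns
# of P_B, then solve P_B @ [x]_B = x. Assumes NumPy; values are made up.
import numpy as np

b1, b2 = np.array([2, 1]), np.array([-1, 1])   # a basis B for R^2
P_B = np.column_stack([b1, b2])

x = np.array([4, 5])
x_B = np.linalg.solve(P_B, x)    # the unique weights c1, c2 (Theorem 7)
print(x_B)                       # [3. 2.], i.e. x = 3*b1 + 2*b2
```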

 

Ch. 4.5 Dimension of a Vector Space

Theorem 9 p.256

If a vector space V has a basis B = {b1, …, bn}, then any set in V containing more than n vectors must be linearly dependent.

Theorem 10 p.257

If a vector space V has a basis of n vectors, then every basis of V must consist of exactly n vectors.

Dimension Definition p.257

If V is spanned by a finite set, then V is said to be finite-dimensional, and the dimension of V, written as dim V, is the number of vectors in a basis for V. The dimension of the zero vector space {0} is defined to be zero. If V is not spanned by a finite set, then V is said to be infinite-dimensional.

Theorem 11 p.259

Let H be a subspace of a finite-dimensional vector space V. Any linearly independent set in H can be expanded, if necessary, to a basis for H. Also, H is finite-dimensional and

dim H ≤ dim V

Theorem 12 The Basis Theorem p.259

Let V be a p-dimensional vector space, p ≥ 1. Any linearly independent set of exactly p elements in V is automatically a basis for V. Any set of exactly p elements that spans V is automatically a basis for V.

 

Ch. 4.6 Rank

Theorem 13 p.263

If two matrices A and B are row equivalent, then their row spaces are the same. If B is in echelon form, the nonzero rows of B form a basis for the row space of A as well as for that of B.

Rank Definition p. 265

The rank of A is the dimension of the column space of A.

Theorem 14 The Rank Theorem p.265

The dimensions of the column space and the row space of an m x n matrix A are equal. This common dimension, the rank of A, also equals the number of pivot positions in A and satisfies the equation

rank A + dim Nul A = n
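A numeric check of the Rank Theorem (assuming NumPy; the matrix is made up): compute the rank, read dim Nul A off the singular values, and confirm the two add up to n.

```python
# Numeric check of the Rank Theorem: rank A + dim Nul A = n.
# Assumes NumPy; the example matrix is made up.
import numpy as np

A = np.array([[1, 2, 3, 4],
              [2, 4, 6, 8],
              [1, 0, 1, 0]])      # m = 3, n = 4; row 2 = 2 * row 1

rank = np.linalg.matrix_rank(A)

# dim Nul A = n minus the number of (numerically) nonzero singular values
_, s, _ = np.linalg.svd(A)
dim_nul = A.shape[1] - np.sum(s > 1e-10)

print(rank, dim_nul, rank + dim_nul)   # rank + dim_nul equals n = 4
```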

Theorem The Invertible Matrix Theorem (continued)

Let A be an n x n matrix. Then the following statements are each equivalent to the statement that A is an invertible matrix.

m. The columns of A form a basis of Rn.

n. Col A = Rn

o. dim Col A = n

p. rank A = n

q. Nul A = {0}

r. dim Nul A =0

 

Ch. 4.7 Change of Basis

Theorem 15 p.273

Let B = {b1, …, bn} and C = {c1, …, cn} be bases of a vector space V. Then there is a unique n x n matrix P_{C<-B} such that

[x]C = P_{C<-B} [x]B

The columns of P_{C<-B} are the C-coordinate vectors of the vectors in the basis B. That is,

P_{C<-B} = [[b1]C [b2]C … [bn]C]
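A sketch of building the change-of-coordinates matrix numerically (assuming NumPy; both bases are made up): each column of the matrix is a b_j expressed in C-coordinates, so all columns can be found at once by solving against the C basis.

```python
# Building the change-of-coordinates matrix from B to C, column by
# column: each column is [b_j]_C, the solution of P_C c = b_j.
# Assumes NumPy; the two bases are made up.
import numpy as np

P_B = np.column_stack([[1, 0], [1, 1]])   # B = {b1, b2}, as columns
P_C = np.column_stack([[2, 1], [1, 1]])   # C = {c1, c2}, as columns

# solve P_C @ [b_j]_C = b_j for every j at once
P_CB = np.linalg.solve(P_C, P_B.astype(float))

# check Theorem 15: [x]_C = P_CB @ [x]_B for any x
x_B = np.array([3.0, -2.0])
x = P_B @ x_B                     # the actual vector x
print(np.allclose(P_C @ (P_CB @ x_B), x))   # True
```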

 

Ch. 5.1 Eigenvectors and Eigenvalues

Eigenvector and Eigenvalue Definition, p. 303

An eigenvector of an n x n matrix A is a nonzero vector x such that Ax = λx for some scalar λ. A scalar λ is called an eigenvalue of A if there is a nontrivial solution x of Ax = λx; such an x is called an eigenvector corresponding to λ.

Theorem 1 p.306

The eigenvalues of a triangular matrix are the entries on its main diagonal.
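A quick check of Theorem 1 (assuming NumPy; the matrix is made up): ask for the eigenvalues of an upper triangular matrix and compare them to its diagonal.

```python
# Theorem 1 check: the eigenvalues of a triangular matrix are its
# diagonal entries. Assumes NumPy; the matrix is made up.
import numpy as np

A = np.array([[3.0, 5.0, 1.0],
              [0.0, 2.0, 4.0],
              [0.0, 0.0, 7.0]])   # upper triangular, diagonal 3, 2, 7

vals = np.linalg.eigvals(A)
print(sorted(vals.real))          # approximately [2, 3, 7]
```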

Theorem 2 p.307

If v1, …, vr are eigenvectors that correspond to distinct eigenvalues of an n x n matrix A, then the set {v1, …, vr} is linearly independent.

 

Ch.5.2 The Characteristic Equation

Theorem, The Invertible Matrix Theorem (continued) p. 312

Let A be an n x n matrix. Then A is invertible if and only if:

s. The number 0 is not an eigenvalue of A.

t. The determinant of A is not zero.

Theorem 3 Properties of Determinants, p. 313

Let A and B be n x n matrices.

a. A is invertible if and only if det A ≠ 0.

b. det AB = (det A)(det B)

c. det A^T = det A.

d. If A is triangular, then det A is the product of the entries on the main diagonal of A.

e. A row replacement operation on A does not change the determinant. A row interchange changes the sign of the determinant. Scaling a row by a scalar multiplies the determinant by that same scalar.

The Characteristic Equation, p. 313

A scalar λ is an eigenvalue of an n x n matrix A if and only if λ satisfies the characteristic equation

det(A - λI) = 0
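The characteristic equation in action (assuming NumPy; the matrix is made up): det(A - λI) vanishes exactly at the eigenvalues and is nonzero elsewhere.

```python
# The characteristic equation: lambda is an eigenvalue exactly when
# det(A - lambda*I) = 0. Assumes NumPy; the matrix is made up.
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])        # eigenvalues are 1 and 3

for lam in [1.0, 3.0, 2.0]:
    d = np.linalg.det(A - lam * np.eye(2))
    print(lam, round(d, 10))      # ~0 for the eigenvalues, nonzero for 2.0
```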

 

Ch. 5.3 Diagonalization

Diagonalizable Definition (p.320)

An n x n matrix A is said to be diagonalizable if it is similar to a diagonal matrix. That is, A = PDP^-1 for some invertible matrix P and some diagonal matrix D.

Theorem 4

If n x n matrices A and B are similar, then they have the same characteristic polynomial and hence the same eigenvalues (with the same multiplicities).

Theorem 5 (p. 320)

An n x n matrix A is diagonalizable if and only if A has n linearly independent eigenvectors. In fact, A = PDP^-1, with D a diagonal matrix, if and only if the columns of P are n linearly independent eigenvectors of A. In that case, the diagonal entries of D are the eigenvalues of A that correspond, respectively, to the eigenvectors in P.

Theorem 6 p.323

An n x n matrix with n distinct eigenvalues is diagonalizable.
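A numeric sketch of Theorems 5 and 6 (assuming NumPy; the matrix is made up): a matrix with distinct eigenvalues has an invertible eigenvector matrix P, and A factors as PDP^-1.

```python
# Diagonalizing a matrix with distinct eigenvalues: columns of P are
# eigenvectors, D holds the matching eigenvalues on its diagonal.
# Assumes NumPy; the matrix is made up.
import numpy as np

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])        # eigenvalues 5 and 2 (distinct)

vals, P = np.linalg.eig(A)        # distinct eigenvalues => P is invertible
D = np.diag(vals)

# A = P D P^-1 up to floating-point error
print(np.allclose(P @ D @ np.linalg.inv(P), A))   # True
```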

Theorem 7 p.324

Let A be an n x n matrix whose distinct eigenvalues are λ1, …, λp.

a. For 1 ≤ k ≤ p, the dimension of the eigenspace for λk is less than or equal to the multiplicity of the eigenvalue λk.

b. The matrix A is diagonalizable if and only if the sum of the dimensions of the distinct eigenspaces equals n, and this happens if and only if the dimension of the eigenspace for each λk equals the multiplicity of λk.

c. If A is diagonalizable and Bk is a basis for the eigenspace corresponding to λk for each k, then the total collection of vectors in the sets B1, …, Bp forms an eigenvector basis for Rn.

 

Ch. 5.4 Eigenvectors and Linear Transformations

Theorem 8 p.331

Suppose A = PDP^-1, where D is a diagonal n x n matrix. If B is the basis for Rn formed from the columns of P, then D is the B-matrix for the transformation x |-> Ax.
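A quick check of Theorem 8 from the other direction (assuming NumPy; the matrix is made up): conjugating A by the eigenvector matrix P recovers the diagonal B-matrix, D = P^-1 A P.

```python
# Theorem 8 sketch: in the eigenvector basis B (the columns of P), the
# matrix of x |-> Ax is diagonal: D = P^-1 A P.
# Assumes NumPy; the matrix is made up.
import numpy as np

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])
vals, P = np.linalg.eig(A)

B_matrix = np.linalg.inv(P) @ A @ P       # the B-matrix of x |-> Ax
print(np.allclose(B_matrix, np.diag(vals)))   # True
```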

 

Lecture

A is an m x n matrix

Col A = the subspace of Rm consisting of all linear combinations of the columns of A

Nul A = the subspace of Rn consisting of all solutions of Ax = 0

dim Col A = number of pivot columns of A

dim Nul A = number of free variables in Ax = 0. p.260

dim Row A = number of pivot rows of A p.260

 

Find a basis of Col A - the pivot columns of A form a basis.

Find a basis of Nul A - row reduce Ax = 0 to reduced echelon form, then write the solution set in parametric vector form; the vectors in that form are a basis.
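The Nul A recipe worked by hand for a small example (assuming NumPy only for the final check; the matrix is made up and is already in reduced echelon form): express the basic variables in terms of the free ones, and the vectors multiplying each free variable form the basis.

```python
# Basis of Nul A from the parametric vector form, worked by hand.
# Assumes NumPy for the check only; the matrix is made up.
import numpy as np

A = np.array([[1, 2, 0, -1],
              [0, 0, 1,  2]])    # already in reduced echelon form

# Pivot columns: 0 and 2. Free variables: x2 and x4 (columns 1 and 3).
# From the rows: x1 = -2*x2 + x4 and x3 = -2*x4, so
#   x = x2*(-2, 1, 0, 0) + x4*(1, 0, -2, 1)
v1 = np.array([-2, 1, 0, 0])
v2 = np.array([1, 0, -2, 1])     # {v1, v2} is a basis for Nul A

print(A @ v1, A @ v2)            # both are the zero vector
```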

 

dim Col A + dim Nul A = number of columns of A

Rank A + dim Nul A ("nullity") = n

 

Row A = the subspace of Rn consisting of all linear combinations of the rows of A

 

dim Col A = dim Row A (because the number of pivot columns equals the number of pivot rows)