Chapter 3, Section A

abelian

March 11, 2024

This chapter is exciting because we finally get to one of the theorems at the heart of linear algebra: the rank-nullity theorem.

Theorem 0.1 (Rank-nullity theorem). Given a matrix A, the dimension of the nullspace of A (the nullity) plus the dimension of the column space of A (the rank) equals the dimension of the domain of A, i.e., the number of columns of A.

dim Ker(A) + dim Im(A) = dim Dom(A)

Intuitively, this makes sense if we look at matrices from the point of view of them being linear transformations. Multiplication of a vector ⃗x by a matrix A is sometimes written as the function μA(⃗x) = A⃗x. One of the superpowers of matrices is that they can encode linear transformations of arbitrarily many parameters, making them useful in just about any applied math situation you can think of.

The rank-nullity theorem states that the dimension of the domain of A (i.e., the dimension of the space that the linear transformation's inputs live in) equals the dimension of the space that these inputs are sent to by the transformation, plus the dimension of the space of vectors that get sent to ⃗0 by the transformation. (This is the nullspace, or the kernel of A, which may be anything from the trivial {⃗0} to the entire input space itself.)

Put even more succinctly, it makes sense that we get what we had originally (pre-transformation) when we add together what we get post-transformation and the vectors that are “lost” during the transformation.
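As a quick sanity check, the theorem can be verified numerically for a sample matrix. A minimal sketch using NumPy (the matrix below is just an illustrative choice):

```python
import numpy as np

# A sample 3x5 matrix (rows 1 and 2 are independent; row 3 = row 1 + row 2).
A = np.array([
    [1.0, 2.0, 0.0, 1.0, 3.0],
    [0.0, 1.0, 1.0, 0.0, 2.0],
    [1.0, 3.0, 1.0, 1.0, 5.0],
])

n_cols = A.shape[1]              # dim Dom(A) = 5
rank = np.linalg.matrix_rank(A)  # dim Im(A)

# Compute dim Ker(A) independently from the SVD: the kernel dimension is
# the number of input directions with no nonzero singular value.
s = np.linalg.svd(A)[1]
nullity = n_cols - int(np.sum(s > 1e-10))

print(rank, nullity, n_cols)  # rank + nullity == n_cols
```

Here rank is 2 and nullity is 3, matching dim Dom(A) = 5.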

2

Suppose b, c ∈ ℝ. Define T : P(ℝ) → ℝ² by

Tp = (3p(4) + 5p′(6) + bp(1)p(2), ∫₋₁² x³p(x) dx + c sin p(0)).

Show that T is linear if and only if b = c = 0.

=⇒: Assume that T is a linear map. Then the properties of additivity and homogeneity hold. So take p, q ∈ P(ℝ). Then:

T(p + q) = Tp + Tq
= (3p(4) + 5p′(6) + bp(1)p(2), ∫₋₁² x³p(x) dx + c sin p(0))
+ (3q(4) + 5q′(6) + bq(1)q(2), ∫₋₁² x³q(x) dx + c sin q(0))
= (3(p(4) + q(4)) + 5(p′(6) + q′(6)) + b(p(1)p(2) + q(1)q(2)),
∫₋₁² x³p(x) dx + ∫₋₁² x³q(x) dx + c sin p(0) + c sin q(0)).

By the linearity of integration, we know that ∫₋₁² x³p(x) dx + ∫₋₁² x³q(x) dx = ∫₋₁² x³(p + q)(x) dx, so that term is additive. But the term c sin p(0) + c sin q(0) is not additive: in general, sin p(0) + sin q(0) ≠ sin(p(0) + q(0)). Therefore, it must be the case that c = 0.

Take p ∈ P(ℝ) and λ ∈ ℝ. Then:

T(λp) = λTp
= (3λp(4) + 5λp′(6) + λ²bp(1)p(2), λ∫₋₁² x³p(x) dx + c sin(λp(0)))

In the term λ²bp(1)p(2), it can be seen that λ ↦ λ², so b must equal 0. Also, we know that c sin(λp(0)) ≠ λc sin p(0) in general, so c = 0, but this was already shown before.

⇐=: Assume that b = c = 0. Then the map T can simply be written:

Tp = (3p(4) + 5p′(6), ∫₋₁² x³p(x) dx)

Let’s prove additivity and homogeneity, as this suffices to show that T is a linear map. Let p, q ∈ P(ℝ):

T(p + q) = (3(p + q)(4) + 5(p + q)′(6), ∫₋₁² x³(p + q)(x) dx)
= (3p(4) + 5p′(6), ∫₋₁² x³p(x) dx) + (3q(4) + 5q′(6), ∫₋₁² x³q(x) dx)
= Tp + Tq

By linearity of integrals, additivity is proven.

For homogeneity, take p ∈ P(ℝ) and λ ∈ ℝ:

T(λp) = (3(λp)(4) + 5(λp)′(6), ∫₋₁² x³(λp)(x) dx)
= (3λp(4) + 5λp′(6), λ∫₋₁² x³p(x) dx)
= λ(3p(4) + 5p′(6), ∫₋₁² x³p(x) dx)
= λTp

Both directions have been proven.
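The forward direction can also be spot-checked numerically. A sketch using NumPy's polynomial class, applying T to sample polynomials: with b = c = 0 additivity holds, while with b = c = 1 it fails:

```python
import numpy as np
from numpy.polynomial import Polynomial

def T(coefs, b, c):
    """Apply the exercise's map T to the polynomial with the given
    coefficients (lowest degree first); returns a vector in R^2."""
    p = Polynomial(coefs)
    x3 = Polynomial([0, 0, 0, 1])   # the polynomial x^3
    F = (x3 * p).integ()            # an antiderivative of x^3 p(x)
    first = 3 * p(4) + 5 * p.deriv()(6) + b * p(1) * p(2)
    second = (F(2) - F(-1)) + c * np.sin(p(0))
    return np.array([first, second])

p = [1.0, 2.0, 0.0, 1.0]         # p(x) = 1 + 2x + x^3
q = [1.0, 1.0, 3.0]              # q(x) = 1 + x + 3x^2
p_plus_q = [2.0, 3.0, 3.0, 1.0]  # (p + q)(x)

# With b = c = 0, additivity holds (up to floating point):
lin = np.allclose(T(p_plus_q, 0, 0), T(p, 0, 0) + T(q, 0, 0))
# With b = c = 1, the bp(1)p(2) and c sin p(0) terms break additivity:
nonlin = np.allclose(T(p_plus_q, 1, 1), T(p, 1, 1) + T(q, 1, 1))
print(lin, nonlin)  # True False
```

The sample polynomials are arbitrary choices; any pair with p(0), q(0) ≠ 0 exhibits the failure.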

4

Suppose T ∈ L(V, W) and ⃗v1, …, ⃗vm is a list of vectors in V such that T⃗v1, …, T⃗vm is a linearly independent list in W. Prove that ⃗v1, …, ⃗vm is linearly independent.

We wish to show that if

c1⃗v1 + ⋅⋅⋅+cm ⃗vm = ⃗0,

then it must be the case that c1 = ⋅⋅⋅ = cm = 0.

So suppose

c1⃗v1 + ⋅⋅⋅ + cm⃗vm = ⃗0.

Applying T to both sides, using linearity (additivity and homogeneity) and the fact that any linear map takes ⃗0 to ⃗0, we get:

c1T⃗v1 + ⋅⋅⋅ + cmT⃗vm = T(c1⃗v1 + ⋅⋅⋅ + cm⃗vm) = T⃗0 = ⃗0.

But T⃗v1, …, T⃗vm is a linearly independent list, so a linear combination of these vectors equal to ⃗0 forces c1 = ⋅⋅⋅ = cm = 0, which means that ⃗v1, …, ⃗vm is linearly independent.
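For a small numeric illustration (the matrix and vectors below are hypothetical choices): if the images under a map are linearly independent, the original vectors must be as well, and ranks let us check both:

```python
import numpy as np

# A hypothetical linear map T : R^3 -> R^2, written as a matrix.
T = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0]])

v1 = np.array([1.0, 0.0, 0.0])
v2 = np.array([0.0, 1.0, 0.0])

images = np.column_stack([T @ v1, T @ v2])   # the list Tv1, Tv2
vectors = np.column_stack([v1, v2])          # the list v1, v2

rank_images = np.linalg.matrix_rank(images)
rank_vectors = np.linalg.matrix_rank(vectors)
print(rank_images, rank_vectors)  # 2 2
```

The theorem guarantees that whenever rank_images equals the length of the list, rank_vectors does too.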

6

Prove that multiplication of linear maps has the associative, identity, and distributive properties asserted in 3.8.

Suppose T1 ∈ L(X, Y), T2 ∈ L(W, X), and T3 ∈ L(V, W). Then they all have the additivity and homogeneity properties, and they all map 0 to 0.

We prove associativity. Take v ∈ V, and let w = T3v ∈ W, x = T2w ∈ X, and y = T1x ∈ Y. Then:

((T1T2)T3)v = (T1T2)w = T1(T2w) = T1x = y

(T1(T2T3))v = T1((T2T3)v) = T1(T2w) = T1x = y

∴ (T1T2)T3 = T1(T2T3)

We prove the identity property. The identity map I is the linear map that takes every element of a vector space to itself. Take T ∈ L(U, V) and u ∈ U, and let v = Tu:

(IT)u = I(Tu) = Iv = v

(TI)u = T(Iu) = Tu = v

∴ IT = T and TI = T

We prove the distributive property. Suppose T, T1, T2 ∈ L(U, V) and D, D1, D2 ∈ L(V, W). Take u ∈ U, and let v = Tu, v1 = T1u, v2 = T2u, w1 = Dv1, and w2 = Dv2:

((D1 + D2)T)u = (D1 + D2)v
= D1v + D2v

(D1T + D2T)u = D1Tu + D2Tu = D1v + D2v

∴ (D1 + D2)T = D1T + D2T

(D(T1 + T2))u = D(T1u + T2u) = D(v1 + v2)
= Dv1 + Dv2
= w1 + w2

(DT1 + DT2)u = DT1u + DT2u = Dv1 + Dv2
= w1 + w2

∴ D(T1 + T2) = DT1 + DT2
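In coordinates, linear maps between finite-dimensional spaces are matrices and composition is matrix multiplication, so all three properties can be spot-checked on randomly chosen matrices. A sketch (the dimensions are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)

T1 = rng.standard_normal((4, 3))    # T1 : X -> Y
T1b = rng.standard_normal((4, 3))   # a second map X -> Y
T2 = rng.standard_normal((3, 5))    # T2 : W -> X
T3 = rng.standard_normal((5, 2))    # T3 : V -> W
D1 = rng.standard_normal((6, 4))    # D1 : Y -> Z
D2 = rng.standard_normal((6, 4))    # D2 : Y -> Z

assoc = np.allclose((T1 @ T2) @ T3, T1 @ (T2 @ T3))
ident = np.allclose(np.eye(4) @ T1, T1) and np.allclose(T1 @ np.eye(3), T1)
distrib = (np.allclose((D1 + D2) @ T1, D1 @ T1 + D2 @ T1)
           and np.allclose(D1 @ (T1 + T1b), D1 @ T1 + D1 @ T1b))
print(assoc, ident, distrib)  # True True True
```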

8

Give an example of a function ϕ : ℝ² → ℝ such that

ϕ(a⃗v) = aϕ(⃗v)

for all a ∈ ℝ and all ⃗v ∈ ℝ² but ϕ is not linear.

Define ϕ to be ϕ(⃗v) = ∛(v₁³ + v₂³), using the real cube root. (The norm √(v₁² + v₂²) does not work here: it scales by |a| rather than a when a is negative.) Homogeneity holds for every a ∈ ℝ:

ϕ(a⃗v) = ∛((av₁)³ + (av₂)³) = ∛(a³(v₁³ + v₂³)) = a∛(v₁³ + v₂³) = aϕ(⃗v)

But ϕ is not additive, and hence not linear: ϕ(1, 0) + ϕ(0, 1) = 2, while ϕ((1, 0) + (0, 1)) = ϕ(1, 1) = ∛2.
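A quick numeric check of a map of this kind, ϕ(⃗v) = ∛(v₁³ + v₂³) with the real cube root: it is homogeneous for every scalar, including negative ones, but fails additivity:

```python
import math

def phi(v1, v2):
    """phi(v) = cube root of (v1^3 + v2^3), using the real cube root
    so that the sign of the input is preserved."""
    s = v1**3 + v2**3
    return math.copysign(abs(s) ** (1.0 / 3.0), s)

# Degree-one homogeneity holds even for a negative scalar:
a, v1, v2 = -2.0, 1.0, 4.0
homog = math.isclose(phi(a * v1, a * v2), a * phi(v1, v2))
# ...but additivity fails: phi(1,0) + phi(0,1) = 2, while phi(1,1) = 2^(1/3)
additive = math.isclose(phi(1.0, 1.0), phi(1.0, 0.0) + phi(0.0, 1.0))
print(homog, additive)  # True False
```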

9

Give an example of a function ϕ : ℂ → ℂ such that

ϕ(w + z) = ϕ(w) + ϕ(z)

for all w, z ∈ ℂ but ϕ is not linear.

Aside: Problems 8 and 9 together show that checking homogeneity alone or additivity alone is not enough to show that a map is linear.

Let ϕ : ℂ → ℂ be the function ϕ : w ↦ w̄. That is, it sends each element of its domain to its complex conjugate. This is not a linear function (rather, it is antilinear), because it is not homogeneous. We show its additivity, its nonhomogeneity, and therefore its nonlinearity below.

Write w = a + bi and z = c + di. For additivity:

w + z = (a + c) + (b + d)i, so
ϕ(w + z) = (a + c) − (b + d)i.

ϕ(w) + ϕ(z) = (a − bi) + (c − di)
= (a + c) − (b + d)i

∴ ϕ(w + z) = ϕ(w) + ϕ(z)

For homogeneity, take λ = c + di with d ≠ 0 and w = a + bi ≠ 0:

λw = (ac − bd) + (ad + bc)i, so
ϕ(λw) = (ac − bd) − (ad + bc)i
= (c − di)(a − bi)
= λ̄w̄

whereas λϕ(w) = (c + di)(a − bi). These differ in general; for example, λ = i and w = 1 give ϕ(λw) = −i but λϕ(w) = i.

∴ ϕ(λw) = λ̄w̄ ≠ λϕ(w)

This is also called conjugate homogeneity.
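The same computation can be checked with Python's built-in complex numbers, whose `.conjugate()` method is exactly this map:

```python
# Conjugation on C: additive and conjugate-homogeneous, but not homogeneous.
w = 3 + 4j
z = 1 - 2j
lam = 2 + 1j  # a scalar with nonzero imaginary part

def phi(u):
    return u.conjugate()

additive = phi(w + z) == phi(w) + phi(z)               # additivity holds
homogeneous = phi(lam * w) == lam * phi(w)             # fails for non-real lam
conj_homog = phi(lam * w) == lam.conjugate() * phi(w)  # conjugate homogeneity
print(additive, homogeneous, conj_homog)  # True False True
```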

10

Prove or give a counterexample: If q ∈ P(ℝ) and T : P(ℝ) → P(ℝ) is defined by Tp = q ∘ p, then T is a linear map.

This is asking: if T is a transformation mapping the set of all real polynomials to itself, defined by composing a fixed polynomial q with its input, must T be a linear map?

We check for additivity and homogeneity, as usual:

Additivity: Let r, s ∈ P(ℝ). Then T(r + s) = q ∘ (r + s). Is this equal to Tr + Ts? We check: Tr + Ts = q ∘ r + q ∘ s.

For a concrete example, let r = x³, s = x, and q = x². Then T(r + s) = (x³ + x)² = x⁶ + 2x⁴ + x², while Tr + Ts = (x³)² + x² = x⁶ + x². Clearly, T(r + s) and Tr + Ts are not equal. So, T need not be a linear map, and this choice of q is a counterexample.
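The counterexample can be verified with NumPy's polynomial arithmetic (calling one Polynomial with another performs the composition symbolically):

```python
import numpy as np
from numpy.polynomial import Polynomial

q = Polynomial([0, 0, 1])      # q(x) = x^2
r = Polynomial([0, 0, 0, 1])   # r(x) = x^3
s = Polynomial([0, 1])         # s(x) = x

def T(p):
    """Composition with q: (Tp)(x) = q(p(x))."""
    return q(p)

lhs = T(r + s)      # (x^3 + x)^2 = x^6 + 2x^4 + x^2
rhs = T(r) + T(s)   # x^6 + x^2
differ = not np.allclose(lhs.coef, rhs.coef)
print(differ)  # True: T(r + s) != Tr + Ts, so T is not additive
```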

12

Suppose U is a subspace of V with U ≠ V. Suppose S ∈ L(U, W) and S ≠ 0 (which means that for some u ∈ U, Su ≠ 0). Define T : V → W by:

     { Sv  if v ∈ U
Tv = { 0   if v ∈ V and v ∉ U.

Prove that T is not a linear map on V .

Proof. Take v ∈ V with v ∉ U (possible since U ≠ V), and take w ∈ U with Sw ≠ 0 (possible since S ≠ 0).

Then v + w ∉ U (otherwise v = (v + w) − w would be in U), so T(v + w) = 0. But Tv = 0 and Tw = Sw ≠ 0, so Tv + Tw = Sw ≠ 0 = T(v + w). Since additivity does not hold, T cannot be a linear map on V. □
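A concrete numeric instance of this failure (the choices of U, S, and the vectors are hypothetical, picked for illustration):

```python
import numpy as np

# Hypothetical instance: V = R^2, U = the x-axis (a proper subspace),
# W = R^2, and S the identity map on U (so S != 0).
def in_U(v):
    return v[1] == 0.0

def T(v):
    v = np.asarray(v, dtype=float)
    return v.copy() if in_U(v) else np.zeros(2)   # Sv on U, 0 off U

v = np.array([0.0, 1.0])   # v is not in U
w = np.array([1.0, 0.0])   # w is in U and Sw = w != 0

lhs = T(v + w)       # v + w is not in U, so this is the zero vector
rhs = T(v) + T(w)    # 0 + Sw = w, which is nonzero
print(lhs, rhs)      # additivity fails, so T is not linear
```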

14

Suppose V is finite-dimensional with dimV > 0, and suppose W is infinite-dimensional. Prove that L(V,W) is infinite-dimensional.

Proof. Suppose by contradiction that L(V, W) is finite-dimensional, say with dim L(V, W) = m. Since dim V > 0, take a basis v1, …, vn of V; since W is infinite-dimensional, take a linearly independent list w1, …, wm+1 in W. By the linear map lemma, for each j there is a unique linear map Tj : V → W such that

Tjv1 = wj and Tjvk = 0 for 1 < k ≤ n.

The list T1, …, Tm+1 is linearly independent in L(V, W): if a1T1 + ⋅⋅⋅ + am+1Tm+1 = 0, then applying this map to v1 gives a1w1 + ⋅⋅⋅ + am+1wm+1 = 0, and so a1 = ⋅⋅⋅ = am+1 = 0 by the linear independence of w1, …, wm+1.

But a vector space of dimension m cannot contain a linearly independent list of length m + 1. This contradiction shows that L(V, W) is infinite-dimensional. □

16

Suppose V is finite-dimensional with dim V > 1. Prove that there exist S, T ∈ L(V) such that ST ≠ TS.

This is a natural consequence of the fact that matrix multiplication is not commutative (or perhaps the other way around), but let us prove this with a concrete example.

Let S ∈ L(P2(ℝ)) be the transformation that differentiates its input:

Sp = p′

Let T ∈ L(P2(ℝ)) be the transformation that multiplies the derivative of its input by x:

(Tp)(x) = xp′(x)

for all x ∈ ℝ. (Multiplying a polynomial itself by a power of x would raise its degree and leave P2(ℝ), but this T does map P2(ℝ) into itself.)

Let q = x². Then,

STq = S(x ⋅ 2x) = S(2x²) = 4x.

However,

TSq = T(2x) = x ⋅ 2 = 2x.

We see that STq ≠ TSq, so ST ≠ TS.
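The same computation, sketched with NumPy polynomials, taking S to be differentiation and T : p ↦ xp′ (one concrete non-commuting pair on P₂(ℝ)):

```python
import numpy as np
from numpy.polynomial import Polynomial

x = Polynomial([0, 1])

def S(p):
    # S differentiates its input.
    return p.deriv()

def T(p):
    # (Tp)(x) = x * p'(x); this keeps degree-at-most-2 polynomials
    # inside P2(R).
    return x * p.deriv()

q = Polynomial([0, 0, 1])   # q(x) = x^2
STq = S(T(q))               # S(2x^2) = 4x
TSq = T(S(q))               # T(2x) = 2x
commute = np.allclose(STq.coef, TSq.coef)
print(STq.coef, TSq.coef, commute)
```

Here STq has coefficients [0, 4] and TSq has [0, 2], so the two compositions disagree.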