
I'm working on an exercise involving the linear transformation $T:\Bbb R^3\to\Bbb R^3$ defined by: $T(x,y,z) = (2x+y,\ y+z,\ 2x-z).$ I need to find bases $\alpha$ and $\beta$ of $\Bbb R^3$ such that the matrix representation $A_T(\alpha,\beta)$ of $T$ is in the form described in the following proposition from my textbook:

" Proposition 1.3: Let $ : → $ be a linear transformation. Then, there exist bases $$ for $$ and $$ for $$ such that: $$( , ) = \pmatrix{A_1&A_2\\A_3&A_4}$$ where $A_1$ is a $ × $ identity matrix, and the other blocks are zero matrices of appropriate sizes. "

For this problem, I am not allowed to use eigenvalues or determinants. I have no idea how I am supposed to construct the bases $\alpha$ and $\beta$. Some sources say to find bases for $\operatorname{im}(T)$ and $\ker(T)$, but I don't understand why.

A basis for $\operatorname{im}(T)$ is $\{(2,0,2),(1,1,0)\}$. A basis for $\ker(T)$ is $\{(-0.5,1,-1)\}$.
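For reference, here is how these follow directly from the formula for $T$ (just a verification of the claimed bases):
$$T(1,0,0) = (2,0,2), \qquad T(0,1,0) = (1,1,0), \qquad T(0,0,1) = (0,1,-1) = -\tfrac12(2,0,2) + (1,1,0),$$
$$T(-0.5,1,-1) = \big(2(-0.5)+1,\; 1+(-1),\; 2(-0.5)-(-1)\big) = (0,0,0),$$
so the images of the first two standard basis vectors span $\operatorname{im}(T)$, and $(-0.5,1,-1)$ does lie in $\ker(T)$.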

Before I can proceed to the next steps, can someone explain the reasoning behind the construction of these bases more clearly? I do not understand. Thank you in advance for your help.

Falala
  • It seems that there are two different definitions for $T$ here: one where $T$ is a linear map from $\mathbb R^3 \to \mathbb R^3$ (given in the first paragraph of your question), and one where $T$ is a linear map from $V$, which is not necessarily 3-dimensional, to $W$, which seems to be 3-dimensional (this definition is given in proposition 1.3). Which definition are you using here for $T$? – Mahmoud Dec 02 '24 at 16:34
  • @mhdadk Here, I am using $T: \mathbb R^3 \to \mathbb R^3$. The other definition is just to give you the general proposition from the textbook. If we apply the proposition to our exercise, I guess that would mean that $A_T$ should equal [ (1,0,0) (0,1,0) (0,0,1) ] or [ (1,0,0) (0,1,0) (0,0,0) ] – Falala Dec 02 '24 at 16:39
  • Reasoning backwards may help: for $\alpha,\beta$ to be a solution, we must have, for some $k\in\{0,1,2,3\}$: $\alpha_{k+1},\dots,\alpha_3$ is a basis of $\ker T$ and $\beta_1,\dots,\beta_k$ is a basis of $\operatorname{im}T$. Moreover, $T(\alpha_i)=\beta_i$ for $i=1,\dots,k$. Please edit your post to include at least a calculation of $\ker T$ and $\operatorname{im}T$, otherwise it risks being closed for lack of "context". – Anne Bauval Dec 02 '24 at 17:06
  • So you now have two of three basis vectors for the codomain and one of three basis vectors for the domain. Next step: Can you find vectors that map to $(2,0,2)$ and $(1,1,0)$? – Ted Shifrin Dec 02 '24 at 19:42
  • Are you sure that this is the correct form of $T$? I know you can't use eigenvalues for your purposes, but it's still a correct method. But computing them gives $\lambda = 1\pm \sqrt{2},\,0$, which contradicts Proposition 1.3. – Andrew Dec 03 '24 at 18:32
  • @Andrew the OP is being asked to come up with Rank Normal Form. It has nothing to do with eigenvalues; the basis for the co-domain will typically be different than the basis for the domain. – user8675309 Dec 03 '24 at 18:46

1 Answer


Have you learned the Rank–Nullity theorem? It states that $$\dim V = \dim \operatorname{im} T + \dim \ker T,$$ where $\dim$ refers to the dimension of the corresponding vector space. You can verify this in your case, as the image has dimension $2$ and the kernel has dimension $1$.
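In this concrete case (using the bases computed in the question), the check reads
$$\underbrace{3}_{\dim \mathbb R^3} \;=\; \underbrace{2}_{\dim \operatorname{im} T} \;+\; \underbrace{1}_{\dim \ker T},$$
which already tells you that the identity block $A_1$ in Proposition 1.3 will be $2 \times 2$.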

To construct our bases, we first start with a basis for $\ker(T)$, say $\{v_3\}$. This is so that we obtain the zero columns on the right. We then extend it to a basis $\{v_1, v_2, v_3\}$ for $V$ (in this case $\mathbb{R}^3$). Now we choose a basis for $\operatorname{im}(T)$, namely $\{w_1, w_2\} = \{Tv_1, Tv_2\}$. We choose this basis specifically so that the upper-left block is an identity matrix.
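For the $T$ in the question, one possible concrete choice (the extension vectors $v_1, v_2$ here are just convenient picks, not the only option) is
$$v_3 = (-0.5, 1, -1) \in \ker(T), \qquad v_1 = (1,0,0), \qquad v_2 = (0,1,0),$$
which one checks are linearly independent, hence a basis of $\mathbb R^3$. Then
$$w_1 = Tv_1 = (2,0,2), \qquad w_2 = Tv_2 = (1,1,0).$$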

In order to prove that $\{Tv_1, Tv_2\}$ is indeed a basis for $\operatorname{im}(T)$, we first note that it is linearly independent. (Can you figure out why?) And by Rank–Nullity, we know it is the correct number of linearly independent vectors to form a basis. Finally, we extend this to a basis $\{w_1, w_2, w_3\}$ of $W$.
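Continuing the concrete example, $w_3$ can be any vector completing $\{w_1, w_2\}$ to a basis of $\mathbb R^3$; for instance (my pick, not forced)
$$w_1 = (2,0,2), \qquad w_2 = (1,1,0), \qquad w_3 = (0,0,1).$$
Indeed, $a(2,0,2) + b(1,1,0) + c(0,0,1) = (0,0,0)$ forces $b = 0$ from the second coordinate, then $a = 0$ from the first, then $c = 0$ from the third, so these three vectors form a basis.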

The matrix of $T$ with respect to the bases $\alpha, \beta$ is the matrix whose $n$th column is the coordinate vector of $T(\alpha_n)$ with respect to $\beta$. By construction, our matrix therefore has the required form.
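With the concrete choices above, $Tv_1 = w_1$, $Tv_2 = w_2$ and $Tv_3 = 0$, so
$$A_T(\alpha, \beta) = \pmatrix{1&0&0\\0&1&0\\0&0&0},$$
exactly the form of Proposition 1.3 with a $2 \times 2$ identity block.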

altwoa