13

For all finite-dimensional vector spaces and all linear transformations between them, how can one prove, from the definition of a linear transformation, that every linear transformation can be computed using a matrix?

sinθ
  • 323
  • 1
  • 3
  • 8
  • Think of each vector as decomposed into the standard basis, i.e., an $n$-dimensional vector $\mathbf{w}$ equals $\sum_{i=1}^{n} w_{i} \mathbf{e}_{i}$. What are the properties of linear transformations? – megas Feb 28 '15 at 22:05
  • Careful, this is true for finite-dimensional vector spaces only – Davide F. Feb 28 '15 at 22:06
  • @DavideF. That's above my pay-grade – sinθ Feb 28 '15 at 22:08
  • Yeah, I was only suggesting editing the statement "For all vector spaces" to "For all finite vector spaces". Also, once you understand the answer to your question, you will understand why. – Davide F. Feb 28 '15 at 22:10
  • What do you mean by "calculated"? – snarski Feb 28 '15 at 22:16
  • The proof would start by writing the definition of a linear transformation. There might be subtle differences in presentation depending on which author you get this from. – David K Feb 28 '15 at 22:19
  • @snarski Matrix multiplication with a coordinate vector – sinθ Feb 28 '15 at 22:28

3 Answers

18

If you have a linear transformation $L : X \rightarrow Y$, where $X$ and $Y$ are finite-dimensional linear spaces, then you choose a basis $\{ x_{i} \}_{i=1}^{n}$ of $X$ and a basis $\{ y_{j} \}_{j=1}^{m}$ of $Y$, and write
$$ Lx_{i} = \alpha_{1,i}y_{1}+\alpha_{2,i}y_{2}+\cdots+\alpha_{m,i}y_{m}, \qquad i = 1, \ldots, n. $$
The constants $\alpha_{j,i}$ are unique. Every $x \in X$ can be written uniquely as
$$ x = \beta_1 x_1 + \beta_2 x_2 + \cdots + \beta_n x_n. $$
By linearity,
$$ \begin{align} Lx & = \beta_1 Lx_1 + \beta_2 Lx_2 + \cdots + \beta_n Lx_n \\ \\ & = \beta_1 (\alpha_{1,1} y_1 + \alpha_{2,1}y_2 + \cdots + \alpha_{m,1}y_m) \\ & + \beta_2 (\alpha_{1,2} y_1 + \alpha_{2,2}y_2 + \cdots + \alpha_{m,2}y_m) \\ & + \cdots \\ & + \beta_n (\alpha_{1,n} y_1 + \alpha_{2,n}y_2 + \cdots + \alpha_{m,n}y_m) \\ \\ & = (\alpha_{1,1}\beta_1+\alpha_{1,2}\beta_2+\cdots+\alpha_{1,n}\beta_{n})y_1 \\ & + (\alpha_{2,1}\beta_1+\alpha_{2,2}\beta_2+\cdots+\alpha_{2,n}\beta_{n})y_2 \\ & + \cdots \\ & + (\alpha_{m,1}\beta_1+\alpha_{m,2}\beta_2+\cdots+\alpha_{m,n}\beta_{n})y_m. \end{align} $$
So the action of $L$ is uniquely determined by the matrix $[\alpha_{j,i}]$, as follows: start with $x \in X$, write $x = \sum_{i=1}^{n}\beta_{i}x_{i}$, then perform the matrix multiplication $[\alpha_{j,i}][\beta_{i}]$, which gives $[\gamma_{j}]$, and finally reconstruct $Lx = \gamma_1 y_1+\gamma_2 y_2 + \cdots + \gamma_m y_m$. Therefore, $L$ is completely determined by the $m \times n$ matrix $[\alpha_{j,i}]$ defined above. Conversely, every such matrix determines a linear transformation $L$ whose matrix representation is the given matrix.
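As a concrete numerical sketch of this construction (my own illustration, not part of the original argument; the function name `matrix_of` and the example map are invented, and it assumes the standard bases of $\mathbb{R}^n$ and $\mathbb{R}^m$): column $i$ of the matrix holds the coordinates of $Lx_i$.

```python
import numpy as np

def matrix_of(L, n, m):
    """Matrix [alpha_{j,i}] of a linear map L: R^n -> R^m with respect to
    the standard bases: column i holds the coordinates of L(e_i)."""
    A = np.zeros((m, n))
    for i in range(n):
        e = np.zeros(n)
        e[i] = 1.0            # standard basis vector e_i
        A[:, i] = L(e)        # alpha_{j,i} = j-th coordinate of L(e_i)
    return A

# Example: L(x, y) = (x + 2y, 3x, x - y), a linear map R^2 -> R^3.
L = lambda v: np.array([v[0] + 2 * v[1], 3 * v[0], v[0] - v[1]])
A = matrix_of(L, n=2, m=3)

v = np.array([5.0, -1.0])
assert np.allclose(A @ v, L(v))  # multiplying by A reproduces L
```

The final assertion checks exactly the claim above: computing $[\alpha_{j,i}][\beta_i]$ and reading off the coordinates gives the same vector as applying $L$ directly.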

Disintegrating By Parts
  • 91,908
  • 6
  • 76
  • 168
3

Each linear transformation $T: V \to W$ is completely determined by what it does to a set of basis vectors of $V$.

That is, suppose we have a linear transformation $T: V \to W$ where $V$ and $W$ are finite-dimensional vector spaces with bases $\mathcal{B}_V = \{e_1, e_2, \ldots, e_n\}$ and $\mathcal{B}_W = \{f_1, f_2, \ldots, f_m\}$ respectively.

Since $T$ is linear, if we know $T(e_i)$ for each $e_i \in \mathcal{B}_V$, then we know $T(c_1e_1 + c_2e_2 + \cdots + c_ne_n)$ for any scalars $c_1, \ldots, c_n$. Thus, we just need to know what $T(e_i)$ is for each $i \in \{1, 2, \ldots, n\}$.

Of course, $T(e_i)$, as a vector of $W$, can be uniquely represented as a linear combination of the basis vectors in $\mathcal{B}_W$. That is, for each basis vector $e_i \in \mathcal{B}_V$, there exist $m$ scalars $a_{i,1}, a_{i,2}, \ldots, a_{i,m}$ such that

$$T(e_i) = a_{i,1}f_1 + a_{i, 2}f_2 + \cdots + a_{i, m}f_m.$$

This is the idea behind why an $m \times n$ table of numbers determines any linear map between finite-dimensional vector spaces. What I've written is certainly not watertight, but hopefully it gives you some idea of why it works.
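To see the "determined by its values on a basis" idea numerically, here is a minimal sketch (the images $T(e_i)$ and the test vector are chosen arbitrarily for the illustration): once $T(e_1)$ and $T(e_2)$ are fixed, linearity alone computes $T(v)$ for every $v$.

```python
import numpy as np

# Images of the basis vectors, chosen arbitrarily for this demo.
Te = [np.array([1.0, 2.0, 0.0]),    # T(e_1)
      np.array([0.0, 1.0, -1.0])]   # T(e_2)

def T(v):
    """Evaluate T via linearity: T(c_1 e_1 + c_2 e_2) = c_1 T(e_1) + c_2 T(e_2)."""
    return sum(c * Tei for c, Tei in zip(v, Te))

v = np.array([3.0, -2.0])
print(T(v))  # [3. 4. 2.]
```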

pjs36
  • 18,400
0

The most general way to show this is to prove that the vector space $L(U,V)$ of linear transformations between finite-dimensional vector spaces $U$ and $V$ over a field $F$ is isomorphic to the vector space of $m \times n$ matrices with coefficients in $F$, where $n = \dim U$ and $m = \dim V$. Although the proof isn't difficult, it needs considerably more machinery than just the definition of a linear transformation, and building up to the isomorphism theorem takes a number of pages. An excellent and self-contained presentation of linear transformations and matrices, ending with the isomorphism theorem, can be found in Chapter 5 of the beautiful online textbook by S. Gill Williamson of the University of California, San Diego; it's Theorem 5.13. But my strongest advice to you is to work through the entire chapter: not only is it a really beautiful and clear presentation of the structure of the vector space $L(U,V)$, you'll only fully understand the proof after working through the chapter. It's worth the effort.
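For concreteness, here is a sketch of the statement being referenced, in the notation of the first answer (my notation, which may differ from Williamson's): for fixed bases $\{u_i\}_{i=1}^{n}$ of $U$ and $\{v_j\}_{j=1}^{m}$ of $V$, the map
$$ \Phi : L(U,V) \to F^{m \times n}, \qquad \Phi(T) = [\alpha_{j,i}], \quad \text{where } Tu_i = \sum_{j=1}^{m} \alpha_{j,i} v_j, $$
is linear and bijective, so $L(U,V) \cong F^{m \times n}$ as vector spaces and, in particular, $\dim L(U,V) = mn$.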