I've just begun the study of linear transformations, and I'm still trying to grasp the concepts fully.

One theorem in my textbook is as follows:

Let $V$ and $W$ be vector spaces over the field $F$, and suppose that $(v_1, v_2, \ldots, v_n)$ is a basis for $V$. For $w_1, w_2, \ldots, w_n$ in $W$, there exists exactly one linear transformation $T: V \rightarrow W$ such that $T(v_i) = w_i$ for $i=1,2,\ldots,n$.

The author doesn't explain it, but gives the proof right away (which I understand). But I'm trying to figure out what this theorem actually states, and why it is so important. So in words it means: if I have a basis for my domain, and a basis for my codomain, then there exists just one linear transformation that links both of them.

So let's say I have a linear map $T: \mathbb{R}^2 \rightarrow \mathbb{R}^2$, with $T(1,0) = (1,4)$ and $T(1,1)=(2,5)$. So because $(1,0)$ and $(1,1)$ form a basis for my domain, is it an implication of the theorem that $(1,4)$ and $(2,5)$ automatically form a basis for my codomain?

Kamil
  • post the name of the theorem – Irrational Person Dec 24 '14 at 17:33
  • You don't state explicitly that $w_1, w_2, \ldots, w_n$ is a basis for $W$ in the statement you're asking about (third paragraph). Is this the case? – John Dec 24 '14 at 17:34
  • It has no name, in my book it's just 'theorem 2.6'. A paragraph above I read: "One of the most important properties of a linear transformation is that it is completely determined by its action on a basis." So it probably has something to do with that. The author is S. Friedberg - Linear Algebra. – Kamil Dec 24 '14 at 17:36
  • It is definitely not assumed that $w_1,\ldots,w_n$ are a basis of $W$. In fact this most likely is not the right number of vectors to have a basis; that is only the case if $\dim V=\dim W$. – Marc van Leeuwen Dec 24 '14 at 17:40
  • Dear @Marc van Leeuwen, is it sufficient to say that $T_1=T_2$ if $T_1v_1=T_2v_1$? ($v_1$ is one of the basis vectors of $V$.) – When May 13 '15 at 07:54
  • @When No. Having the same value at one of the basis vectors is definitely not enough to conclude equality; the values at other basis vectors are unrelated to the values at $v_1$ and could well differ. – Marc van Leeuwen May 13 '15 at 09:13
  • Got it, appreciate it! – When May 13 '15 at 09:17

4 Answers


The theorem says that any map from the finite set $\{v_1,\ldots,v_n\}$ to a vector space $W$ can be uniquely extended to a linear map $V\to W$; this is true if (and only if) $[v_1,\ldots,v_n]$ forms a basis of $V$. Its importance is that it allows, at least in the case where $V,W$ are finite dimensional, any linear map to be represented by finite information, namely by a matrix, and that every matrix so represents some linear map. In order to get there, we must also choose a basis in $W$; then by expressing each of the images $f(v_1),\ldots,f(v_n)$ in that basis, we find the columns of the matrix representing $f$ (with respect to $[v_1,\ldots,v_n]$ and the chosen basis of $W$). Note that this information only explicitly describes those $n$ images; the actual linear map is implicitly defined as its unique linear extension to all of $V$. The existence part of the theorem ensures that we never need to worry whether there is actually a linear transformation that corresponds to a freely chosen matrix: one can always map $v_j$ to the vector represented by column $j$, for all $j$ at the same time.

It is only thanks to this theorem that we can work with matrices as if we work with the linear transformation they encode; as long as we fix our bases of $V$ and $W$, we have a bijection between linear transformations $V\to W$ on one hand and $m\times n$ matrices (where $m=\dim W$) on the other. In fact this bijection is itself linear, so an isomorphism of the $F$-vector spaces $\mathcal L(V,W)$ and $\operatorname{Mat}_{m,n}(F)$.
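As a concrete numerical sketch of this correspondence, here is how the question's own example plays out in Python (the names `B`, `W`, `apply_T`, and `M` are ad hoc, nothing standard):

```python
import numpy as np

# The question's data: (v1, v2) = ((1,0), (1,1)) is a basis of R^2, and
# we prescribe T(v1) = (1,4), T(v2) = (2,5).
B = np.column_stack([(1, 0), (1, 1)])   # columns: the basis vectors v1, v2
W = np.column_stack([(1, 4), (2, 5)])   # columns: the prescribed images w1, w2

def apply_T(x):
    """Evaluate the unique linear T with T(v_i) = w_i at an arbitrary x."""
    c = np.linalg.solve(B, x)   # coordinates of x in the basis (v1, v2)
    return W @ c                # linearity forces T(x) = c_1 w_1 + c_2 w_2

# Relative to the standard bases, T is represented by the matrix W B^{-1}.
M = W @ np.linalg.inv(B)
print(apply_T(np.array([3.0, 2.0])))   # [ 5. 14.]
print(M @ np.array([3.0, 2.0]))        # the same vector
```

The matrix `M` carries exactly the finite information $T(v_1), T(v_2)$, re-expressed in the standard bases, and `apply_T` is the unique linear extension the theorem promises.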

  • Is my statement of the theorem equivalent with the following? Let $V$ and $W$ be two vector spaces over $F$ and let $T: V \rightarrow W$ be a linear transformation. If $\beta = (u_1, ..., u_n)$ is an ordered basis of $V$, then for each $v \in V$, the vector $T(v)$ is a linear transformation of $T(u_1), ..., T(u_n) \in W$. That is, we have full information of $T$ if we know $T(u_1),...,T(u_n) \in W$, the image of basis vectors in W. – Kamil Dec 24 '14 at 19:39
  • @Kamil: Yes that's it. (But the second "linear transformation" should be linear combination). – Marc van Leeuwen Dec 24 '14 at 21:11

Here is a corollary of (the "uniqueness" part) of the theorem that reflects the way that it is often used:

Suppose that $T_1$ and $T_2$ are linear transformations from $V$ to $W$ such that $T_1(v_i) = T_2(v_i)$ for $i = 1,\dots,n$, where $(v_1,\dots,v_n)$ is a basis for $V$. Then $T_1 = T_2$.

It is common to ask whether two linear transformations are the same, and this theorem gives us a good way to check.
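Here is a toy illustration of that pattern (the reflection example is my own, not from the question): to show that reflecting $\mathbb{R}^2$ across the line $y=x$ twice gives the identity map, it is enough to compare the two maps on a basis.

```python
import numpy as np

# Reflection across the line y = x, and the identity, as matrices.
R = np.array([[0.0, 1.0], [1.0, 0.0]])
I = np.eye(2)

# By the uniqueness corollary, agreement on a basis settles the question.
basis = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
agree = all(np.allclose(R @ (R @ v), I @ v) for v in basis)
print(agree)   # True, hence R composed with R equals the identity map
```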

The existence part can be phrased like so:

For any basis $(v_1,\dots,v_n)$ of $V$ and any vectors $w_1,\dots,w_n \in W$, we can construct a linear transformation $T$ such that $T(v_i) = w_i$.

I find that it is helpful to consider the existence and uniqueness aspects separately.

Note that $T(v_1),\dots,T(v_n)$ will generally not be a basis for the codomain. For example, consider the map $T$ given by $T(x,y) = (x,0)$. We know that $((1,0),(0,1))$ is a basis of $\mathbb{R}^2$, but $(T(1,0),T(0,1)) = ((1,0),(0,0))$ is not a basis.
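A quick numerical check of this example (the variable name is ad hoc):

```python
import numpy as np

# Images of the standard basis under T(x, y) = (x, 0).
images = np.column_stack([(1, 0), (0, 0)])   # columns: T(1,0), T(0,1)
print(np.linalg.matrix_rank(images))         # 1, so the images are not a basis of R^2
```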

In fact, we can deduce from this theorem that if we send a basis of the domain to a basis of the codomain, the resulting linear transformation is an isomorphism. That is, any two finite-dimensional spaces of the same dimension are isomorphic.

Ben Grossmann

No, because you don't even know the dimension of the codomain. You do get that $w_i$ are a spanning set for the image of $T$, whatever that is. This theorem as stated doesn't imply that, but it would follow from the proof. (Any vector $b$ in the image is mapped to from some vector $x$ in the domain; $x$ is some linear combination of the $v_i$; by linearity of $T$, $Tx$ is a linear combination of the $w_i$; so $b=Tx$ is a linear combination of the $w_i$.)
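To see the spanning claim concretely, here is a sketch of the extreme case raised in the comments below, where all the $w_i$ coincide (the setup is ad hoc):

```python
import numpy as np

# Take V = R^3 and send every basis vector to the same w = (1, 2); the
# theorem still produces exactly one linear T.
W = np.column_stack([(1, 2), (1, 2), (1, 2)])  # columns: w_1 = w_2 = w_3
x = np.array([3.0, -1.0, 2.0])                 # coordinates of a vector in the basis v_i
print(W @ x)                                   # T(x) = [4. 8.], a multiple of (1, 2)
print(np.linalg.matrix_rank(W))                # 1: the image is the line span{(1, 2)}
```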

Ultimately this theorem is simpler than all that, though. It just says that you can uniquely define a linear map by its action on a basis of the domain.

Ian
  • No, it's not necessarily true that $w_1,\dots,w_n$ form a basis for the image either, because there's nothing in the problem statement that says they're linearly independent. For example, the $w_i$'s could all be the same. – Jack Lee Dec 24 '14 at 17:36
  • @JackLee I caught that before seeing your comment and already changed it. Thanks. – Ian Dec 24 '14 at 17:37
  • @Ian. Ok, so you say "by linearity of $T$, the image of any linear combination of the $v_i$ is a linear combination of the $w_i$; so $b$ is a linear combination of the $w_i$." So you mean that, by choosing unique scalars (the coordinates with respect to a basis in the domain), they determine one vector in the domain, and by linearity, the same scalars determine another vector in the codomain? But if that's the case, what does linear mapping have to do with that? – Kamil Dec 24 '14 at 17:48
  • @Kamil We get $T \left ( \sum_{i=1}^n c_i v_i \right ) = \sum_{i=1}^n c_i T(v_i) = \sum_{i=1}^n c_i w_i$ from linearity. We needed linearity to "distribute" $T$ this way. This represents any $b$ in the image as a combination of the $w_i$, so the $w_i$ are a spanning set. – Ian Dec 24 '14 at 17:54

Here $T$ is defined on a basis $\{v_1, v_2, \ldots, v_n\}$ by prescribing the images $Tv_1 = w_1, Tv_2 = w_2, \ldots, Tv_n = w_n$ and then extending linearly to every vector: $$T(x_1v_1+x_2v_2+\cdots+x_nv_n) = x_1w_1+x_2w_2+\cdots+x_nw_n.$$ In all of this, nothing is said about the vectors $\{w_1, w_2, \ldots, w_n\}$. All you get is a linear transformation $T \colon V \rightarrow W$; for all we know, all the $w_j$'s could be the zero vector.

Further constraints on $T$, like $T$ being one-to-one or onto, will translate into constraints on the $w_j$'s; these also show up in terms of $\operatorname{rank}(T)$.
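As a sketch of how those constraints can be read off numerically (reusing the question's vectors; the variable names are ad hoc): with $(v_1,\ldots,v_n)$ a basis, $T$ is one-to-one exactly when the $w_j$ are linearly independent, and onto exactly when they span $W$, both visible in the rank of the matrix of $w_j$'s.

```python
import numpy as np

# Columns: the w_j from the question, w_1 = (1, 4) and w_2 = (2, 5).
W_cols = np.column_stack([(1, 4), (2, 5)])
dimW, n = W_cols.shape
r = np.linalg.matrix_rank(W_cols)
print("one-to-one:", r == n)     # True: the w_j are linearly independent
print("onto:      ", r == dimW)  # True: the w_j span R^2
```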

abel