
Let $T : \mathbb{R}^3 \to \mathbb{R}$ be a linear map (linear transformation?) given by $$Tx=x_1-2x_2+x_3.$$ Determine the derivative of this linear map and its matrix.

Apologies if I'm getting the name wrong. Wikipedia translated the page from my native language to this.

The matrix is simply $[1,-2,1]$, right?

However, to find the derivative we were given an expression that I couldn't find in our course material, which looks as follows: $$T(x_0+h)-T(x_0)=Lh+\|h\|\varepsilon(x_0,h)$$

The RHS reminds me of the numerator of a difference quotient, but I guess that's not what we're after here?

Edit:

Found the definition in our material: apparently a function $f : E \to \mathbb{R}^m$ (with $E \subseteq \mathbb{R}^n$) is differentiable at a point $x$ iff there exist a linear mapping $T_x : \mathbb{R}^n \to \mathbb{R}^m$ and a function $\varepsilon(h)$, defined in some neighborhood of zero, satisfying $$f(x+h)-f(x)=T_x(h)+\|h\|\varepsilon(h)$$ and $$\lim_{h \to 0} \varepsilon(h)=0.$$


2 Answers


By definition, a function $f:\mathbb{R}^n\to\mathbb{R}^m$ is differentiable at a point $x_0\in\mathbb{R}^n$ if there is a matrix $A\in M_{m\times n}(\mathbb{R})$ such that $f(x)=f(x_0)+A(x-x_0)+o(\|x-x_0\|)$ holds as $x\to x_0$. The matrix $A$ is then the derivative of $f$ at the point $x_0$. You can think of it as the best approximation of $f$ by a linear transformation in some neighborhood of the point $x_0$.

Now, if $T:\mathbb{R}^n\to\mathbb{R}^m$ is a linear transformation, then there is a matrix $A\in M_{m\times n}(\mathbb{R})$ such that $T(x)=Ax$ for all $x$; this is a standard result in linear algebra. It is not hard to guess that $T$ is differentiable everywhere and that its derivative is always equal to $A$ – after all, this is what we have in the theory of functions of a single variable, and the multivariate case is no different.

But a guess is not enough; we want a formal proof. So let $x_0\in\mathbb{R}^n$ be any point. For every $x\in\mathbb{R}^n$ we have:

$$T(x)-T(x_0)-A(x-x_0)=T(x-x_0)-A(x-x_0)=A(x-x_0)-A(x-x_0)=0.$$

And hence $T(x)=T(x_0)+A(x-x_0)+0$. The zero function is obviously $o(\|x-x_0\|)$ as $x\to x_0$, so we indeed have the required equality. The matrix $A$ is indeed the derivative of $T$ at the point $x_0$.

In your specific case the matrix corresponding to your transformation is $(1,-2,1)$, so it is the derivative at every point.
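A quick numerical sanity check (my own sketch, not part of the proof): for the question's map $T(x)=x_1-2x_2+x_3$ with candidate derivative $A=(1,-2,1)$, the remainder $T(x_0+h)-T(x_0)-Ah$ is identically zero, so $\varepsilon(h)=\text{remainder}/\|h\|$ vanishes for every $h\neq 0$:

```python
import math
import random

def T(x):
    # the question's linear map T(x) = x1 - 2*x2 + x3
    return x[0] - 2 * x[1] + x[2]

A = [1, -2, 1]  # candidate derivative (as a 1x3 matrix)

def remainder(x0, h):
    # T(x0 + h) - T(x0) - A h, which should be exactly 0 for a linear map
    Ah = sum(a * hi for a, hi in zip(A, h))
    x = [x0i + hi for x0i, hi in zip(x0, h)]
    return T(x) - T(x0) - Ah

random.seed(0)
x0 = [random.uniform(-5, 5) for _ in range(3)]
for scale in (1.0, 1e-3, 1e-9):
    h = [scale * random.uniform(-1, 1) for _ in range(3)]
    norm_h = math.sqrt(sum(hi * hi for hi in h))
    # eps(h) = remainder / ||h|| is 0 up to floating-point rounding
    print(abs(remainder(x0, h)) / norm_h)
```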


Your $T$ is an example of a linear functional, a linear function/transformation which sends vectors to $\mathbb{R}$ (or another base field).

Its partial derivatives are just like the derivatives of other functions you've seen: for example, if $f(x,y,z) = 2x + y - z$, then $\frac{\partial f}{\partial x} = 2$.
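For instance, a central-difference check (my own illustration, using the example $f$ above) recovers that constant partial derivative numerically:

```python
def f(x, y, z):
    # the example function f(x, y, z) = 2x + y - z
    return 2 * x + y - z

def partial_x(g, x, y, z, eps=1e-6):
    # central-difference approximation of dg/dx at (x, y, z)
    return (g(x + eps, y, z) - g(x - eps, y, z)) / (2 * eps)

print(partial_x(f, 0.7, -1.3, 2.0))  # ≈ 2.0 at any point, since df/dx is constant
```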

And its total derivative is its gradient, interpreted as a linear map $L = \nabla T: \mathbb{R}^3 \to \mathbb{R}$.
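As a tiny sketch of that reading (the names below are my own), the gradient $(1,-2,1)$ of the question's $T$, applied to a vector as a linear map, reproduces $T$ itself:

```python
def T(x):
    # the question's linear functional T(x) = x1 - 2*x2 + x3
    return x[0] - 2 * x[1] + x[2]

grad_T = (1, -2, 1)  # (dT/dx1, dT/dx2, dT/dx3), each constant

def L(h):
    # the derivative as a linear map R^3 -> R: h |-> grad_T . h
    return sum(g * hi for g, hi in zip(grad_T, h))

print(L((1.0, 1.0, 1.0)))  # 1 - 2 + 1 = 0.0, same as T((1, 1, 1))
```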

Does that make sense?

  • Indeed it does; however, the expression is still a bit odd. Would you happen to know where it comes from? –  Sep 02 '20 at 18:59