7

So a function $f: E \to F$ between normed spaces $E, F$ is called differentiable at $x \in E$ if there exists a bounded linear map $Df(x): E \to F$ such that for every $h \in E$ we have $$f(x+h)=f(x)+Df(x)h + o(\|h\|). \tag{1}$$ If $f$ is differentiable at every $x \in E$ and the map $Df: x \mapsto Df(x)$ is itself differentiable at every $x \in E$, we get analogously $$Df(x+e)=Df(x)+D^2f(x)e+o(\|e\|). \tag{2}$$ Then $f$ is called twice differentiable, and for every $h\in E$ we have the "Taylor expansion of second degree" $$f(x+h)=f(x)+Df(x)h+\frac{1}{2}D^2f(x)[h] + o(\|h\|^2), \tag{3}$$ where $D^2f(x)[h]:=(D^2f(x)h)h$ for better readability.
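For a quick sanity check of this notation, take for instance $E$ a real Hilbert space, $F=\Bbb{R}$ and $f(x)=\|x\|^2$; here $(1)$–$(3)$ can be read off directly, and $(3)$ even holds with zero remainder: $$f(x+h)=\|x\|^2+2\langle x,h\rangle+\|h\|^2, \qquad Df(x)h=2\langle x,h\rangle, \qquad D^2f(x)e=2\langle e,\cdot\rangle,$$ so that $D^2f(x)[h]=(D^2f(x)h)h=2\|h\|^2$ and $f(x+h)=f(x)+Df(x)h+\tfrac{1}{2}D^2f(x)[h]$ exactly.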

I have two questions:

  • How can $(3)$ be proven without resorting to the "standard proof" using integrals? I want to show it using only the linear approximations given in $(1)$ and $(2)$. Inserting $(2)$ into $(1)$ doesn't result in anything useful, though. Can this be done?
  • Can $(3)$ be used as an alternative definition of twice-differentiability? Analogously, what about the general case of $n$-times differentiability: $$ f(x+h) = f(x) + \sum_{j=1}^{n} \frac{1}{j!} D^jf(x)[h] + o(\|h\|^n)? \tag{4}$$
Jannik Pitt
  • If integrals are forbidden, is it fair game to use the mean value inequality? – Gabriel Romon Jun 26 '19 at 19:57
  • I'd say that in this case, using Landau notation makes things more difficult, and not less. As you don't have a precise formula for $o(\|h\|)$, you can't manipulate $(1)$ and $(2)$ to let the error terms cancel. – Sudix Jun 28 '19 at 05:48
  • Please answer my question. – Gabriel Romon Jun 28 '19 at 19:31
  • @GabrielRomon Sorry, I didn't see your comment. Just using the non-integral version of the mean-value inequality would be fine. – Jannik Pitt Jun 28 '19 at 21:30

2 Answers

4

A few days back I wrote an answer with some detail about Taylor polynomials for maps between Banach spaces. You can see my answer here. What I proved is that (I'm sorry about the differences in notation)

Taylor Expansion Theorem:

Let $V$ and $W$ be Banach spaces over the field $\Bbb{R}$, let $U$ be an open subset of $V$, and fix a point $a \in U$. Let $f:U \to W$ be a given function which is $n$ times differentiable at $a$ (in the Fréchet sense). Define the Taylor polynomial $T_{n,f}:V \to W$ by \begin{equation} T_{n,f}(h) = f(a) + \dfrac{df_a(h)}{1!} + \dfrac{d^2f_a(h)^2}{2!} + \dots + \dfrac{d^nf_a(h)^n}{n!}, \end{equation} where $d^kf_a(h)^k$ is shorthand for the $k$-linear map $d^kf_a$ evaluated at $(h, \dots, h)$ (your $D^kf(x)[h]$). Then, $f(a+h) - T_{n,f}(h) = o(\lVert h \rVert^n)$.

Explicitly, the claim is that for every $\varepsilon > 0$, there is a $\delta > 0$ such that for all $h \in V$, if $\lVert h \rVert < \delta$, then \begin{equation} \lVert f(a+h) - T_{n,f}(h) \rVert \leq \varepsilon \lVert h \rVert^{n}. \end{equation}

The proof is pretty short if you know what you're doing. The idea is to use induction and, most importantly, the mean-value inequality for maps between Banach spaces. I don't think it is possible to derive $(3)$ directly from $(1)$ and $(2)$ alone, because $(2)$ tells you how the derivative $Df$ changes, $(1)$ tells you how the function $f$ changes to first order, and $(3)$ is ultimately a statement about how much $f$ changes to second order. So you somehow have to relate changes in $Df$ back to changes in $f$... and this is, roughly speaking, what the mean-value inequality does.
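For the $n=2$ case in your notation, here is a minimal sketch of how the argument goes, assuming the mean-value inequality (in its non-integral form) and the symmetry of the bilinear map $(h,k) \mapsto (D^2f(x)h)k$ (which holds whenever $f$ is twice differentiable at $x$). Define $g(h) := f(x+h) - f(x) - Df(x)h - \frac{1}{2}(D^2f(x)h)h$. The last term is a continuous quadratic map, so $g$ is differentiable with \begin{equation} Dg(h) = Df(x+h) - Df(x) - D^2f(x)h, \end{equation} where symmetry was used to differentiate the quadratic term. By $(2)$, given $\varepsilon > 0$ there is a $\delta > 0$ such that $\lVert Dg(k) \rVert \leq \varepsilon \lVert k \rVert$ whenever $\lVert k \rVert \leq \delta$. Since $g(0) = 0$, the mean-value inequality applied on the segment from $0$ to $h$ gives, for $\lVert h \rVert \leq \delta$, \begin{equation} \lVert g(h) \rVert \leq \sup_{0 \leq t \leq 1} \lVert Dg(th) \rVert \, \lVert h \rVert \leq \varepsilon \lVert h \rVert^2, \end{equation} which is exactly $(3)$. The general statement is the same idea, by induction on $n$ applied to $Df$.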

The proof I showed in my other answer is pretty much from Henri Cartan's excellent book Differential Calculus. Cartan's book also has a proof of the mean-value inequality which doesn't rely on integrals. Alternatively, you can take a look at Loomis and Sternberg's book Advanced Calculus. There, they prove the mean-value inequality in a rather elementary way without integrals, and the proof is relatively short: it is Theorem 7.4 of Chapter 3 (which uses Theorem 7.3), on pages 148–149 of the book (I prefer this proof to Cartan's).

For your other question, I assume you mean the following:

Does the existence of a polynomial $P$ which agrees with $f$ up to order $2$ at $x$ imply that $f$ is twice differentiable at $x$? Or more precisely, does the existence of a continuous linear map $A_1:E \to F$ and a symmetric continuous bilinear map $A_2:E \times E \to F$ such that for all $h \in E$, \begin{equation} f(x+h) = f(x) + A_1(h) + A_2(h,h) + o(\lVert h \rVert^2) \end{equation} imply that $f$ is twice Fréchet differentiable at $x$?

The answer to this question is no. We can see this even in the single-variable case (the following example is from Spivak's Calculus, page 413, 3rd edition). Take $E=F=\Bbb{R}$, and define $f: \Bbb{R} \to \Bbb{R}$ by \begin{equation} f(x) = \begin{cases} x^{n+1}& \text{if $x$ irrational} \\ 0 & \text{if $x$ rational} \end{cases} \end{equation} ($n\geq 2$). Then choose $x=0$ and the zero polynomial $P \equiv 0$. It is easy to verify that \begin{equation} f(0 + h) = 0 + o(|h|^n). \end{equation} However, if $a \neq 0$, then $f'(a)$ doesn't exist, so $f''(0)$ is not even defined. Hence, what this shows is that the existence of a well-approximating polynomial does not guarantee that the function is sufficiently differentiable.
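Just to spell out the two claims above: since $|f(h)| \leq |h|^{n+1}$ for every $h$, we get $\frac{|f(0+h)|}{|h|^n} \leq |h| \to 0$ as $h \to 0$, which is exactly $f(0+h) = o(|h|^n)$. And $f'(a)$ fails to exist for $a \neq 0$ because $f$ is not even continuous there: near a nonzero rational $a$ there are irrationals where $f$ is close to $a^{n+1} \neq 0$, while near an irrational $a$ there are rationals where $f$ vanishes.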

peek-a-boo
  • What if $f$ is assumed to be continuous? Can there still be a second-order polynomial approximating $f$ without it being twice-differentiable? – Jannik Pitt Jun 29 '19 at 11:44
  • @JannikPitt There is a "partial converse" to Taylor's theorem, if you assume enough regularity. For the one-dimensional case, refer to this post: https://mathoverflow.net/questions/88501/converse-of-taylors-theorem. For the Banach space version, refer to either Abraham and Robbin, Transversal Mappings and Flows (Theorem $2.1$), or Abraham, Marsden, and Ratiu, Manifolds, Tensor Analysis, and Applications, Supplement 2.4B (3rd edition). While the statement of the theorem is very nice, the proof seems pretty involved. If you want, I could just add the statement of the converse into my answer. – peek-a-boo Jun 29 '19 at 12:10
-3

The differential $Df(x)$ is a linear map that approximates your function $f$, according to your relation $(1)$, and that linear map depends on $x$ (the point around which you consider the Taylor expansion). The map $Df$, which associates to each point a linear map, does not itself have to be linear. The second differential $D^2f(x)$ is a linear map that associates to each vector a linear map, and so on.

Your relation $(1)$ tells us that the differential of $f$ exists (a definition). Your relation $(2)$ tells us that the differential of $Df$ exists (a definition). Your relation $(3)$ tells us that $f$ has a valid finite second-order Taylor expansion at $x$ (also a definition). From $(1)$ and $(2)$ you cannot derive $(3)$. That is not always possible even for real functions, even more so in this abstract setting. In a valid, finite Taylor expansion, the remainder must grow slower than the last term of the expansion. And no, $(3)$ is not equivalent to second-order differentiability of $f$.

  • Careful: The object $Df(x)$ is a linear map, while $x \mapsto Df(x)$ is a function taking vectors to linear maps, which in general is not linear. So $Df(x+y) \neq Df(x)+Df(y)$ (why would they be equal?). – Jannik Pitt Jun 28 '19 at 10:56
  • I'll edit my answer in order to clarify. – Cristian Dumitrescu Jun 28 '19 at 11:30
  • The second derivative is a bilinear map (technically it isn't, but it can be viewed as such); how should that approximate $f-Df(x)$? – Jannik Pitt Jun 28 '19 at 21:31
  • I edited my answer @JannikPitt. Have a look. So from $(1)$ and $(2)$ you cannot derive $(3)$, and $(3)$ is not equivalent to twice-differentiability. Relation $(3)$, by definition, tells us that $f$ has a valid second-order Taylor expansion, that's all. It's a definition. It took me a little bit of time, I'm a bit rusty. – Cristian Dumitrescu Jun 28 '19 at 21:57
  • Your confusion might be caused by the fact that the existence of a first order Taylor expansion is very similar to the definition of differentiability. That connection is broken for higher orders. – Cristian Dumitrescu Jun 28 '19 at 22:04