14

For $k \in \Bbb{N} = \{1,2,3,\dots\}$ and $\alpha \in (0,1)$, let us define $$ C^{k,\alpha} := \{ f : \Bbb{R} \to \Bbb{R} \,:\, f \in C^k \text{ with } f, f', \dots, f^{(k)} \text{ bounded and } [f^{(k)}]_{C^\alpha} < \infty \}, $$ where $$ [g]_{C^\alpha} := \sup_{x,y \in \Bbb{R}, x \neq y} \frac{|g(x)-g(y)|}{|x-y|^\alpha} \, . $$
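To fix ideas, here is a small numerical sketch (my own illustration, not needed for the question) that estimates the seminorm $[g]_{C^\alpha}$ on a finite grid; a grid of course only gives a lower bound for the supremum, and the choice $g(x) = \sqrt{|x|}$ with $\alpha = 1/2$ is just an example.

    import numpy as np

    def holder_seminorm(g, alpha, xs):
        # Estimate [g]_{C^alpha} = sup_{x != y} |g(x) - g(y)| / |x - y|^alpha,
        # restricted to the grid points xs (hence only a lower bound).
        vals = g(xs)
        dx = np.abs(xs[:, None] - xs[None, :])
        dg = np.abs(vals[:, None] - vals[None, :])
        mask = dx > 0
        return np.max(dg[mask] / dx[mask] ** alpha)

    xs = np.linspace(-1.0, 1.0, 1001)
    print(holder_seminorm(lambda x: np.sqrt(np.abs(x)), 0.5, xs))  # ~ 1.414 = sqrt(2), attained for x = -y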

It is not hard to see by using Taylor's formula for $f \in C^{k,\alpha}$ that for each $x \in \Bbb{R}$, there is a polynomial $P_x$ of degree $k$ (namely, the Taylor polynomial of $f$) satisfying $$ |f(x+h) - P_x (h)| \leq C \cdot |h|^{k+\alpha} \qquad \forall h \in \Bbb{R} \text{ with } |h| \leq 1, \tag{$\circledast$} $$ with a constant $C$ independent of $x,h$ (but depending on $f$).
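For completeness, one way to see this (a sketch): by the integral form of the Taylor remainder, $$ f(x+h) - P_x(h) = \frac{h^k}{(k-1)!} \int_0^1 (1-t)^{k-1} \left( f^{(k)}(x+th) - f^{(k)}(x) \right) dt, $$ so that $$ |f(x+h) - P_x(h)| \leq \frac{[f^{(k)}]_{C^\alpha}}{(k-1)!} \, |h|^{k+\alpha} \int_0^1 (1-t)^{k-1} t^\alpha \, dt \leq \frac{[f^{(k)}]_{C^\alpha}}{k!} \, |h|^{k+\alpha}, $$ i.e., one can take $C = [f^{(k)}]_{C^\alpha}/k!$ (and for this bound the restriction $|h| \leq 1$ is not even needed).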

It is claimed in Exercise 13.31 of certain notes by Martin Hairer that also the converse holds, i.e., if $f : \Bbb{R} \to \Bbb{R}$ and if there is a constant $C>0$, such that for each $x$, there is a polynomial $P_x$ of degree at most $k$ such that $(\circledast)$ holds, then $f \in C^{k,\alpha}$.

I would like to know a solution to this exercise, and even the case $k=1$ would be interesting to me.

To simplify the question a bit, let us assume that we already know $f$ to be $k$-times differentiable, with $f,f', \dots, f^{(k)}$ being bounded (I think this boundedness does not actually follow "automatically"), so we only need to establish $\alpha$-Hölder continuity of $f^{(k)}$.

After the fold, I discuss some of my ideas towards this problem. It might be better, however, to at first not read these ideas, since given that this is an exercise, they seem overly complicated (they refer to Campanato spaces, which are probably not known to everyone doing the exercise). Thus it might be better to start afresh instead of getting spoiled by my stupid thoughts :)


My thoughts only apply to the case $k=1$. Here, one can more or less easily show that necessarily $P_x (h) = f(x) + f'(x) h$. Then we get \begin{align*} & |f(x+h) - f(x-h) - 2h \cdot f'(x)| \\ & = |f(x+h) - f(x) - f'(x) h - (f(x-h) - f(x) - f'(x) \cdot (-h))| \\ & \leq 2C |h|^{1+\alpha} \end{align*} for $|h| \leq 1$. But using the fundamental theorem of calculus to write $f(x+h) - f(x-h) = \int_{x-h}^{x+h} f'(s) \, ds$, and writing $g = f'$, we thus get $$ \left| \int_{x-h}^{x+h} \big( g(s) - g(x) \big) \, ds \right| \leq 2C \cdot |h|^{1+\alpha}. \tag{$\dagger$} $$ Now, it is known (see Lemma 0.0.15 in Some notes on Campanato spaces) that the norm $$ \| g \|_\ast = \|g\|_{L^1} + \sup_{x \in \Bbb{R}, \, 0 < h \leq R} \,\, \inf_{c \in \Bbb{R}} \,\, h^{-(1 + \alpha)} \int_{x-h}^{x+h} |g(t) - c| \, dt $$ (for some fixed radius $R > 0$) is an equivalent norm for the Campanato space $L^{1,1+\alpha}_C (\Bbb{R})$. Strictly speaking, I am cheating a bit here, since $\Bbb{R}$ is not a bounded domain.

Thus, if in $(\dagger)$ above the absolute value were inside the integral instead of outside, we would get a Campanato condition on the function $g = f'$. But it is known (see Wikipedia, or Theorem 0.0.22 of the notes I linked above) that $L^{1, 1+\alpha}_C = C^{0,\alpha}$ is the space of $\alpha$-Hölder continuous functions.

Thus, my hope is (I am currently investigating this) that one can adapt the arguments from the lecture notes above to also work in the case where the absolute value is outside of the integral.
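As a small numerical sanity check of $(\dagger)$ (purely illustrative; the test function $f(x) = x|x|^\alpha$ with $\alpha = 1/2$, so that $g = f' = (1+\alpha)|x|^\alpha$ is exactly $\alpha$-Hölder, is my own choice), both the quantity in $(\dagger)$ and the stronger quantity $\int_{x-h}^{x+h} |g(s)-g(x)| \, ds$ (which dominates the infimum appearing in $\| g \|_\ast$) stay of order $h^{1+\alpha}$ here:

    import numpy as np

    alpha = 0.5
    g = lambda x: (1 + alpha) * np.abs(x) ** alpha   # g = f' for f(x) = x|x|^alpha

    def avg_dev(x, h, inside):
        # Midpoint-rule approximation of the integral of g(s) - g(x) over [x-h, x+h],
        # with the absolute value taken inside the integral if inside=True.
        n = 2000
        s = x - h + (np.arange(n) + 0.5) * (2 * h / n)
        d = np.abs(g(s) - g(x)) if inside else g(s) - g(x)
        return abs(np.sum(d) * (2 * h / n))

    xs = np.linspace(-1.0, 1.0, 81)
    for h in [0.5, 0.1, 0.02]:
        outside = max(avg_dev(x, h, False) for x in xs) / h ** (1 + alpha)
        inside = max(avg_dev(x, h, True) for x in xs) / h ** (1 + alpha)
        print(h, outside, inside)  # both ratios stay bounded as h decreases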

As I said above, I don't think that this is the right solution, since it seems overly complicated :) I would be happy about any thoughts!

PhoemueX
  • 36,211
  • If I understood correctly, you would like to know why the existence of the polynomials implies the Hölder regularity? – Guy Fsone Nov 04 '17 at 14:23
  • @GuyFsone: Yes, indeed. I want to know why/if existence of the polynomials implies $\alpha$-Hölder regularity of the $k$-th derivative of $f$. – PhoemueX Nov 06 '17 at 08:33

2 Answers

5

We have $$ |f(x+h)-P_{x}(h)|\leq c|h|^{k+\alpha} $$ for all $x\in\mathbb{R}$ and all $h$ with $|h|\leq1$. Write $$ P_{x}(h)=\sum_{n=0}^{k}\frac{1}{n!}a_{n}(x)h^{n}. $$ Taking $h=0$ you get $f(x)=a_{0}(x)$, so $$ \left\vert f(x+h)-f(x)-\sum_{n=1}^{k}\frac{1}{n!}a_{n}(x)h^{n}\right\vert \leq c|h|^{k+\alpha}. $$ Next, by the triangle inequality, \begin{align*} \left\vert f(x+h)-f(x)-a_{1}(x)h\right\vert & \leq\left\vert f(x+h)-f(x)-\sum_{n=1}^{k}\frac{1}{n!}a_{n}(x)h^{n}\right\vert +\sum_{n=2}^{k}\frac{1}{n!}|a_{n}(x)||h|^{n}\\ & \leq c|h|^{k+\alpha}+\sum_{n=2}^{k}\frac{1}{n!}|a_{n}(x)||h|^{n} \end{align*} and so $$ \left\vert \frac{f(x+h)-f(x)-a_{1}(x)h}{h}\right\vert \leq c|h|^{k-1+\alpha}+\sum_{n=2}^{k}\frac{1}{n!}|a_{n}(x)||h|^{n-1}\rightarrow0 $$ as $h\rightarrow0$, which shows that $f$ is differentiable with $f^{\prime}(x)=a_{1}(x)$.

%%%%%This part feels wrong but I cannot find the mistake%%%%%%%%%%%% Hence, for $k=1$, $$ \left\vert f(x+h)-f(x)-f^{\prime}(x)h\right\vert \leq c|h|^{1+\alpha}. $$

In turn, replacing $x$ with $x-h$ we get $$ \left\vert f(x)-f(x-h)-f^{\prime}(x-h)h\right\vert \leq c|h|^{1+\alpha} $$ while replacing $h$ with $-h$ gives $$ \left\vert f(x-h)-f(x)+f^{\prime}(x)h\right\vert \leq c|h|^{1+\alpha}. $$ Now \begin{align*} |f^{\prime}(x)-f^{\prime}(x-h)| & \leq\left\vert \frac{f(x-h)-f(x)}{h}+f^{\prime}(x)\right\vert +\left\vert -\frac{f(x-h)-f(x)}{h}-f^{\prime}(x-h)\right\vert \\ & =\left\vert \frac{f(x-h)-f(x)}{h}+f^{\prime}(x)\right\vert +\left\vert \frac{f(x)-f(x-h)}{h}-f^{\prime}(x-h)\right\vert \\ & \leq2c|h|^{\alpha}. \end{align*}

This proves that $f^{\prime}$ is Hölder continuous.

%%%%%%%%%%%%%%%End funny part %%%%%%%%%%%%
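(Purely as a numerical illustration of the $k=1$ estimate above, with a test function of my own choosing, $f(x) = x|x|^\alpha$, $\alpha = 1/2$; the constant $c$ is only estimated from grid values, so this is a sanity check rather than a proof.)

    import numpy as np

    alpha = 0.5
    f = lambda x: x * np.abs(x) ** alpha             # a sample C^{1,alpha} function
    df = lambda x: (1 + alpha) * np.abs(x) ** alpha  # its derivative

    xs = np.linspace(-1.0, 1.0, 401)
    hs = np.logspace(-3, 0, 13)

    # Estimate the constant c in |f(x+h) - f(x) - f'(x) h| <= c |h|^(1+alpha)
    # as the maximum ratio over the grid (a lower bound for the true c).
    c_est = max(np.max(np.abs(f(xs + h) - f(xs) - df(xs) * h)) / h ** (1 + alpha)
                for h in hs)

    # Check the conclusion |f'(x) - f'(x-h)| <= 2 c |h|^alpha on the same grid.
    ratio = max(np.max(np.abs(df(xs) - df(xs - h))) / h ** alpha for h in hs)
    print(c_est, ratio, ratio <= 2 * c_est)  # here: about 1.0, 1.5, True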

For $k\geq2$ we first prove that all the functions $a_{n}$ are bounded on every bounded interval $[a,b]$. We will use the following fact about polynomials. Given $k\in\mathbb{N}$, let $V$ be the vector space of all polynomials $P:[-1,1]\rightarrow\mathbb{R}$ of degree less than or equal to $k$. Given $P\in V$, let $\Vert P\Vert:=\max\{|a_{0}|,\ldots,|a_{k}|\}$, where $P(t)=a_{0}+\cdots+a_{k}t^{k}$, $t\in[-1,1]$. Then $\Vert\cdot\Vert$ is a norm on $V$.

Since the vector space $V$ has finite dimension $k+1$, all norms on it are equivalent. In particular, $$ c_{1}\Vert P\Vert_{L^{\infty}([-1,1])}\leq\Vert P\Vert\leq c_{2}\Vert P\Vert_{L^{\infty}([-1,1])} $$ for all $P\in V$ and for some constants $c_{1}>0$ and $c_{2}>0$.

Now let $P\in V$ be such that $\Vert P\Vert_{L^{\infty}([-1,1])}\leq L$. Then by the previous inequality, $$ \Vert P\Vert:=\max\{|a_{0}|,\ldots,|a_{k}|\}\leq c_{2}\Vert P\Vert_{L^{\infty}([-1,1])}\leq c_{2}L, $$ which implies that $|a_{n}|\leq c_{2}L$ for all $n=0, \ldots, k$.
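For what it's worth, this argument even yields a concrete admissible value of $c_2$: a polynomial of degree $\leq k$ is determined by its values at $k+1$ distinct points of $[-1,1]$, and its coefficients depend linearly on these values, so $\max_n |a_n| \leq \Vert V^{-1} \Vert_\infty \, \Vert P \Vert_{L^\infty([-1,1])}$ for the corresponding Vandermonde matrix $V$. A minimal numerical sketch (the equispaced nodes are my own choice; any distinct nodes work):

    import numpy as np

    def c2_bound(k):
        # Recover the coefficients of P(t) = a_0 + a_1 t + ... + a_k t^k from its
        # values at k+1 equispaced nodes s_0, ..., s_k in [-1, 1]:
        #   (a_0, ..., a_k) = V^{-1} (P(s_0), ..., P(s_k)),  where V[i, j] = s_i ** j,
        # so max_n |a_n| <= ||V^{-1}||_inf * ||P||_{L^inf([-1,1])}.
        s = np.linspace(-1.0, 1.0, k + 1)
        V = np.vander(s, k + 1, increasing=True)
        return np.max(np.sum(np.abs(np.linalg.inv(V)), axis=1))  # induced sup-norm of V^{-1}

    for k in [1, 2, 3, 4, 5]:
        print(k, c2_bound(k))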

Since $f$ is continuous, there exists $$ M=\max_{x\in[a-1,b+1]}|f(x)|<\infty, $$ and so for $x\in[a,b]$ and $-1\leq h\leq1$ we have \begin{align*} \left\vert \sum_{n=1}^{k}\frac{1}{n!}a_{n}(x)h^{n}\right\vert & \leq\left\vert f(x+h)-f(x)-\sum_{n=1}^{k}\frac{1}{n!}a_{n}(x)h^{n}\right\vert +\left\vert f(x+h)-f(x)\right\vert \\ & \leq c|h|^{k+\alpha}+2M\leq c+2M. \end{align*} Hence, $$ \left\vert \sum_{n=1}^{k}\frac{1}{n!}a_{n}(x)h^{n}\right\vert \leq c+2M $$ for all $h\in[-1,1]$. Applying the property about polynomials above to the polynomial $h\mapsto\sum_{n=1}^{k}\frac{1}{n!}a_{n}(x)h^{n}$ (whose coefficients are $a_{n}(x)/n!$), it follows that there exists a constant $C=C(k,a,b,M,c)>0$ such that $$ |a_{n}(x)|\leq C $$ for all $x\in[a,b]$ and all $n=1,\ldots,k$.

We can now apply Theorem 3 in Oliver's paper to conclude that $a_{n}(x)=f^{(n)}(x)$.

Gio67
  • 21,870
  • Thank you very much for your answer! I will have a detailed look at it this evening. – PhoemueX Nov 06 '17 at 08:33
  • You can try using the Lagrange remainder of the Taylor series. – user48672 Nov 06 '17 at 23:14
  • Maybe there is a wrong minus sign in the "funny" part when you add and subtract the increment. Apart from this it seems correct: in fact you see that you do not get continuity of the derivatives if $\alpha \in \mathbb{N}.$ This corresponds to the difference between Hölder and Zygmund spaces. – Kore-N Nov 08 '17 at 17:05
  • I cannot see the sign mistake. Where is it exactly? Thanks! – Gio67 Nov 09 '17 at 02:57
  • @Gio67: I am sorry that it took me so long to come back to your answer. I think that everything is correct. Also the reference to the paper by Oliver is interesting, since it shows that the notion in question has a name. If I understand correctly, what you show implies that 1) if $k=1$, then $f \in C^{k,\alpha}$, as desired. 2) if $k > 1$, then $f$ is $k$-times differentiable, and the polynomial $P_x$ is in fact the Taylor polynomial of $f$. – PhoemueX Nov 09 '17 at 17:18
  • Do these claims also show $f \in C^{k,\alpha}$ for $k > 1$? At least I do not easily see at the moment that they do... Note: Even if not, I will award the bounty to you if no other answer comes up. Thank you :) – PhoemueX Nov 09 '17 at 17:19
  • Correct. I have not proved that the higher order derivatives are Hölder. – Gio67 Nov 10 '17 at 20:15
  • @PhoemueX, would you all mind to take a look of this post? I wish to find a dense subset of $\text{Lip}$ but I failed. – user284331 Apr 02 '18 at 02:36
  • Can your proof be extended to general $\mathbb{R}^n$? The part ('funny part') of dividing the equalities by $h$ might need more justification in $\mathbb{R}^n$ for $n \geq 2$. – Stack_Underflow Feb 24 '24 at 22:19
  • Yes, I think so. One could work with partial derivatives. I cannot see a problem, although I would need to sit down and check. – Gio67 Feb 28 '24 at 15:36
  • @Gio67 The subtle difference is that (by copying your method to $\mathbb{R}^n$, $n \geq 2$), after applying the triangle inequality, one may end up with $\vert (\nabla f^{T}(x)-\nabla f^{T}(x-hd)) \cdot hd\vert \leq 2c \vert h \vert^{1+\alpha}$, where $d$ is a unit vector of any direction. Then $\vert (\nabla f^{T}(x)-\nabla f^{T}(x-hd))\cdot d\vert \leq 2c \vert h \vert^{\alpha}$. With this we can show Hölder continuity of partial derivatives along only one direction (taking $d = e_i$). But I can't show that's true for other directions. Would you mind illustrating this a little bit? Thank you. – Stack_Underflow Mar 02 '24 at 20:01
  • There is a book called "Manifolds, Tensor Analysis, and Applications" by Ralph Abraham , Jerrold E. Marsden , Tudor Ratiu. On Page 99 they have a supplement called "The converse to Taylor's theorem". The proof is not easy though. – Gio67 Mar 25 '24 at 17:20

0

Gio67's answer has shown that $f$ has ordinary derivatives up to order $k$ and $f\in C^{1,\alpha}$. Maybe a supplement that shows $f\in C^{k,\alpha}$ is convenient for the readers. It is sufficient to show that $f^{(k)}\in C^\alpha$.

Since $f$ has ordinary derivatives up to order $k$, the polynomial $P_x(h)$ is actually the Taylor expansion of $f$. Then for each $x\in\mathbb{R}$, $t\in [0,1]$, $|h|\leq1$, we have $$ \left|f(x+th)-\sum_{j=0}^k \frac{f^{(j)}(x)}{j!}(th)^{j} \right| \leq c t^{k+\alpha}|h|^{k+\alpha}.$$ Considering the expansion at $x+h$ instead (with increment $(t-1)h$), we have $$ \left|f(x+th)-\sum_{j=0}^k \frac{f^{(j)}(x+h)}{j!}((t-1)h)^{j} \right| \leq c (1-t)^{k+\alpha}|h|^{k+\alpha}. $$ By the triangle inequality, $$ \left| \sum_{j=0}^k \frac{h^j}{j!} \left( (t-1)^{j} f^{(j)}(x+h)-t^jf^{(j)}(x) \right) \right| \leq c \left( t^{k+\alpha}+(1-t)^{k+\alpha} \right) |h|^{k+\alpha}. $$ Denote the summation on the left side by $F_{t}(x,h)$.

We claim that there exist coefficients $a_0,\dots,a_k$ and nodes $t_0,\dots,t_k\in[0,1]$ depending only on $k$ such that $$\sum_{i=0}^k a_i F_{t_i}(x,h) = \frac{h^k}{k!}\left( f^{(k)}(x+h)-f^{(k)}(x) \right).$$ Then we conclude by the triangle inequality that $$ \left| \frac{h^k}{k!}\left( f^{(k)}(x+h)-f^{(k)}(x) \right) \right| \leq c|h|^{k+\alpha}\sum_{i=0}^k |a_i|\left( t_i^{k+\alpha}+(1-t_i)^{k+\alpha} \right)=c'(k,\alpha)|h|^{k+\alpha},$$ which implies that $f^{(k)}\in C^\alpha$ (divide both sides by $|h|^k/k!$).

It remains to prove the claim. First, choose $t_0,\dots,t_k\in [0,1]$ pairwise distinct. Then the matrix $$ B=\left( \begin{array}{ccccc} 1 & 1 & 1 & \cdots & 1 \\ t_0 & t_1 & t_2 & \cdots & t_k \\ t_0^2 & t_1^2 & t_2^2 & \cdots & t_k^2 \\ \vdots & \vdots & \vdots & & \vdots \\ t_0^k & t_1^k & t_2^k & \cdots & t_k^k \\ \end{array} \right) $$ is invertible (this is the invertibility of a Vandermonde matrix, an elementary fact from linear algebra). Let $a_0,\dots,a_k$ be the solution of $$ B\cdot (a_0,\dots,a_k)^T = (0,\dots,0,1)^T,$$ i.e., $\sum_{i=0}^k a_i t_i^j=0$ for $j=0,1,\dots,k-1$ and $\sum_{i=0}^k a_i t_i^k=1$. This also implies $\sum_{i=0}^k a_i (t_i-1)^j=0$ for $j=0,1,\dots,k-1$ and $\sum_{i=0}^k a_i (t_i-1)^k=1$ (one can verify this by simply expanding $(t_i-1)^j$ with the binomial theorem and using the previous relations). Then \begin{equation} \begin{split} \sum_{i=0}^k a_i F_{t_i}(x,h) &= \sum_{i=0}^k a_i \sum_{j=0}^k \frac{h^j}{j!} \left( (t_i-1)^{j} f^{(j)}(x+h)-t_i^jf^{(j)}(x) \right) \\ &= \sum_{j=0}^k \frac{h^j}{j!} \left( f^{(j)}(x+h)\sum_{i=0}^k a_i(t_i-1)^j - f^{(j)}(x) \sum_{i=0}^k a_i t_i^j \right) \\ &= \frac{h^k}{k!} \left( f^{(k)}(x+h) - f^{(k)}(x) \right), \end{split} \end{equation} as we claimed.
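For the record, here is a small numerical sketch of this construction (the choices $k = 3$, $t_i = i/k$ and the test function $\sin$ are mine, just to illustrate that the identity in the claim is exact):

    import numpy as np
    from math import factorial

    k = 3
    t = np.arange(k + 1) / k                     # pairwise distinct nodes t_0, ..., t_k in [0, 1]
    B = np.vander(t, k + 1, increasing=True).T   # B[j, i] = t_i ** j
    e_k = np.zeros(k + 1); e_k[k] = 1.0
    a = np.linalg.solve(B, e_k)                  # sum_i a_i t_i^j = delta_{jk}

    # The same coefficients also satisfy sum_i a_i (t_i - 1)^j = delta_{jk}:
    print([float(np.sum(a * (t - 1) ** j).round(10)) for j in range(k + 1)])  # [0, 0, 0, 1]

    # Check sum_i a_i F_{t_i}(x, h) = h^k / k! * (f^(k)(x+h) - f^(k)(x))
    # for a sample smooth function, here f = sin with f^(j)(x) = sin(x + j*pi/2).
    d = lambda j, x: np.sin(x + j * np.pi / 2)
    x, h = 0.3, 0.2
    F = lambda s: sum(h ** j / factorial(j) * ((s - 1) ** j * d(j, x + h) - s ** j * d(j, x))
                      for j in range(k + 1))
    lhs = sum(a[i] * F(t[i]) for i in range(k + 1))
    rhs = h ** k / factorial(k) * (d(k, x + h) - d(k, x))
    print(lhs, rhs)  # the two agree up to rounding error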