
I have been wondering about the idea of functions that are weakly differentiable. My intuition tells me that the weak derivative allows one to differentiate functions that either have a removable discontinuity or have a kink. Functions with a jump that cannot be "repaired" are not weakly differentiable.

Furthermore, it seems that if we take the weak derivative of a non-classically differentiable function $f$ with a kink at some point $x$, then the weak derivative will have a jump at $x$ since the limits of the classical derivative approaching $x$ from the left and the right will differ. It would therefore follow that the weak derivative of $f$ is not weakly differentiable. Is this idea correct?
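
For concreteness, the example I have in mind is $f(x)=|x|$: its weak derivative is $\operatorname{sgn}(x)$, and, if I compute correctly, for every test function $\psi\in C_c^\infty(\mathbb{R})$ we have $$ -\int_{\mathbb{R}}\operatorname{sgn}(x)\,\psi'(x)\, dx=\int_{-\infty}^{0}\psi'(x)\, dx-\int_{0}^{\infty}\psi'(x)\, dx=2\psi(0), $$ so the distributional derivative of $\operatorname{sgn}$ would be $2\delta_{0}$, which is not given by any locally integrable function.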

If so, does that imply that, in order for the second weak derivative to exist, the function must be equal almost everywhere (i.e. equivalent in the Lebesgue sense) to a classically differentiable function?

Harold
  • Does your question refer to $\Bbb{R}$ or $\Bbb{R}^n$ in general? In the first case, it is well-known that every weakly differentiable function is absolutely continuous (i.e. it has an absolutely continuous representative), so that if $f$ is twice weakly differentiable, then $f'$ has a continuous representative, so that $f$ has a $C^1$ representative. – PhoemueX Jul 23 '14 at 09:08
  • I am just considering $\Bbb{R}$. Would you be able to provide a reference to a textbook where this is discussed? Thanks – Harold Jul 23 '14 at 09:17
  • I will look up a book when I am home. In the meantime, if you already know that $C^\infty \cap W^{1,1}$ is dense in the Sobolev space $W^{1,1}((a,b))$, you can take a sequence of smooth functions $(f_n)_n \subset C^\infty \cap W^{1,1}$ converging to $f \in W^{1,1}$. You then have $f_n(x) - f_n(c) = \int_c^x f_n'(t)\, dt \to \int_c^x f'(t)\, dt$ for all $x \in (a,b)$ and $c \in (a,b)$ fixed. You can then conclude (how?) that this convergence is uniform in $x$, thus $f_n \to g$ uniformly for some $g$ (show that $g=f$ a.e.), with $g(x) = g(c) + \int_c^x f'(t)\, dt$ for all $x$. – PhoemueX Jul 23 '14 at 09:40

1 Answer


Sadly, I did not find a book in which this specific result is stated, so I will give a proof sketch below.

You may already know some of the facts I use; in that case, just skim. Otherwise, I have included links where you can find more information on the individual steps.

  1. We start by approximating $f\in W^{1,1}\left(\left(0,1\right)\right)$ (the same works for any open interval of finite length) locally with smooth functions. To this end, fix a mollifier $\varphi\in C_{c}^{\infty}\left(\left(-1,1\right)\right)$ with $\varphi\geq0$ and $\int_{\mathbb{R}}\varphi\, dx=1$; a concrete choice is given below.
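
    For concreteness (this is just one admissible choice; the argument only uses that $\varphi$ is smooth, nonnegative, supported in $\left(-1,1\right)$ and has unit integral), one can take the standard bump function $$ \varphi\left(x\right)=\begin{cases} c\cdot\exp\left(-\frac{1}{1-x^{2}}\right), & \left|x\right|<1,\\ 0, & \left|x\right|\geq1, \end{cases} $$ where the constant $c>0$ is chosen such that $\int_{\mathbb{R}}\varphi\, dx=1$.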

  2. Now, for $\varepsilon>0$, let $\varphi_{\varepsilon}\left(x\right):=\frac{1}{\varepsilon}\cdot\varphi\left(\frac{x}{\varepsilon}\right)$. Then $\varphi_{\varepsilon}\in C_{c}^{\infty}\left(\left(-\varepsilon,\varepsilon\right)\right)$, $\int_{\mathbb{R}}\varphi_{\varepsilon}\, dx=1$, and for every $g\in L^{1}\left(\mathbb{R}\right)$, we have $\varphi_{\varepsilon}\ast g\xrightarrow[\varepsilon\downarrow0]{L^{1}\left(\mathbb{R}\right)}g$, where the so-called convolution $\varphi_{\varepsilon}\ast g$ of $\varphi_{\varepsilon}$ and $g$ is given by $$ \left(\varphi_{\varepsilon}\ast g\right)\left(x\right)=\int_{\mathbb{R}}\varphi_{\varepsilon}\left(x-y\right)g\left(y\right)\, dy. $$ On convolution in general, see also Why convolution regularize functions? and https://mathoverflow.net/questions/5892/what-is-convolution-intuitively

    • To show this, we use that $\left\Vert g-T_{x}g\right\Vert _{1}\xrightarrow[x\rightarrow0]{}0$ for every $g\in L^{1}\left(\mathbb{R}\right)$, where $\left(T_{x}g\right)\left(y\right)=g\left(y-x\right)$. This is shown, for example, in proof that translation of a function converges to function in $L^1$
    • We then have \begin{eqnarray*} \left\Vert \left(\varphi_{\varepsilon}\ast g\right)-g\right\Vert _{1} & = & \int_{\mathbb{R}}\left|g\left(x\right)-\int_{\mathbb{R}}\varphi_{\varepsilon}\left(x-y\right)\cdot g\left(y\right)\, dy\right|\, dx\\ & \overset{\left(\ast\right)}{\leq} & \int_{\mathbb{R}}\int_{\mathbb{R}}\varphi_{\varepsilon}\left(x-y\right)\cdot\left|g\left(x\right)-g\left(y\right)\right|\, dy\, dx\\ & \overset{\text{Fubini}}{=} & \int_{\mathbb{R}}\int_{\mathbb{R}}\varphi_{\varepsilon}\left(x-y\right)\cdot\left|g\left(x\right)-g\left(y\right)\right|\, dx\, dy\\ & \overset{z=x-y}{=} & \int_{\mathbb{R}}\int_{\mathbb{R}}\varphi_{\varepsilon}\left(z\right)\cdot\left|g\left(z+y\right)-g\left(y\right)\right|\, dz\, dy\\ & \overset{\text{Fubini}}{=} & \int_{\mathbb{R}}\varphi_{\varepsilon}\left(z\right)\cdot\left\Vert g-T_{-z}g\right\Vert _{1}\, dz\\ & \overset{\left(\ast\ast\right)}{\leq} & \sup_{\left|z\right|\leq\varepsilon}\left\Vert g-T_{-z}g\right\Vert _{1}\xrightarrow[\varepsilon\downarrow0]{}0. \end{eqnarray*} Here, we used $\int_{\mathbb{R}}\varphi_{\varepsilon}\left(x-y\right)\, dy=1$ at the step marked with $\left(\ast\right)$ and $\int_{\mathbb{R}}\varphi_{\varepsilon}\left(z\right)\, dz=1$ as well as ${\rm supp}\left(\varphi_{\varepsilon}\right)\subset\left(-\varepsilon,\varepsilon\right)$ in the step marked with $\left(\ast\ast\right)$.
  3. By standard results on differentiation under the integral sign, we see that $\varphi_{\varepsilon}\ast g$ is $C^{1}$ (actually even $C^{\infty}$) with derivative $$ \left(\varphi_{\varepsilon}\ast g\right)'\left(x\right)=\int_{\mathbb{R}}\varphi_{\varepsilon}'\left(x-y\right)\cdot g\left(y\right)\, dy=\left(\left(\varphi_{\varepsilon}'\right)\ast g\right)\left(x\right). $$ This is discussed, for example, in Differentiability of Convolutions
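
    As a quick illustration (just for intuition, not needed for the proof): for the step function $g=\mathbf{1}_{\left[a,b\right]}$, the above formula gives $$ \left(\varphi_{\varepsilon}\ast g\right)'\left(x\right)=\int_{a}^{b}\varphi_{\varepsilon}'\left(x-y\right)\, dy=\varphi_{\varepsilon}\left(x-a\right)-\varphi_{\varepsilon}\left(x-b\right), $$ i.e. the convolution is smooth, and its derivative consists of two bumps of total mass $+1$ and $-1$ located at the former jump points of $g$.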

  4. Now comes the interesting part, because we actually want to have $$\left(\varphi_{\varepsilon}\ast f\right)'\left(x\right)=\left(\varphi_{\varepsilon}\ast\left(f'\right)\right)\left(x\right)$$ for $f\in W^{1,1}\left(\left(0,1\right)\right)$. We will show that this is indeed true for $x\in\left(\varepsilon,1-\varepsilon\right)$. To see this, note that we have $\gamma_{x,\varepsilon}\in C_{c}^{\infty}\left(\left(0,1\right)\right)$ for $$ \gamma_{x,\varepsilon}\left(y\right):=\varphi_{\varepsilon}\left(x-y\right), $$ as $y\in{\rm supp}\left(\gamma_{x,\varepsilon}\right)$ implies $x-y\in{\rm supp}\left(\varphi_{\varepsilon}\right)\subset\left(-\varepsilon,\varepsilon\right)$ and thus $y\in B_{\varepsilon}\left(x\right)\subset\left(0,1\right)$ by our choice of $x$.

    Using the definition of the weak derivative (at $\left(\ast\right)$), we thus see \begin{eqnarray*} \left(\varphi_{\varepsilon}\ast f\right)'\left(x\right) & = & \int_{\mathbb{R}}\varphi_{\varepsilon}'\left(x-y\right)\cdot f\left(y\right)\, dy\\ & = & -\int_{\mathbb{R}}f\left(y\right)\cdot\gamma_{x,\varepsilon}'\left(y\right)\, dy\\ & \overset{\left(\ast\right)}{=} & \int_{\mathbb{R}}f'\left(y\right)\cdot\gamma_{x,\varepsilon}\left(y\right)\, dy\\ & = & \left(\varphi_{\varepsilon}\ast\left(f'\right)\right)\left(x\right). \end{eqnarray*}
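
    As a sanity check (purely as an illustration), take $f\left(x\right)=\left|x-\frac{1}{2}\right|\in W^{1,1}\left(\left(0,1\right)\right)$ with weak derivative $f'\left(y\right)=\operatorname{sgn}\left(y-\frac{1}{2}\right)$. For $x\in\left(\varepsilon,1-\varepsilon\right)$, we get $$ \left(\varphi_{\varepsilon}\ast f\right)'\left(x\right)=\left(\varphi_{\varepsilon}\ast\left(f'\right)\right)\left(x\right)=\int_{\mathbb{R}}\varphi_{\varepsilon}\left(x-y\right)\cdot\operatorname{sgn}\left(y-\tfrac{1}{2}\right)\, dy, $$ which equals $\operatorname{sgn}\left(x-\frac{1}{2}\right)$ for $\left|x-\frac{1}{2}\right|\geq\varepsilon$ and interpolates smoothly between $-1$ and $+1$ on $\left(\frac{1}{2}-\varepsilon,\frac{1}{2}+\varepsilon\right)$, i.e. the mollification smooths out the kink.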

  5. We know $\varphi_{\varepsilon}\ast\left(f'\right)\xrightarrow[\varepsilon\downarrow0]{L^{1}\left(\mathbb{R}\right)}f'$, where $f'$ is extended onto all of $\mathbb{R}$ (by zero). Using $\left(\varphi_{\varepsilon}\ast f\right)'=\varphi_{\varepsilon}\ast\left(f'\right)$ on $\left(\varepsilon,1-\varepsilon\right)$, we get $$ \left(\varphi_{\varepsilon}\ast f\right)'\xrightarrow[\varepsilon\downarrow0]{L^{1}\left(\left(\delta,1-\delta\right)\right)}f' $$ for any $\delta\in\left(0,\frac{1}{2}\right)$. But for $x\in\left(\delta,1-\delta\right)$ and $0<\varepsilon<\delta$, writing $F_{\varepsilon}:=\varphi_{\varepsilon}\ast f$ (with $f$ also extended by zero), we have $$ F_{\varepsilon}\left(x\right)-F_{\varepsilon}\left(\frac{1}{2}\right)=\int_{1/2}^{x}F_{\varepsilon}'\left(t\right)\, dt=\int_{1/2}^{x}\left(\varphi_{\varepsilon}\ast\left(f'\right)\right)\left(t\right)\, dt\qquad\left(\dagger\right) $$ and $$ \left|\int_{1/2}^{x}\left(\varphi_{\varepsilon}\ast\left(f'\right)\right)\left(t\right)\, dt-\int_{1/2}^{x}\left(\varphi_{\theta}\ast\left(f'\right)\right)\left(t\right)\, dt\right|\leq\int_{\delta}^{1-\delta}\left|\left(\varphi_{\varepsilon}\ast f'\right)\left(t\right)-\left(\varphi_{\theta}\ast f'\right)\left(t\right)\right|\, dt\xrightarrow[\varepsilon,\theta\downarrow0]{}0, $$ where we note that the expression before the limit is independent of $x\in\left(\delta,1-\delta\right)$.

    This shows that the right-hand side of $\left(\dagger\right)$ is uniformly Cauchy, so that $F_{\varepsilon}\xrightarrow[\varepsilon\downarrow0]{}F$ uniformly on $\left(\delta,1-\delta\right)$ for some (necessarily continuous) function $F\in C^{0}\left(\left(\delta,1-\delta\right)\right)$. But we also know $F_{\varepsilon}\xrightarrow[\varepsilon\downarrow0]{L^{1}\left(\left(\delta,1-\delta\right)\right)}f$, which together implies (why?) $f=F$ almost everywhere.

    EDIT: As Harold pointed out in a comment, the uniform Cauchy property of $(\dagger)$ is in general not sufficient to get convergence. But in the current setting, we can fix this as follows: We know $F_\varepsilon \to f$ in $L^1$, so there is some sequence $\varepsilon_n \to 0$ with $F_{\varepsilon_n} \to f$ almost everywhere. Fix some $x_0 \in (\delta, 1-\delta)$ with $F_{\varepsilon_n}(x_0) \to f(x_0)$. We now have $$F_{\varepsilon_{n}}(x)=F_{\varepsilon_{n}}(x)-F_{\varepsilon_{n}}(1/2)-\left(F_{\varepsilon_{n}}(x_{0})-F_{\varepsilon_{n}}(1/2)\right)+F_{\varepsilon_{n}}(x_{0}),$$ where the right hand side converges uniformly by $(\dagger)$ and because $F_{\varepsilon_n}(x_0)$ converges. Now replace all limits $\varepsilon \to 0$ by "going to zero along the sequence $(\varepsilon_n)_n$".

  6. We conclude that $F$ is a continuous version of $f$ on $\left(\delta,1-\delta\right)$ with $$ F\left(x\right)=\lim_{\varepsilon\downarrow0}F_{\varepsilon}\left(x\right)=\lim_{\varepsilon\downarrow0}\left[F_{\varepsilon}\left(\frac{1}{2}\right)+\int_{1/2}^{x}F_{\varepsilon}'\left(t\right)\, dt\right]=F\left(\frac{1}{2}\right)+\int_{1/2}^{x}f'\left(t\right)\, dt. $$ As $\delta\in\left(0,\frac{1}{2}\right)$ was arbitrary, we see $$ f\left(x\right)=F\left(\frac{1}{2}\right)+\int_{1/2}^{x}f'\left(t\right)\, dt\qquad\left(\ddagger\right) $$ for almost every $x\in\left(0,1\right)$.
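
    To see $\left(\ddagger\right)$ in action (again just an illustration), take $f\left(x\right)=\left|x-\frac{1}{2}\right|$ with $f'\left(t\right)=\operatorname{sgn}\left(t-\frac{1}{2}\right)$. Here $F\left(\frac{1}{2}\right)=f\left(\frac{1}{2}\right)=0$ and $$ F\left(\frac{1}{2}\right)+\int_{1/2}^{x}\operatorname{sgn}\left(t-\tfrac{1}{2}\right)\, dt=\left|x-\tfrac{1}{2}\right|=f\left(x\right), $$ as expected.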

  7. If $f$ is now twice weakly differentiable, the above argument (applied to $f'$) shows that $f'$ has a continuous representative, so that the right-hand side of $\left(\ddagger\right)$ is actually a $C^{1}$-function and thus a $C^{1}$-version of $f$.
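
To relate this back to the question: for the kink $f\left(x\right)=\left|x-\frac{1}{2}\right|$, the weak derivative $\operatorname{sgn}\left(x-\frac{1}{2}\right)$ has a jump at $\frac{1}{2}$, and its distributional derivative is the point mass $2\delta_{1/2}$, since $$ -\int_{0}^{1}\operatorname{sgn}\left(t-\tfrac{1}{2}\right)\psi'\left(t\right)\, dt=2\psi\left(\tfrac{1}{2}\right)\qquad\text{for all }\psi\in C_{c}^{\infty}\left(\left(0,1\right)\right), $$ which is not given by any $L^{1}$-function. Hence this $f$ is not twice weakly differentiable, consistent with point 7, as $f$ has no $C^{1}$-representative.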
PhoemueX
  • I do follow this argument, thanks! However, this does not show the original statement that "every weakly differentiable function is absolutely continuous". I do not believe this fact to be true: consider the function $\sin(1/x)$ on $(0,1)$. The function is classically differentiable on $(0,1)$ and hence weakly differentiable, but it is certainly not absolutely continuous on $(0,1)$. – Harold Jul 24 '14 at 07:27
  • That depends on your interpretation of "weakly differentiable". I normally interpret that as stating that the weak (distributional) derivative is in $L^1$. In that case, step 6 in the above shows $f(x) = f(1/2) + \int_{1/2}^x f'(t)dt$ for all $x$ (for the continuous representative), so that $f$ is absolutely continuous. If you only interpret it as saying that the weak derivative is in $L^1_{loc}$, then you only have that $f$ is locally absolutely continuous (as for $\sin(1/x)$). The proof is basically the same as above, step 6 (or cut off $f$ to $f\varphi$ with $\varphi \in C_c^\infty$). – PhoemueX Jul 24 '14 at 08:16
  • I need to consider it in the local sense. Am I correct in thinking that even if one views it in the local sense, we still have $f$ having a $C^1$ representative, but not being absolutely continuous? – Harold Jul 24 '14 at 08:36
  • Yes, but then you only know that $f$ is continuous on $(a,b)$ (after all, this is a local statement, you can just restrict to $(c,d)$ for $a<c<d<b$ and apply the above argument). In the non-local case, you can even conclude that $f$ is uniformly continuous and thus extends continuously to $[a,b]$. – PhoemueX Jul 24 '14 at 08:51