39

I have been wondering whether the following limit is used anywhere as a variation of the derivative:

$$\lim_{h\to 0} \frac{f(x+h)-f(x-h)}{2h} .$$

Edit: I know that this limit is defined in some places where the derivative is not, and that it still gives us some useful information there.

The question is not whether this limit is similar to the derivative, but whether it is useful somehow.

Thanks.

6 Answers

24

The "symmetric difference" form of the derivative is quite convenient for numerical computation; to wit, the symmetric difference can be expanded in this way:

$$D_h f(x)=\frac{f(x+h)-f(x-h)}{2h}=f^\prime(x)+\frac{f^{\prime\prime\prime}(x)}{3!}h^2+\frac{f^{(5)}(x)}{5!}h^4+\dots$$

and note that in this series expansion, only even powers of $h$ show up.

Consider the corresponding expansion when $h$ is halved:

$$D_{h/2} f(x)=\frac{f(x+h/2)-f(x-h/2)}{h}=f^\prime(x)+\frac{f^{\prime\prime\prime}(x)}{3!}\left(\frac{h}{2}\right)^2+\frac{f^{(5)}(x)}{5!}\left(\frac{h}{2}\right)^4+\dots$$

One could take a particular linear combination of this half-$h$ expansion and the previous expansion in $h$ such that the term with $h^2$ zeroes out:

$$4D_{h/2} f(x)-D_h f(x)=3f^\prime(x)-\frac{f^{(5)}(x)}{160}h^4+\dots$$

and we have after a division by $3$:

$$\frac{4D_{h/2} f(x)-D_h f(x)}{3}=f^\prime(x)-\frac{f^{(5)}(x)}{480}h^4+\dots$$

Note that the surviving terms after $f^\prime(x)$ are (supposed to be) much smaller than either of the terms after $f^\prime(x)$ in the expansions for $D_h f(x)$ and $D_{h/2} f(x)$. Numerically speaking, one can obtain a slightly more accurate estimate of the derivative by evaluating the symmetric difference at a well-chosen step size $h$ and at half that $h$, and then computing the linear combination $\dfrac{4D_{h/2} f(x)-D_h f(x)}{3}$. (This is akin to deriving Simpson's rule from the trapezoidal rule.) The procedure generalizes: one keeps taking appropriate linear combinations of the symmetric difference at some $h$ and the symmetric difference at $h/2$ to zero out successive powers of $h^2$; this is the famous Richardson extrapolation.
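As a quick numerical sketch of the above (the choice of function, point, and step size is arbitrary, purely for illustration):

```python
import math

def D(f, x, h):
    """Symmetric (central) difference quotient."""
    return (f(x + h) - f(x - h)) / (2 * h)

def richardson(f, x, h):
    """One Richardson step: (4*D_{h/2} - D_h)/3 cancels the h^2 error term."""
    return (4 * D(f, x, h / 2) - D(f, x, h)) / 3

x, h = 1.0, 0.1
exact = math.cos(x)  # derivative of sin at x
print(abs(D(math.sin, x, h) - exact))           # error ~1e-3, i.e. O(h^2)
print(abs(richardson(math.sin, x, h) - exact))  # error ~1e-7, i.e. O(h^4)
```

One Richardson step already buys roughly four extra digits here, at the cost of only two more function evaluations.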

  • J.M., Is there any specific advantage in preferring this symmetric difference form of the derivative over (say) the usual asymmetric difference form? With the asymmetric form, can't one take such well-chosen step sizes to cancel out many terms in the power series expansion? – Srivatsan Sep 18 '11 at 22:51
  • Sure, you could use the asymmetric form for Richardson extrapolation, @Sri, but the convergence is slower in general, since all you can do for the asymmetric form is successively remove powers of $h$. For the symmetric difference, you get to successively remove powers of $h^2$. You might want to try looking at how to take a linear combination to remove the $h$ and $h^2$ terms for the asymmetric form. :) – J. M. ain't a mathematician Sep 18 '11 at 23:00
  • Ok, I will try that sometime for small number of terms :). Thanks! – Srivatsan Sep 18 '11 at 23:41
19

Lemma: Let $f$ be a convex function on an open interval $I$. For all $x \in I$, $$ g(x) = \lim_{h \to 0} \frac{f(x+h) - f(x-h)}{2h} $$ exists and $f(y) \geq f(x) + g(x) (y-x)$ for all $y \in I$.

In particular, $g$ is a subderivative of $f$.
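A small numerical illustration of the lemma (the choice $f = |\cdot|$ and the finite $h$ standing in for the limit are my own; checking a few sample points is of course not a proof):

```python
def sym_deriv(f, x, h=1e-6):
    # finite-h stand-in for the limit in the lemma (assumes the limit exists)
    return (f(x + h) - f(x - h)) / (2 * h)

f = abs          # convex on all of R, not differentiable at 0
g0 = sym_deriv(f, 0.0)
print(g0)        # 0.0: the average of the one-sided slopes -1 and +1

# subgradient inequality f(y) >= f(0) + g0*(y - 0) at a few sample points
for y in [-2.0, -0.5, 0.3, 1.7]:
    assert f(y) >= f(0.0) + g0 * (y - 0.0)
```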

cardinal
  • 7,530
13

$$ \begin{eqnarray*} \lim_{h\to 0} \frac{f(x+h)-f(x-h)}{2h} &=& \frac12 \lim_{h\to 0}\left(\frac{f(x+h)-f(x)}h+\frac{f(x)-f(x-h)}h\right) \\ &=& \frac12 (f'(x)+f'(x)) = f'(x) \end{eqnarray*} $$

Assuming, of course, that $f$ is differentiable at $x$.
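A quick numerical check of both sides of this point: for a differentiable function the symmetric quotient approaches $f'(x)$, while for $f(x)=|x|$ at $0$, where $f$ is not differentiable, the quotient is $0$ for every $h$. The particular functions and step size are arbitrary illustrative choices:

```python
import math

def sym_quot(f, x, h):
    return (f(x + h) - f(x - h)) / (2 * h)

print(sym_quot(math.sin, 1.0, 1e-5))  # close to cos(1) ≈ 0.5403
print(sym_quot(abs, 0.0, 1e-5))       # exactly 0.0: slopes -1 and +1 average out
```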

Asaf Karagila
  • 405,794
  • I have modified my question, I know that when the derivative exists this limit is equal to it, but can it be used when the derivative is not defined? – Shay Ben-Moshe Sep 18 '11 at 20:07
  • @Asaf How about f(x) = abs(x)? – Foo Bah Sep 19 '11 at 05:10
  • @Foo: I said that $f$ needs to be differentiable at $x$, assuming that you talk about $x=0$ then this condition does not hold. – Asaf Karagila Sep 19 '11 at 05:16
  • 4
    @AsafKaragila the whole point is that the symmetric derivative has a real meaning when the function is not differentiable at the point. You assumed away the essential point – Foo Bah Sep 19 '11 at 05:19
  • 1
    @Foo: If you consider the time of posting this answer and the time of editing the original question, you will see that the additional assumption that $f$ is not differentiable was not given. – Asaf Karagila Sep 19 '11 at 05:24
6

This cannot be used as a definition of the derivative. First, the result is half the sum of the left and right derivatives at $x$, when these exist. Second, the limit can be well defined even when the one-sided derivatives do not exist; consider, for example, $f(x)=|x|^a$ around $x=0$ for suitable values of $a$. More generally, the limit at $x$ exists and equals $g'(x)$ as soon as $f=g+s$ with $g$ differentiable at $x$ and $s$ symmetric around $x$, in the sense that $s(x+z)=s(x-z)$ for every $|z|$ small enough. Hence this notion can be used to get rid of symmetric but badly behaved parts of $f$ around $x$.
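A numerical sketch of the $f=g+s$ decomposition (the concrete choices $g(x)=x^3$ and the symmetric, badly behaved $s(x)=\sqrt{|x|}$, whose one-sided difference quotients blow up at $0$, are my own):

```python
import math

def sym_quot(f, x, h):
    return (f(x + h) - f(x - h)) / (2 * h)

# f = g + s with g(x) = x^3 differentiable at 0 and s(x) = sqrt(|x|)
# symmetric around 0; s cancels exactly in f(x+h) - f(x-h).
f = lambda x: x**3 + math.sqrt(abs(x))
for h in [1e-1, 1e-2, 1e-3]:
    print(sym_quot(f, 0.0, h))  # ≈ h^2, tending to g'(0) = 0
```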

Did
  • 284,245
3

If $f$ is allowed to be discontinuous we have this example:

$$ x \in \mathbb{Q} \implies \lim_{h \to 0} \frac{1_\mathbb{Q}(x+h) - 1_\mathbb{Q}(x-h)}{2h} = \lim_{h \to 0} \frac{0}{2h} = \lim_{h \to 0} 0 = 0.$$

That doesn't seem particularly useful to me.
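The cancellation in the numerator can be checked exactly: for rational $x$, the points $x+h$ and $x-h$ are either both rational or both irrational. Floating point cannot decide rationality, so this toy check (entirely my own construction) restricts to numbers of the form $a+b\sqrt2$ with $a,b$ rational, where rationality is simply $b=0$:

```python
from fractions import Fraction

# a number a + b*sqrt(2), with a, b rational, is rational iff b == 0
def indicator_Q(num):
    a, b = num
    return 1 if b == 0 else 0

def add(u, v):
    return (u[0] + v[0], u[1] + v[1])

def sub(u, v):
    return (u[0] - v[0], u[1] - v[1])

x = (Fraction(3, 7), Fraction(0))  # a rational point
rational_hs = [(Fraction(1, 10**k), Fraction(0)) for k in range(1, 5)]
irrational_hs = [(Fraction(0), Fraction(1, 10**k)) for k in range(1, 5)]
for h in rational_hs + irrational_hs:
    # the numerator of the symmetric quotient vanishes for every h
    assert indicator_Q(add(x, h)) - indicator_Q(sub(x, h)) == 0
```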

kahen
  • 16,130
  • I don't understand what you wrote, what does $1_\mathbb{Q}$ mean? – Shay Ben-Moshe Sep 18 '11 at 20:27
  • It's the usual notation for the indicator function of the rationals – kahen Sep 18 '11 at 20:28
  • I see; well, it gives us information, actually. By the symmetry of $1_\mathbb{Q}$, it doesn't matter which values the function takes around $x=0$: the change in the function is zero, just like the limit says. – Shay Ben-Moshe Sep 18 '11 at 20:32
  • 3
    @anon, I do not get your point. Usual derivatives, one-sided or not, are not restricted to the rationals, are they? That a function as highly irregular as the indicator function of the rationals HAS a pseudo-derivative in this sense is an excellent hint that, in fine, the notion has little to do with actual derivation. – Did Sep 18 '11 at 20:46
  • @Didier: To clarify: in my head at least I was thinking of "usual derivatives" as defined by the limit definition, and the domain of the arguments as a potentially customizable feature in that definition. Here this is just the idea that a constant function is differentiable and its derivative is zero, which I don't think has any direct implications specifically for the side-mixed limit in the original question. – anon Sep 18 '11 at 22:29
  • @anon, your last comment made me wonder whether you realized that for this function $f$ and for any rational $x$, $f(x+h)-f(x-h)=0$ for every $h$ hence the pseudo-derivative exists, but $f(x+h)-f(x)=-1$ for some $h$ with $|h|$ as small as one wants hence the (true) derivative does not exist. So, no, the idea of this example is not just that the derivative of a constant function is zero. (And apparently you did not get the point of my answer either...) – Did Sep 19 '11 at 07:45
  • @Didier: Oh wow, that fact went right over my head. I didn't process that for some inexplicable reason. – anon Sep 19 '11 at 07:49
  • 1
    @kahen: Any additive function, that is a function $f$ such that $f(x+y)=f(x)+f(y)$ for all real numbers $x$ and $y$, has a symmetric derivative equal to $0$ at each point, and discontinuous additive functions are quite pathological (their graphs are dense in the plane, they are non-Lebesgue measurable in each open interval, etc.). For more about the relation between the symmetric derivative and the ordinary derivative, see http://groups.google.com/group/sci.math/msg/d58ce3669a91243a and http://mathforum.org/kb/message.jspa?messageID=5056119 – Dave L. Renfro Sep 19 '11 at 19:41
-6

This is a wrong way of thinking; the comment by Jesse Madnick explains why.

This is symmetric differentiation, and it is useful when both the left- and right-hand limits exist, as opposed to the usual derivative definition, where it is sufficient that only the right-hand limit exists.

The only place I have seen this be of use in making things nicer was in fractional calculus, where it provides some nice formulas for generalising differentiation.

To see why, try using the above limit definition of differentiation to get 2nd- or 3rd-order definitions of a derivative; you should be able to recognise the resulting Pascal/binomial-like formulas. From there, one might try to generalise them to get a derivative definition that says something about the $\tfrac{1}{2}^{\text{th}}$ derivative.
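The binomial pattern alluded to here is that of the standard $n$-th order central difference; this sketch shows the general formula (it is not anything specific to fractional calculus):

```python
from math import comb, sin

def central_difference(f, x, h, n):
    # n-th central difference quotient:
    # sum_{k=0}^{n} (-1)^k * C(n, k) * f(x + (n/2 - k)*h), divided by h^n
    total = sum((-1) ** k * comb(n, k) * f(x + (n / 2 - k) * h)
                for k in range(n + 1))
    return total / h ** n

# n = 1 recovers the symmetric quotient; n = 2 approximates f''(x)
print(central_difference(sin, 1.0, 1e-4, 2))  # ≈ -sin(1) ≈ -0.8415
```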

In short: there is nothing more to this definition than the other, usual one, except that this one shows an extra symbol (the $2$). Why use one more symbol when there is no need? Maybe just to reap the benefits of it being symmetric.

Edit: Is somebody going to point out the mistake? What am I missing here?

jimjim
  • 9,855
  • 6
    For the usual derivative to exist, it is certainly not enough that the "right hand limit" exists. – Did Sep 18 '11 at 21:00
  • 1
    Differentiability --> continuity --> LH Limit = RH Limit – The Chaz 2.0 Sep 18 '11 at 23:01
  • @Didier : but $\frac {f(x+\delta)-f(x)}{\delta}$ is only the left side, I have not seen that $\frac {f(x)-f(x-\delta)}{\delta}$ as a requirement, are we talking about the same thing? – jimjim Sep 19 '11 at 01:24
  • 7
    @Arjang: It is not assumed that $\delta > 0$. In other words, $\lim_{\delta \to 0} \frac{f(x+\delta) - f(x)}{\delta}$ is a two-sided limit. – Jesse Madnick Jul 15 '12 at 13:28