41

I'm interested in the potential of such a technique. I got the idea from Moron's answer to this question, which uses the technique of differentiation under the integral.

Now, I'd like to consider this integral:

$$\int_{-\pi}^\pi \cos\big(y(1-e^{i n t})\big)\,\mathrm dt$$

I'd like to differentiate with respect to y. This will give the integral:

$$\int_{-\pi}^\pi -(1-e^{i n t})\sin\big(y(1-e^{i n t})\big)\,\mathrm dt$$

...if I'm correct. Anyway, I'm interested in obtaining results for this second integral using this technique, so I'm wondering whether solving the first integral can help give results for the second. I'm thinking of setting $y=1$ in the second integral; this should eliminate $y$ from the result and give me the specific integral I'm after.
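
As a numerical sanity check of this plan (a sketch assuming NumPy/SciPy; the value $n=3$ and the point $y=1$ are just illustrative choices, and since the integrand is complex-valued its real and imaginary parts are integrated separately), I compared a central difference quotient of the first integral with the second integral:

```python
import numpy as np
from scipy.integrate import quad

n = 3  # illustrative choice of the integer n

def complex_quad(g, a, b):
    # integrate a complex-valued function by splitting into real and imaginary parts
    re, _ = quad(lambda t: g(t).real, a, b)
    im, _ = quad(lambda t: g(t).imag, a, b)
    return re + 1j * im

def F(y):
    # the first integral: \int_{-pi}^{pi} cos(y (1 - e^{i n t})) dt
    return complex_quad(lambda t: np.cos(y * (1 - np.exp(1j * n * t))), -np.pi, np.pi)

def dF(y):
    # the second integral: \int_{-pi}^{pi} -(1 - e^{i n t}) sin(y (1 - e^{i n t})) dt
    def g(t):
        z = 1 - np.exp(1j * n * t)
        return -z * np.sin(y * z)
    return complex_quad(g, -np.pi, np.pi)

y0, h = 1.0, 1e-5
print((F(y0 + h) - F(y0 - h)) / (2 * h))  # numerical derivative of the first integral at y = 1
print(dF(y0))                             # the second integral at y = 1; should agree closely
```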

The trouble is, I'm not sure I can use the technique of differentiation under the integral. I want to know how I can apply this technique to the integrals above. Any pointers are appreciated.

For instance, for what values of $y$ is this valid?

Matt Groff
  • 5,749
  • 3
    Wikipedia has a nice article about this: http://en.wikipedia.org/wiki/Differentiation_under_the_integral_sign You may differentiate under the integral sign provided your function is nice enough (in this case: a continuous function with a continuous derivative) – Fredrik Meyer Dec 03 '10 at 14:16
  • @Fredrik: Wikipedia's statement covers the Riemann integral case, but the statement I give below is for an integral with respect to an arbitrary measure (including the Lebesgue integral); in particular it covers the case of differentiating a series term-by-term. – Qiaochu Yuan Dec 03 '10 at 14:35
  • 2
    Richard Feynman has remarks on solving problems this way in some of his books. As a physicist, all his functions were well behaved. – Ross Millikan Dec 03 '10 at 16:50

3 Answers

56

Wikipedia doesn't seem to have a precise statement of this theorem. Here's a very general statement.

Theorem (Differentiation under the integral sign): Let $U$ be an open subset of $\mathbb{R}$ and let $E$ be a measure space (which you can freely take to be any open subset of $\mathbb{R}^n$ if you want). Let $f : U \times E \to \mathbb{R}$ have the following properties:

  • $x \mapsto f(t, x)$ is integrable for all $t$,
  • $t \mapsto f(t, x)$ is differentiable for all $x$,
  • for some integrable function $g$, for all $x \in E$, and for all $t \in U$,

$$\left| \frac{\partial f}{\partial t}(t, x) \right| \le g(x).$$

Then the function $x \mapsto \frac{\partial f}{\partial t}(t, x)$ is integrable for all $t$. Moreover, the function $F : U \to \mathbb{R}$ defined by

$$F(t) = \int_E f(t, x) \mu(dx)$$

is differentiable, and

$$F'(t) = \int_E \frac{\partial f}{\partial t}(t, x) \mu(dx).$$

In practice the only condition that isn't easily satisfiable is the third one. It is satisfied in this case, so you're fine (for all $y$).
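
For a concrete feel for the third condition in this problem, note that on any bounded interval of $y$-values, $|\partial f/\partial y| = |(1-e^{int})\sin(y(1-e^{int}))|$ is bounded by a constant, and a constant is integrable over the finite interval $[-\pi,\pi]$, so it can serve as the dominating function $g$. A minimal numerical sketch of this (assuming NumPy; the integer $n=3$ and the $y$-interval $[-5,5]$ are arbitrary illustrative choices):

```python
import numpy as np

n = 3                                       # illustrative choice
y_grid = np.linspace(-5.0, 5.0, 401)        # a bounded set of y-values (playing the role of U)
t_grid = np.linspace(-np.pi, np.pi, 2001)

Y, T = np.meshgrid(y_grid, t_grid, indexing="ij")
Z = 1 - np.exp(1j * n * T)
partial_y = -Z * np.sin(Y * Z)              # d/dy of cos(y (1 - e^{i n t}))

# numerical stand-in for a dominating constant: finite, so g(t) := this constant works
print(np.max(np.abs(partial_y)))
```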

Qiaochu Yuan
  • 468,795
  • I'm wondering, additionally, if I can easily add a third variable to this. The idea is that I'd like to differentiate twice under the integral; once with respect to one variable, and then once with respect to a different variable. I'm wondering if any additional complications arise. (I'm debating whether I should ask this in a separate question) – Matt Groff Dec 03 '10 at 17:29
  • 2
    Yes; apply the theorem twice. (in the first application E will be an open subset of R^2 instead of an open subset of R.) – Qiaochu Yuan Dec 03 '10 at 17:36
  • How is this result proven? – Brian Bi Apr 20 '14 at 22:08
  • 2
    @BrianBi: You basically use the definition of the derivative, then convert the limit $(F(t+h)-F(t))/h$ to a sequential limit $(F(t+h_n)-F(t))/h_n$, use the linearity of the integral and apply the dominated convergence theorem. One thing I find important is that often one cannot find a function $g$ as above. But most of the time, this is only due to the fact that $U$ is "too large"; consider e.g. the (trivial) example $f:(0,\infty)\times\Bbb{N}\to\Bbb{R}, (y,n)\mapsto 1/(y \cdot n^2)$. But because differentiability is a local property, we can shrink $U$ (e.g. to $(\epsilon,\infty)$ above). – PhoemueX Dec 20 '14 at 09:35
  • 1
    The theorem given in the German Wikipedia (see Yaroslav's answer below) requires $f$ to be continuously differentiable which is because, I assume, they use the mean-value theorem to be able to apply the dominated convergence theorem and taking the limit of the derivative inside the integral then requires continuity. Is the theorem above really true if $f'$ is not necessarily continuous? – balu Feb 03 '16 at 20:54
  • @balu: I haven't thought about it since writing this answer, but I would not be surprised if you only need the third domination condition. – Qiaochu Yuan Feb 04 '16 at 20:22
  • 1
    You mean, one might not need continuity of the derivative due to the domination condition? – balu Feb 05 '16 at 06:00
  • 1
    @balu: This is perhaps too old for a comment, but I think it should be in the record. No, you don't need continuity of partial derivative. Notice that the mean value theorem does not require continuity of derivative, only its existence. – Mittens May 26 '23 at 15:30
  • Hello @QiaochuYuan. Do you know if the theorem holds if the inequality $$\left| \frac{\partial f}{\partial t}(t, x) \right| \le g(x),$$only holds $\mu$-a.e.? – psie Jul 05 '24 at 13:36
5

The general theorem written above by Qiaochu Yuan is formulated, with proofs, in the German and French Wikipedia articles.

They also give links to some literature, but I wasn't able to find the French book, and in the German books I haven't found the statement in this generality.

  • 1
    A theorem formulation in a foreign language may not be the best for the reference, but if one doesn't know the language, I've realised that even a word-by-word translation may be easier than proving the theorem yourself. – Yaroslav Nikitenko Oct 24 '14 at 15:15
1

This is rather a comment on this old posting about a very well known and useful result. The result stated by Qiaochu Yuan is perhaps the one with the weakest conditions. As he mentioned in his posting, it is the third condition that requires the most effort to verify.

There are other versions of this result with stronger conditions that in many instances are easier to check:

Proposition: Suppose $(\Omega,\mathscr{F},\mu)$ is a $\sigma$-finite measure space. Let $I$ be a finite interval in $\mathbb{R}$ and $F:I\times\Omega\rightarrow\mathbb{R}$ be a function that satisfies

  1. For each $t\in I$, $\omega\mapsto F(t,\omega)\in L_1(\mu)$.
  2. For each $\omega\in\Omega$, $F$ is continuously differentiable with respect to $t$, i.e., $s\mapsto\frac{\partial F}{\partial t}(s,\omega)$ is continuous.
  3. The map $(s,\omega)\mapsto \frac{\partial F}{\partial t}(s,\omega)\in L_1(I\times\Omega,\lambda_1\otimes\mu)$ where $\lambda_1$ is Lebesgue's measure on $I$.
  4. The function $v(s):=\int_\Omega\frac{\partial F}{\partial t}(s,\omega)\,\mu(d\omega)$ is continuous at some point $t_0\in I$.

Then, $f(t):=\int_\Omega F(t,\omega)\,\mu(d\omega)$ is differentiable at $t=t_0$, and $f'(t_0)=\int_\Omega\frac{\partial F}{\partial t}(t_0,\omega)\,\mu(d\omega)$.

The assumptions listed above are more restrictive than the ones in Yuan's theorem, but at times they may be easier to verify.

Here is a short proof of the Proposition. Condition 2 and the fundamental theorem of calculus give $$F(t_0+h,\omega)-F(t_0-h,\omega)=\int^{t_0+h}_{t_0-h}\frac{\partial F}{\partial t}(s,\omega)\,ds$$ Condition 3 and the Fubini-Tonelli theorem imply that $v(s):=\int_\Omega\frac{\partial F}{\partial t}(s,\omega)\,\mu(d\omega)$ is well defined for $s\in I$, and that \begin{align} \frac{f(t_0+h)-f(t_0-h)}{2h}&=\frac{1}{2h}\int_\Omega\int^{t_0+h}_{t_0-h}\frac{\partial F}{\partial t}(s,\omega)\,ds\,\mu(d\omega)\\ &=\frac{1}{2h}\int^{t_0+h}_{t_0-h}\int_\Omega\frac{\partial F}{\partial t}(s,\omega)\,\mu(d\omega)\,ds=\frac{1}{2h}\int^{t_0+h}_{t_0-h}v(s)\,ds \end{align} Condition 4 and the fundamental theorem of calculus imply that $f'(t_0)$ exists and $$f'(t_0)=v(t_0)=\int_\Omega\frac{\partial F}{\partial t}(t_0,\omega)\,\mu(d\omega)$$ (The same computation with the one-sided quotient $\frac{f(t_0+h)-f(t_0)}{h}=\frac{1}{h}\int^{t_0+h}_{t_0}v(s)\,ds$ shows that the ordinary, not merely symmetric, derivative exists.)
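
Before the examples, here is a tiny symbolic sanity check of the conclusion $f'(t)=\int_\Omega\frac{\partial F}{\partial t}(t,\omega)\,\mu(d\omega)$ on a toy case (a sketch assuming SymPy; the choice $F(t,\omega)=e^{-t\omega}$ with $\Omega=[0,1]$ and Lebesgue measure is mine, purely for illustration):

```python
import sympy as sp

t, w = sp.symbols("t w", positive=True)
F = sp.exp(-t * w)                            # toy F(t, w); all four conditions hold on any finite interval of positive t

f = sp.integrate(F, (w, 0, 1))                # f(t) = (1 - e^{-t}) / t
lhs = sp.diff(f, t)                           # derivative of the integral
rhs = sp.integrate(sp.diff(F, t), (w, 0, 1))  # integral of the derivative
print(sp.simplify(lhs - rhs))                 # expected output: 0
```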


Example 1: For the problem in the OP, i.e. $f(y)=\int^{\pi}_{-\pi}\cos\big(y(1-e^{int})\big)\,dt$, the conditions in the proposition I stated here are easy to verify, since all the functions involved are continuous and the domain of integration is bounded.

Example 2: Consider $$f(s):=\int^\infty_0 e^{-s\omega}\frac{\sin \omega}{\omega}\,d\omega, \qquad s>0$$ Here $F(t,\omega)=e^{-t\omega}\frac{\sin \omega}{\omega}$, and $\frac{\partial F}{\partial t}(s,\omega)=-e^{-s\omega}\sin \omega$. Conditions 1 and 2 are obvious. For condition 3, notice that for any finite open interval $(a, b)\subset(0,\infty)$ $$\int^b_a\int^\infty_0\Big|\frac{\partial F}{\partial t}(s,\omega)\Big|\,d\omega\,ds\leq\int^b_a\frac{ds}{s}=\log(b/a)<\infty $$ Condition 4 follows by dominated convergence, since $\Big|\frac{\partial F}{\partial t}(s,\omega)\Big| \leq e^{-s\omega}\leq e^{-a\omega}$ whenever $0<a<s<b$. Hence, for any $s>0$ $$f'(s)=-\int^\infty_0e^{-s\omega}\sin\omega\,d\omega$$

To use the theorem stated in Yuan's posting, only the condition in the third bullet requires some verification. This may not be hard, but it requires some skill: Fix $s_0>0$ and choose $0<\delta\ll s_0$. For $|h|<\delta$ $$\frac{f(s_0+h)-f(s_0)}{h}=\int^\infty_0\frac{e^{-h\omega}-1}{h}e^{-s_0\omega}\frac{\sin\omega}{\omega}\,d\omega$$ Using a Taylor expansion and the convexity of the map $t\mapsto e^{-t}$, we obtain $$\Big|\frac{e^{-h\omega}-1}{h}\Big|\leq\frac{e^{|h|\omega}-1}{|h|}\leq\frac{e^{\delta\omega}-1}{\delta}\leq\frac{e^{\delta\omega}+e^{-\delta\omega}}{\delta}$$ An application of dominated convergence yields the desired result.
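
A quick numerical check of Example 2 (a sketch assuming NumPy/SciPy; the point $s_0=1$ is an arbitrary choice, and the reference value $-1/(1+s^2)$ is just the standard Laplace transform of $\sin\omega$ with a minus sign, not something used above):

```python
import numpy as np
from scipy.integrate import quad

def f(s):
    # f(s) = \int_0^infty e^{-s w} sin(w)/w dw ; np.sinc(w/pi) = sin(w)/w avoids the 0/0 at w = 0
    val, _ = quad(lambda w: np.exp(-s * w) * np.sinc(w / np.pi), 0, np.inf)
    return val

def f_prime_inside(s):
    # the differentiated integrand: -\int_0^infty e^{-s w} sin(w) dw
    val, _ = quad(lambda w: -np.exp(-s * w) * np.sin(w), 0, np.inf)
    return val

s0, h = 1.0, 1e-4
print((f(s0 + h) - f(s0 - h)) / (2 * h))  # central difference quotient of f
print(f_prime_inside(s0))                 # integral of the s-derivative
print(-1 / (1 + s0**2))                   # closed form; all three should agree closely
```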


Final comment: The theorem in Yuan's posting, which possibly has the weakest conditions under which Lebesgue integration can be used, requires some additional ingenuity to find a function $g$ that dominates the difference quotients, $\Big|\frac{F(t+h,\omega)-F(t,\omega)}{h}\Big|\leq g(\omega)$, uniformly in $h$. The proposition in my posting requires, in principle, much stronger conditions, but it may require less effort to apply.

Mittens
  • 46,352