
I am reading the chapter "The Theory of Analytic Functions" in "Mathematics For Physicists" by Dennery and Krzywicki, and I am confused by some lines of the following theorem on page 15:

Theorem. Let the real and imaginary parts $u(x,y)$ and $v(x,y)$ of a function of a complex variable $f(z)$ obey the Cauchy-Riemann equations and possess continuous first partial derivatives with respect to the variables $x$ and $y$ at all points of some region of the complex plane. Then $f(z)$ is differentiable throughout this region.

Proof. Since $u(x,y)$ and $v(x,y)$ have continuous first partial derivatives, there exist four positive numbers $\epsilon_1$, $\epsilon_2$, $\delta_1$, $\delta_2$, which can be made arbitrarily small as $\Delta x$ and $\Delta y$ tend to zero, and such that

$$\begin{cases}u(x+\Delta x,y+\Delta y) - u(x,y)=\frac{\partial u}{\partial x}\Delta x +\frac{\partial u}{\partial y}\Delta y + \epsilon_1\Delta x+\delta_1\Delta y\\ v(x+\Delta x,y+\Delta y) -v(x,y)=\frac{\partial v}{\partial x}\Delta x+\frac{\partial v}{\partial y}\Delta y+\epsilon_2\Delta x+\delta_2\Delta y \end{cases}\tag{4.8}$$

Using relations $(4.8)$, we easily deduce

$$\left|\frac{f(z+\Delta z)-f(z)}{\Delta z}-(\frac{\partial u}{\partial x}+i\frac{\partial v}{\partial x})\right|\leq\left|\frac{\Delta x}{\Delta z}(\epsilon_1+i\epsilon_2)\right|+\left|\frac{\Delta y}{\Delta z}(\delta_1+i\delta_2)\right| \tag{4.9}$$

But since,

$$\left|\frac{\Delta x}{\Delta z}\right|=\frac{|\Delta x|}{[(\Delta x)^2+(\Delta y)^2]^{1/2}}\leq 1$$ and $$\left|\frac{\Delta y}{\Delta z}\right|=\frac{|\Delta y|}{[(\Delta x)^2+(\Delta y)^2]^{1/2}}\leq 1$$

We obtain from $(4.9)$, on taking the limit $\Delta z\to 0$,

$$\frac{df}{dz}=\frac{\partial u}{\partial x}+i\frac{\partial v}{\partial x}\tag{4.10}$$ which shows that $f(z)$ is differentiable.

Now I have a number of questions:

a) How do the terms $\epsilon_1$, $\epsilon_2$, $\delta_1$ and $\delta_2$ appear in $(4.8)$? I understand how the first partial derivatives appear, i.e. from the definition of partial derivatives, but I cannot understand how the $\epsilon$'s and the $\delta$'s appear.

b) I understand that the second equation in $(4.8)$ is multiplied by $i$ and added to the first equation of $(4.8)$, which gives $f(z+\Delta z) - f(z)$, and that this is then divided by $\Delta z=\Delta x+i\Delta y$. But then how do the first partial derivatives of $u(x,y)$ and $v(x,y)$ with respect to $y$ vanish? Is it something like setting $\Delta y=0$ fixed, so that $\Delta z$ becomes $\Delta x$? I understand the triangle inequality is used after this step.

c) The inequalities following $(4.9)$ make complete sense, but how does taking the limit $\Delta z\to 0$ produce $(4.10)$? Does the limit somehow make the R.H.S. of $(4.9)$ vanish, so that the modulus on the L.H.S. has no choice other than to become zero, thus giving $(4.10)$?

Am I thinking in the right direction? Any guidance will be appreciated. (I have done coursework on Multivariable Calculus, Linear Algebra-I, Vector Spaces, and Tensor Calculus.)

2 Answers


Unsurprisingly, the book's argument isn't the most detailed/rigorous explanation.

$(4.8)$ can be derived in the given form in one way, or a very similar (and better, in my view) thing can be derived. We can use the mean value theorem from one-dimensional real analysis to infer: $$u(x+\Delta x,y+\Delta y)-u(x,y)\\=u(x+\Delta x,y+\Delta y)-u(x,y+\Delta y)+u(x,y+\Delta y)-u(x,y)\\=\partial_1u(\theta,y+\Delta y)\cdot\Delta x+\partial_2u(x,\vartheta)\cdot\Delta y\\=\partial_1u(x,y)\cdot\Delta x+\partial_2u(x,y)\cdot\Delta y+[\partial_1u(\theta,y+\Delta y)-\partial_1u(x,y)]\cdot\Delta x+[\partial_2u(x,\vartheta)-\partial_2u(x,y)]\cdot\Delta y\\=\frac{\partial u}{\partial x}\cdot\Delta x+\frac{\partial u}{\partial y}\cdot\Delta y+\epsilon_1\Delta x+\delta_1\Delta y$$ where $\theta$ lies between $x$ and $x+\Delta x$, $\vartheta$ lies between $y$ and $y+\Delta y$, and both are (very possibly non-continuous, random, horrible) functions of $x,y,\Delta x,\Delta y$. Now, the (joint) continuity of both partial derivatives ensures that the quantities $\epsilon_1$ and $\delta_1$, defined as the differences in square brackets $[\,\cdot\,]$, tend to zero (for fixed $x,y$ and in the joint limit $(\Delta x,\Delta y)\to(0,0)$).

Alternatively, instead of the mean value theorem we can use the more general mean value inequality, and acknowledge that having continuous partial derivatives implies having a continuous total derivative. Then we could write $(4.8)$ as $u(x+\Delta x,y+\Delta y)-u(x,y)=\frac{\partial u}{\partial x}\Delta x+\frac{\partial u}{\partial y}\Delta y+o(|\Delta z|)$ immediately, by the definition of the total derivative; in my opinion this makes the resulting derivation clearer and avoids the need for weird choice functions. The specific form of $\epsilon_1,\delta_1$ is not important; what matters is that the error is $o(|\Delta z|)$.
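
Concretely, in that formulation the whole argument is short: combining the two $o(|\Delta z|)$ expansions and using the Cauchy-Riemann equations to collect the linear part into a single complex factor gives
$$f(z+\Delta z)-f(z)=\left(\frac{\partial u}{\partial x}+i\frac{\partial v}{\partial x}\right)\Delta z+o(|\Delta z|),$$
so dividing by $\Delta z$ and letting $\Delta z\to0$ kills the $o(|\Delta z|)/\Delta z$ term and leaves exactly $(4.10)$.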

For $b)$, I don't fully understand what you're saying, but the Cauchy-Riemann equations are used, secretly, here. Since $\partial v/\partial y=\partial u/\partial x$, we have $\frac{\partial u}{\partial x}\Delta x+i\cdot\frac{\partial v}{\partial y}\Delta y=\frac{\partial u}{\partial x}\cdot\Delta z$, etc., and this is what takes us from $(4.8)$ to $(4.9)$.
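
Written out: adding $i$ times the second line of $(4.8)$ to the first and using both Cauchy-Riemann equations, $\partial u/\partial x=\partial v/\partial y$ and $\partial u/\partial y=-\partial v/\partial x$, gives
$$f(z+\Delta z)-f(z)=\left(\frac{\partial u}{\partial x}+i\frac{\partial v}{\partial x}\right)(\Delta x+i\Delta y)+(\epsilon_1+i\epsilon_2)\Delta x+(\delta_1+i\delta_2)\Delta y.$$
Nothing is set to zero; the $y$-derivatives are absorbed into the single complex coefficient. Dividing by $\Delta z=\Delta x+i\Delta y$, subtracting that coefficient and applying the triangle inequality is exactly $(4.9)$.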

As for $c)$, it's just because the error term in $(4.9)$ goes to zero. Those bounds $|\Delta x/\Delta z|\le1$ and $|\Delta y/\Delta z|\le1$ allow you to bound the error by $|\epsilon_1+i\epsilon_2|+|\delta_1+i\delta_2|$, and since all four of those terms tend to zero, the whole thing tends to zero as $\Delta z\to0$. In my preferred formulation, this is just because the error is $o(|\Delta z|)$ and $o(|\Delta z|)/|\Delta z|\to0$ by definition.
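
If a numerical sanity check helps, here is a minimal sketch (my own illustration with the sample function $f(z)=z^2$, i.e. $u=x^2-y^2$, $v=2xy$): the difference quotient approaches $\partial u/\partial x+i\,\partial v/\partial x=2x+2iy$ regardless of the direction from which $\Delta z\to0$.

```python
# Minimal numerical illustration (sample function f(z) = z^2, so
# u = x^2 - y^2, v = 2xy and u_x + i*v_x = 2x + 2iy = 2z).
import cmath

def f(z):
    return z * z

z0 = 1.0 + 2.0j
target = 2 * z0.real + 2j * z0.imag      # u_x + i*v_x evaluated at z0

direction = cmath.exp(1j * 0.7)          # arbitrary direction of approach
for k in range(1, 6):
    dz = direction * 10.0 ** (-k)
    quotient = (f(z0 + dz) - f(z0)) / dz
    print(k, abs(quotient - target))     # the error shrinks like |dz|
```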

FShrike

Question a):

The following is well-known from multivariable calculus: If the functions $u , v : \mathbb R^2 \to \mathbb R$ have continuous first partial derivatives, then they are differentiable.
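
Spelled out, differentiability of $u$ at $(x,y)$ means
$$u(x+\Delta x,y+\Delta y)-u(x,y)=\frac{\partial u}{\partial x}\Delta x+\frac{\partial u}{\partial y}\Delta y+o\!\left(\sqrt{(\Delta x)^2+(\Delta y)^2}\right)\quad\text{as }(\Delta x,\Delta y)\to(0,0),$$
and similarly for $v$.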

Now look at the question "Why is differentiability defined on multivariable functions this way?". It gives an equivalent characterization of differentiability for multivariable functions (in my answer to that question you can find a proof that it is equivalent to the standard definition).

This gives you functions (not numbers) $\epsilon_1, \delta_1, \epsilon_2, \delta_2$ as in $(4.8)$. These functions depend on the four real variables $x,y, \Delta x, \Delta y$ and have the property that $\lim_{(\Delta x, \Delta y) \to (0,0)}\epsilon_1(x,y,\Delta x, \Delta y) = 0$ etc.
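
As a concrete toy example, take $u(x,y)=xy$. Then
$$u(x+\Delta x,y+\Delta y)-u(x,y)=y\,\Delta x+x\,\Delta y+\Delta x\,\Delta y=\frac{\partial u}{\partial x}\Delta x+\frac{\partial u}{\partial y}\Delta y+\epsilon_1\Delta x+\delta_1\Delta y$$
with, for instance, $\epsilon_1(x,y,\Delta x,\Delta y)=\Delta y$ and $\delta_1=0$. The splitting is not unique ($\epsilon_1=0$, $\delta_1=\Delta x$ also works), but every valid choice tends to $0$ as $(\Delta x,\Delta y)\to(0,0)$, which is all that is used.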

Question b):

The assumption in the theorem is that the Cauchy-Riemann equations are satisfied. Thus $$\left(\frac{\partial u}{\partial x} + i \frac{\partial v}{\partial x}\right)(\Delta x + i \Delta y) = \frac{\partial u}{\partial x} \Delta x - \frac{\partial v}{\partial x} \Delta y + i \frac{\partial u}{\partial x} \Delta y + i \frac{\partial v}{\partial x} \Delta x \\= \frac{\partial u}{\partial x} \Delta x +\frac{\partial u}{\partial y} \Delta y + i \frac{\partial v}{\partial y} \Delta y + i\frac{\partial v}{\partial x} \Delta x .$$ This produces $(4.9)$.

Question c):

$\left|\frac{\Delta x}{\Delta z}\right|$ and $\left|\frac{\Delta y}{\Delta z}\right|$ are bounded by $1$, and $\epsilon_k(z, \Delta z) = \epsilon_k(x,y, \Delta x,\Delta y) \to 0$, $\delta_k(z, \Delta z) = \delta_k(x,y, \Delta x,\Delta y) \to 0$ as $\Delta z = \Delta x + i \Delta y \to 0$, i.e. as $(\Delta x, \Delta y) \to (0,0)$.
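
Explicitly, the right-hand side of $(4.9)$ is therefore bounded by
$$\left|\frac{\Delta x}{\Delta z}(\epsilon_1+i\epsilon_2)\right|+\left|\frac{\Delta y}{\Delta z}(\delta_1+i\delta_2)\right|\le|\epsilon_1+i\epsilon_2|+|\delta_1+i\delta_2|\to 0\quad\text{as }\Delta z\to 0.$$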

This shows that $\lim_{\Delta z \to 0} \frac{f(z+\Delta z)-f(z)}{\Delta z}$ exists and has the value $\frac{\partial u}{\partial x}(z) + i \frac{\partial v}{\partial x}(z)$, which is $(4.10)$.

Paul Frost