
I am trying to understand the conformal covariance of the Liouville measure and have been following these lecture notes. On page 30, under "informal proof", the author writes:

When we use the map $f$, a small circle of radius $\varepsilon$ is mapped approximately into a small circle of radius $\varepsilon' = |f'(z)|\varepsilon$ around $f(z)$. So $e^{\gamma h_\varepsilon(z)} \varepsilon^{\frac{\gamma^2}2}dz$ approximately corresponds to $$ e^{\gamma h'_{|f'(z)|\varepsilon}(z')} \varepsilon^{\frac{\gamma^2}2}\frac{dz'}{|f'(z)|^2} $$ by the usual change of variable formula.

For context, $f: D\to D'$ is a conformal map and $h_\varepsilon(z)$ is the circle average (around point $z$ with radius $\varepsilon$) of a Gaussian free field.

My question is more fundamental than all these details: where does the $|f'(z)|^2$ come from? I understand it probably has something to do with the Jacobian, but I have no idea why there is a need to raise to the power of $2$. Any hint?

Tham
  • Nice to see some GFF questions finally. I believe this just comes from the change of coordinates for the conformal map, since the square is going to give you the sum over the partial derivatives coming from the determinant. – a.s. graduate student Nov 16 '22 at 11:18
  • Yea, GFF is pretty interesting. Oh man! I feel so stupid for asking this question now.... Looks like it's time for me to redo multi-variable calculus again. – Tham Nov 16 '22 at 15:40

1 Answer


This all basically boils down to some complex analysis, namely the Cauchy-Riemann equations and the Jacobian determinant.

Recall that $f(z) = f(x+iy) = u(x,y) + iv(x,y)$. Then, using the Cauchy-Riemann equations, we know that the partial derivatives satisfy the following relations:

$$ \partial_x u = \partial_y v \quad\text{ and }\quad\partial_y u = -\partial_x v \quad (1) $$ Plugging this into the Jacobian of $f$, we can formally compute the determinant as follows: $$D(f(x+iy))=\bigg| \begin{pmatrix} u_x & u_y \\ v_x & v_y \\ \end{pmatrix} \bigg| \stackrel{(1)}{=} \bigg| \begin{pmatrix} u_x & u_y \\ -u_y & u_x \end{pmatrix} \bigg| = u^2_x(x,y) + u^2_y(x,y). $$ Secondly, one needs the Wirtinger derivative, given by $$ \partial_z = \frac{1}{2}(\partial_x - i \partial_y). \quad (2) $$ If this seems unreasonable or foreign, you can check out What is the intuition behind the Wirtinger derivatives? for some intuition. These derivatives are basically used to rewrite the ''real'' partial derivatives into something complex, and one can prove that $\partial_{\overline{z}} f(z) = 0$ iff the Cauchy-Riemann equations hold.
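As a quick numerical sanity check of the above (my own illustration, not part of the original argument), one can verify the Cauchy-Riemann relations (1) and the determinant formula $\det J = u_x^2 + u_y^2$ for the holomorphic map $f(z) = z^2$ at a sample point, using central finite differences:

```python
# Sanity check for f(z) = z^2 at z0 = 1 + 2i:
# verify the Cauchy-Riemann relations (1) and det J = u_x^2 + u_y^2.
def f(z):
    return z * z

z0 = 1.0 + 2.0j
h = 1e-6  # finite-difference step

# Partial derivatives of u = Re f and v = Im f by central differences.
u_x = (f(z0 + h).real - f(z0 - h).real) / (2 * h)
u_y = (f(z0 + 1j * h).real - f(z0 - 1j * h).real) / (2 * h)
v_x = (f(z0 + h).imag - f(z0 - h).imag) / (2 * h)
v_y = (f(z0 + 1j * h).imag - f(z0 - 1j * h).imag) / (2 * h)

# (1): u_x = v_y and u_y = -v_x.
assert abs(u_x - v_y) < 1e-4 and abs(u_y + v_x) < 1e-4

det_J = u_x * v_y - u_y * v_x
assert abs(det_J - (u_x**2 + u_y**2)) < 1e-3
print(det_J)  # ≈ 20, since u_x = 2x = 2 and u_y = -2y = -4 at z0
```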

Now applying (2) to our function $f$, we get $$ \partial_z f(z) = \frac{1}{2} \big(u_x + i v_x - i(u_y + i v_y)\big) = \frac{1}{2}\big((u_x + v_y) + i(v_x - u_y)\big) \stackrel{(1)}{=} u_x - i u_y. $$ Hence we conclude that $$ |f^{\prime}(z)|^2 = u^2_x(x,y) + u^2_y(x,y) = D(f(x+iy)). \quad (*) $$ Finally, recall the change-of-variables formula: for a $C^1$-diffeomorphism $f : U \to V$ and an integrable function $\varphi$ on $U$, $$ \int_U \varphi(x)\, d\lambda(x) = \int_V \varphi(f^{-1}(y))\, |J_{f^{-1}}(y)|\, d\lambda(y) = \int_V \varphi(f^{-1}(y))\, |J_f(f^{-1}(y))|^{-1}\, d\lambda(y). $$ In your equation above, the author uses this with $x = z$ and $y = z' = f(z)$: by $(*)$, the volume element transforms as $dz = \frac{dz'}{|f'(z)|^2}$, which gives the desired equation.
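To see $(*)$ and the change of variables in action, here is a numerical check of my own (the choice $f(z) = z^2$ on the square $U = [1,2]\times[1,2]$, where $f$ is injective, is an assumption for illustration): the area of the image $f(U)$ should equal $\int_U |f'(z)|^2 \, d\lambda(z)$.

```python
# Check Area(f(U)) = \int_U |f'(z)|^2 dA for f(z) = z^2 on U = [1,2]^2.
import cmath
import random

# \int_U |f'(z)|^2 dA by the midpoint rule; |f'(z)|^2 = |2z|^2 = 4(x^2 + y^2).
n = 400
dx = 1.0 / n
jac_integral = sum(
    4 * ((1 + (i + 0.5) * dx) ** 2 + (1 + (j + 0.5) * dx) ** 2) * dx * dx
    for i in range(n) for j in range(n)
)

# Area of V = f(U) by Monte Carlo: w lies in V iff its principal square
# root lies in U (valid here since U sits in the first quadrant).
random.seed(0)
box_area = 6.0 * 6.0  # V fits in the box [-3, 3] x [2, 8]
hits = 0
N = 200_000
for _ in range(N):
    w = complex(random.uniform(-3, 3), random.uniform(2, 8))
    z = cmath.sqrt(w)
    if 1 <= z.real <= 2 and 1 <= z.imag <= 2:
        hits += 1
image_area = box_area * hits / N

print(jac_integral, image_area)  # both close to the exact value 56/3 ≈ 18.67
```

The exact value is $\int_1^2\!\int_1^2 4(x^2+y^2)\,dx\,dy = 56/3$, and both estimates land near it, confirming that $|f'(z)|^2$ is precisely the area-distortion factor.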