
I am trying to compute an integral over the unit (hyper-)sphere $S^d = \{\vec{\omega} \in \mathbb{R}^{d+1} \,:\, \|\vec{\omega}\|=1\}$ by rewriting it as an integral over a $d$-manifold $\mathcal{M}$ that projects radially onto $S^d$; both are embedded in $\mathbb{R}^{d+1}$. Let $\vec{x} : [0,1]^d \rightarrow \mathbb{R}^{d+1}$ be a sufficiently smooth parametrization of $\mathcal{M} = \{\vec{x}(\vec{t}) \in \mathbb{R}^{d+1} \,:\, \vec{t} \in [0,1]^d\}$, and let a function $f:S^d\rightarrow \mathbb{R}$ be given. The radial projection from $\mathcal{M}$ onto $S^d$ is:

$$P: \mathbb{R}^{d+1} \rightarrow \mathbb{R}^{d+1}, \,P(\vec{x}) = \frac{\vec{x}}{\|\vec{x}\|}$$

The integral then reads (in light of Shifrin's comment, this is where I made my mistake):

$$\int_{S^d}f(\vec{\omega})\,d\vec{\omega} = \int_{S^d}f(P(\vec{x}))\,dP(\vec{x}) = \int_{\mathcal{M}}f(P(\vec{x}))\left|det[J_P(\vec{x})]\right|\,d\vec{x} \\ J_P(\vec{x}) = \frac{\partial P}{\partial \vec{x}}(\vec{x})$$

However, unsurprisingly, the determinant of this Jacobian is zero, since it has a zero eigenvalue with eigenvector $\vec{x}$:

$$J_{P}(\vec{x}) = \frac{1}{\|\vec{x}\|}\left(I- \frac{\vec{x}\vec{x}^T}{\|\vec{x}\|^2}\right)$$
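
As a quick numerical sanity check of this Jacobian (a minimal numpy sketch; the setup and names are my own), one can compare it against finite differences and verify that $\vec{x}$ lies in its kernel:

```python
import numpy as np

def P(x):
    # Radial projection onto the unit sphere.
    return x / np.linalg.norm(x)

def J_P(x):
    # Analytic Jacobian: (I - x x^T / ||x||^2) / ||x||.
    r = np.linalg.norm(x)
    return (np.eye(len(x)) - np.outer(x, x) / r**2) / r

x = np.array([1.0, -2.0, 0.5])
J = J_P(x)

# Central-difference Jacobian of P for comparison.
eps = 1e-6
J_fd = np.column_stack([(P(x + eps * e) - P(x - eps * e)) / (2 * eps)
                        for e in np.eye(len(x))])

print(np.allclose(J, J_fd, atol=1e-6))  # True: analytic Jacobian matches
print(np.allclose(J @ x, 0.0))          # True: x is in the kernel, so det J_P = 0
```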

I have seen elsewhere (e.g. the Wikipedia page on solid angle, for $d=2$) that one instead gets a factor $\frac{1}{\|\vec{x}\|^2}\left(\frac{\vec{x}}{\|\vec{x}\|} \cdot \vec{n}_{\vec{x}}\right)$ (where $\vec{n}_{\vec{x}}$ is the unit normal of $\mathcal{M}$ at $\vec{x}$), i.e.:

$$\int_{S^d}f(\vec{\omega})\,d\vec{\omega} = \int_{\mathcal{M}}f(P(\vec{x}))\frac{1}{\|\vec{x}\|^2}\left(\frac{\vec{x}}{\|\vec{x}\|} \cdot \vec{n}_{\vec{x}}\right)\,d\vec{x}$$

After this I can use:

$$\int_{\mathcal{M}}f(P(\vec{x}))\frac{1}{\|\vec{x}\|^2}\left(\frac{\vec{x}}{\|\vec{x}\|} \cdot \vec{n}_{\vec{x}}\right)\,d\vec{x} = \int_{[0,1]^d}f(P(\vec{x}(\vec{t})))\frac{1}{\|\vec{x}(\vec{t})\|^2}\left(\frac{\vec{x}(\vec{t})}{\|\vec{x}(\vec{t})\|} \cdot \vec{n}_{\vec{x}(\vec{t})}\right)\sqrt{det[g(\vec{t})]}\,d\vec{t} \\ g(\vec{t}) = \left(\frac{\partial \vec{x}}{\partial \vec{t}}(\vec{t})\right)^T\left(\frac{\partial \vec{x}}{\partial \vec{t}}(\vec{t})\right),$$

to compute the integral.
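
For what it's worth, here is a small numerical check of this recipe for $d=2$, under the assumption that $\mathcal{M}$ is the sphere of radius $R$ (so $\vec{n}_{\vec{x}} = \vec{x}/\|\vec{x}\|$ and $\sqrt{det[g]}$ has a simple closed form for the chart below; the test function and names are my own). With $f(\vec{\omega}) = \omega_z^2$ the result should be $4\pi/3$ for any $R$:

```python
import numpy as np

R = 2.0  # radius of the test manifold M (a sphere centred at the origin)

def x_of_t(t1, t2):
    # Parametrization of M over [0,1]^2 via spherical angles.
    th, ph = np.pi * t1, 2.0 * np.pi * t2
    return R * np.array([np.sin(th) * np.cos(ph),
                         np.sin(th) * np.sin(ph),
                         np.cos(th)])

def f(w):
    return w[2] ** 2  # integral of omega_z^2 over S^2 is 4*pi/3

N = 200  # midpoint rule on [0,1]^2
ts = (np.arange(N) + 0.5) / N
total = 0.0
for t1 in ts:
    for t2 in ts:
        x = x_of_t(t1, t2)
        r = np.linalg.norm(x)
        n = x / r                                               # outward unit normal of this M
        sqrt_det_g = 2.0 * np.pi**2 * R**2 * np.sin(np.pi * t1) # closed form for this chart
        total += f(x / r) * (1.0 / r**2) * np.dot(x / r, n) * sqrt_det_g
total /= N * N

print(total, 4.0 * np.pi / 3.0)  # ~4.18879 for both
```

The same driver works for any other parametrized $\mathcal{M}$ visible from the origin, once $\sqrt{det[g]}$ and $\vec{n}_{\vec{x}}$ are supplied (e.g. numerically from $\frac{\partial \vec{x}}{\partial \vec{t}}$).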

Provided that the above is not wrong, my question is: how is the factor $\frac{1}{\|\vec{x}\|^2}\left(\frac{\vec{x}}{\|\vec{x}\|} \cdot \vec{n}_{\vec{x}}\right)$ derived in the first place, and how would I go about deriving a similar factor for other projections (the current one projects onto the unit sphere)?

Edit: Taking into account Shifrin's comment, I realized that I cannot split the two transformations as I did above. Namely, when computing the metric tensor's determinant, I had used a property which doesn't hold for non-square matrices:

$$det[AB] = det[BA] = det[A]det[B],$$

in order to get:

$$det \left[\left(\frac{\partial P}{\partial \vec{t}}\right)^T\left(\frac{\partial P}{\partial \vec{t}}\right)\right] = det \left[\left(\frac{\partial P}{\partial \vec{x}}\frac{\partial \vec{x}}{\partial \vec{t}}\right)^T\left(\frac{\partial P}{\partial \vec{x}}\frac{\partial \vec{x}}{\partial \vec{t}}\right)\right] \\ \ne det \left[\left(\frac{\partial P}{\partial \vec{x}}\right)^T\left(\frac{\partial P}{\partial \vec{x}}\right)\right] det \left[\left(\frac{\partial \vec{x}}{\partial \vec{t}}\right)^T\left(\frac{\partial \vec{x}}{\partial \vec{t}}\right)\right].$$
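
A random numerical instance makes the failure concrete (a throwaway numpy check; the shapes mimic $\frac{\partial \vec{x}}{\partial \vec{t}} \in \mathbb{R}^{4\times 3}$ and $\frac{\partial P}{\partial \vec{x}} \in \mathbb{R}^{4\times 4}$ for $d=3$):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 3))  # plays the role of dx/dt: (d+1) x d, not square
B = rng.standard_normal((4, 4))  # plays the role of dP/dx: (d+1) x (d+1)

lhs = np.linalg.det((B @ A).T @ (B @ A))
rhs = np.linalg.det(B.T @ B) * np.linalg.det(A.T @ A)
print(lhs, rhs)  # generically different: det[(BA)^T BA] != det[B^T B] det[A^T A]
```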

Nevertheless, it seems that in 3D this can instead be factored (following the solid angle article on Wikipedia) as follows:

$$\sqrt{det \left[\left(\frac{\partial P}{\partial \vec{t}}\right)^T\left(\frac{\partial P}{\partial \vec{t}}\right)\right]} = \frac{1}{\|\vec{x}\|^2}\left|\vec{n}_{\vec{x}} \cdot \frac{\vec{x}}{\|\vec{x}\|}\right| \sqrt{det\left[\left(\frac{\partial \vec{x}}{\partial \vec{t}}\right)^T\left(\frac{\partial \vec{x}}{\partial \vec{t}}\right)\right]}.$$

I am assuming that, after enough algebraic manipulation, the above can be proven at least in 3D. I am not aware whether there is a more general result for $d$ dimensions, or how/whether this generalizes to maps other than $P$ (e.g. $Q:\mathcal{M} \rightarrow \mathcal{N}$, $Q$ sufficiently smooth, $\mathcal{N}$ being a different manifold). I have the suspicion that this can be derived somehow from the divergence theorem.
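
In the meantime, the claimed 3D factorization is at least easy to verify numerically (again a throwaway numpy sketch; $A$ stands in for $\frac{\partial \vec{x}}{\partial \vec{t}}$ and $B$ for $J_P(\vec{x})$; since the identity is pointwise, $\vec{x}$ and the columns of $A$ can be drawn independently):

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.standard_normal(3)          # a point on M
A = rng.standard_normal((3, 2))     # columns: tangent vectors dx/dt1, dx/dt2 at x
r = np.linalg.norm(x)
B = (np.eye(3) - np.outer(x, x) / r**2) / r   # J_P(x)

n = np.cross(A[:, 0], A[:, 1])
n /= np.linalg.norm(n)              # unit normal of M at x

lhs = np.sqrt(np.linalg.det((B @ A).T @ (B @ A)))
rhs = (1.0 / r**2) * abs(np.dot(n, x / r)) * np.sqrt(np.linalg.det(A.T @ A))
print(lhs, rhs)  # agree to machine precision
```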

Note that in my comments below my question, $S$ corresponds to $P$ (I cannot edit the comments).

lightxbulb
  • You are using $S$ both for the projection mapping and for what I presume is a $(d-1)$-dimensional manifold. This is highly confusing. The Jacobian you are referring to needs to be the Jacobian of the map from the manifold to the sphere, not the map on Euclidean space. – Ted Shifrin Sep 29 '21 at 20:52
  • @TedShifrin I am assuming I have to compute the determinant of $\left(\frac{\partial \vec{S}}{\partial \vec{x}}\frac{\partial \vec{x}}{\partial \vec{t}}\right)^T\left(\frac{\partial \vec{S}}{\partial \vec{x}}\frac{\partial \vec{x}}{\partial \vec{t}}\right)$ instead? Which would supposedly not be zero. My mistake arose from using a property valid for square matrices: $det[A^TB^TBA]=det[A^TA]det[B^TB]$ on non-square matrices (namely $\frac{\partial \vec{x}}{\partial \vec{t}}$). Thank you very much for the comment, feel free to formulate it as an answer so that I could accept it. – lightxbulb Sep 29 '21 at 21:16
  • Considering the above and the term from wikipedia for projecting on the sphere, it seems that if $A = \frac{\partial \vec{x}}{\partial \vec{t}}$ and $B=\frac{\partial \vec{S}}{\partial \vec{x}}$, then I should be able to prove: $det[A^TB^TBA] = \frac{1}{|\vec{x}|^2}\left(\frac{\vec{x}}{|\vec{x}|} \cdot \vec{n}_{\vec{x}}\right)det[A^TA]$ (at least in 3D, the unnormalized version of $\vec{n}_{\vec{x}}$ is the cross product of the columns in $A$ in that case). – lightxbulb Sep 29 '21 at 21:53
  • Please sort out your notation carefully, and then I'll try to write an answer. One letter for the hypersurface, one letter for the sphere, and $S$ (or maybe $P$) for the radial projection. – Ted Shifrin Sep 29 '21 at 22:13
  • @TedShifrin I have updated the notation. The $d\vec{\omega}$ and $d\vec{x}$ are to be interpreted as the respective area elements at $\vec{\omega}$ and $\vec{x}$, while $d\vec{t} = dt_1\ldots dt_d$. – lightxbulb Sep 29 '21 at 23:45
  • You're still missing the point that the projection maps […] – Ted Shifrin Sep 30 '21 at 17:17

1 Answer

As I said in the comments, you cannot apply the change of variables theorem unless you look at the mapping from the hypersurface to the sphere. So your "zero determinant" is a complete red herring.

The most powerful way to do such computations is with differential forms. You can see the details of one such computation here. Write $x_1,\dots,x_{d+1}$ for the ambient coordinates on $\Bbb R^{d+1}$ (so as not to collide with your parameters $\vec t$). Of course, the normal $\vec n$ to your hypersurface $\mathcal M$ has to appear, since the element of "surface area" on an oriented hypersurface is $$d\vec\sigma = \iota_{\vec n}\,dx_1\wedge\dots\wedge dx_{d+1} = \sum_{i=1}^{d+1} (-1)^{i-1} n_i\, dx_1\wedge\dots\wedge dx_{i-1}\wedge dx_{i+1}\wedge\dots\wedge dx_{d+1}.$$ In particular, the element of "surface area" on the unit sphere (where the outward unit normal is the position vector itself) is $$d\vec\omega = \sum_{i=1}^{d+1} (-1)^{i-1} x_i\, dx_1\wedge\dots\wedge dx_{i-1}\wedge dx_{i+1}\wedge\dots\wedge dx_{d+1}.$$ As the computation in that linked answer shows for $d=2$, solid angle appears here quite naturally, but be careful about the exponent on $r=\|\vec x\|$: if $\pi\colon\Bbb R^{d+1}-\{0\}\to S^d$ is the radial projection (your $P$), $\pi(\vec x) = \dfrac{\vec x}r$, then the pullback is $$\pi^*d\vec\omega = d\vec\Omega = \frac1{r^{d+1}}\sum_{i=1}^{d+1} (-1)^{i-1} x_i\, dx_1\wedge\dots\wedge dx_{i-1}\wedge dx_{i+1}\wedge\dots\wedge dx_{d+1}.$$
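
For instance, for $d=2$ this pullback is precisely the classical solid angle form (a quick consistency check of the exponent): $$\pi^*d\vec\omega = \frac{x_1\,dx_2\wedge dx_3 - x_2\,dx_1\wedge dx_3 + x_3\,dx_1\wedge dx_2}{(x_1^2+x_2^2+x_3^2)^{3/2}},$$ with $r^{d+1} = r^3$ in the denominator.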

If we now restrict this solid angle form to $\mathcal M$, you want to see how it is related to $d\vec\sigma$. This is quite analogous to the flux computations that appear in multivariable calculus (see, e.g., my YouTube lecture). In particular, to get the flux of $\vec F = (F_1,\dots,F_{d+1})$ across the hypersurface $\mathcal M$, we want $$\int_{\mathcal M} \vec F\cdot\vec n\, d\vec\sigma = \int_{\mathcal M} \sum_{i=1}^{d+1} (-1)^{i-1} F_i\,dx_1\wedge\dots\wedge dx_{i-1}\wedge dx_{i+1}\wedge\dots\wedge dx_{d+1}.$$ As I explained in that YouTube lecture, you can see that this magic formula is correct by evaluating on an oriented orthonormal basis for the tangent plane of $\mathcal M$, noting that putting $\vec n$ at the head of that list gives an oriented orthonormal basis for $\Bbb R^{d+1}$; only the normal component of $\vec F$ appears, since any tangential component gives a $0$ term in the determinant.
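
Spelled out (just restating the determinant argument above as a formula): if $(v_1,\dots,v_d)$ is an oriented orthonormal basis for the tangent space of $\mathcal M$ at a point, so that $(\vec n, v_1,\dots,v_d)$ is an oriented orthonormal basis of $\Bbb R^{d+1}$, then expanding a determinant along its first column gives $$\Big(\sum_{i=1}^{d+1} (-1)^{i-1} F_i\,dx_1\wedge\dots\wedge dx_{i-1}\wedge dx_{i+1}\wedge\dots\wedge dx_{d+1}\Big)(v_1,\dots,v_d) = \det[\vec F\;|\;v_1\;|\;\dots\;|\;v_d] = \vec F\cdot\vec n,$$ since the tangential part of $\vec F$ is a combination of the $v_i$ and contributes a determinant with linearly dependent columns.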

In our situation, restricting $d\vec\Omega$ to $\mathcal M$ is finding the flux of $\vec F = \dfrac{\vec x}{r^{d+1}}$, which is in turn the integral of $\vec F\cdot\vec n\,d\vec\sigma$, as you desired.
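
Concretely, the integrand is $$\vec F\cdot\vec n\,d\vec\sigma = \frac{\vec x\cdot\vec n_{\vec x}}{\|\vec x\|^{d+1}}\,d\vec\sigma = \frac{1}{\|\vec x\|^{d}}\left(\frac{\vec x}{\|\vec x\|}\cdot\vec n_{\vec x}\right)d\vec\sigma,$$ which for $d=2$ is exactly the factor $\frac{1}{\|\vec x\|^{2}}\left(\frac{\vec x}{\|\vec x\|}\cdot\vec n_{\vec x}\right)$ quoted from the solid angle article in the question.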

I would urge you to learn the basics of differential form computations and get rid of Gram determinants and the awkward vector computations. :)

Ted Shifrin
  • Are you suggesting differential forms because the computations would be less tedious with those, or is there something inherently wrong with using the Gram determinant here (after taking into account the correction of using $det[A^TB^TBA]$ instead of $det[B^TB]det[A^TA]$)? Anyways, I will try to read up more on differential forms and specifically practice with some exercises, and I guess maybe even dig up Clifford calculus. – lightxbulb Sep 30 '21 at 18:31
  • Differential forms are the natural thing one wants to integrate in the first place, NOT functions. The Gram determinant stuff comes from pulling back by a parametrization, but it is not natural to work with parametrizations in this setting. It is much more natural to work with the ambient coordinates and use the unit normal. You might look at my YouTube lectures. – Ted Shifrin Sep 30 '21 at 18:41
  • I am mostly used to working with parametrisations of surfaces or Dirac deltas with implicit functions for representing surfaces. I will take a look at the lectures and try to read more on the topic. – lightxbulb Sep 30 '21 at 19:24
  • Well, you can pull my formulas back by your parametrization(s) any time you want. :) The pullback takes care of the fudge-factor jacobians automatically for you! – Ted Shifrin Sep 30 '21 at 19:31
  • Are the video lectures mentioned more comprehensive than the book you suggest in the other thread? Or is it the opposite? Or are the two complementary? I have studied linear algebra, analytic geometry, and calculus previously - we just never really studied differential forms in our course, so most of my knowledge on the subject is self-study (a little bit of Clifford algebra, tensor calculus, differential geometry, measure theory), and I haven't done many exercises on those either, so it hasn't been enough to solidify what I have read. – lightxbulb Sep 30 '21 at 23:39
  • The lectures are based on the book. Examples are different, and there are exercises and often a bit more in the book. It sounds like you just need to see a bit more depth and examples and do some computations. I have one lecture applying Stokes’s Thm in 4D (not in the book). – Ted Shifrin Sep 30 '21 at 23:44