
Let $(M, g)$ be a Riemannian manifold of dimension $n \geq 4$, and let $W$ be the $(0, 4)$ Weyl tensor on $M$. Let $\{e_i : 1 \leq i \leq n\}$ be an orthonormal frame on $M$, that is, a family of smooth vector fields $e_i: U \to T M$, with $U \subseteq M$ open and nonempty, that form an orthonormal basis of the tangent space at each point of $U$. Furthermore, let $A(t)$ be an arbitrary family of orthogonal transformations of the tangent spaces of $M$, depending smoothly on $t \in \mathbb R$, with the additional property that $A(0)$ is the identity on $T M$.

Let's assume we have

$$W(A(t) e_1, A(t) e_2, A(t) e_3, A(t) e_4) = 0 \tag{1}$$

for all $t \in \mathbb R$. I now want to calculate the derivative of this equation with respect to $t$ at $t = 0$. More precisely, I want to derive the equation

$$W(B e_1, e_2, e_3, e_4) + W(e_1, B e_2, e_3, e_4) + W(e_1, e_2, B e_3, e_4) + W(e_1, e_2, e_3, B e_4) = 0 \tag{2}$$

where $B$ is a skew-symmetric matrix. Furthermore, I want to show that for each skew-symmetric $B$, there exists a family of orthogonal transformations $A(t)$ with these properties, i.e. with $A(0) = I$ and $A'(0) = B$.

I'm honestly a bit lost here on how to approach this. I'm guessing that at some point the chain rule for differentiation comes into play, and arriving at a skew-symmetric matrix in the tensor components also seems plausible: the derivative of the transformation $A(t)$ (represented as a matrix) is always $A'(t) = B(t) A(t)$ for some skew-symmetric matrix function $B(t)$, and since we're differentiating at $t = 0$, where $A(t)$ is the identity, we have $A'(0) = B(0) = B$ for some skew-symmetric matrix $B$.
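(Spelling out why I believe $B = A'(0)$ has to be skew-symmetric: differentiating the orthogonality relation $A(t)^T A(t) = I$ at $t = 0$ and using $A(0) = I$ gives

$$A'(0)^T A(0) + A(0)^T A'(0) = B^T + B = 0,$$

so $B^T = -B$.)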

(Although I'm not really sure how to show the converse: that each such skew-symmetric matrix $B$ actually is the derivative of some orthogonal family $A(t)$ with $A(0) = I$.)

But my main problem is that I don't know what I can do with the Weyl tensor, and how exactly I can differentiate it here. I've searched the web a bit for tensor derivatives, and the closest I could find was the wiki article about covariant derivatives of tensor fields, where the formula remotely resembles the sum I'm supposed to get. However, here I'm supposed to differentiate with respect to a real variable that sits inside the tensor expression, so I have no idea whether it's the covariant derivative or something else that comes into play, and if it is the covariant derivative, how I get there from differentiating a function with a real-valued parameter $t$ as input.


1 Answer


Since you're differentiating only with respect to time, we can forget about the spatial dependence and just compute pointwise. That is, fix a point $p \in M$ and evaluate everything at this point, so that $e_i$ becomes an orthonormal basis for the single vector space $T_p M$, $A(t)$ is a family of orthogonal transformations of $T_pM$ and $W : (T_p M)^4 \to \mathbb R$ is a $4$-tensor on the vector space $T_p M$. Thus all the derivatives are simply of functions valued in fixed vector spaces, so we don't need to get covariant derivatives involved.

The key property allowing this product-rule-like expansion is the multilinearity of $W$. The easy way of doing this calculation is using coordinates/components so that we can directly apply the elementary product rule: using the summation convention we can write $$W(Ae_1, Ae_2, Ae_3, Ae_4) = W_{ijkl} (Ae_1)^i (Ae_2)^j (Ae_3)^k (Ae_4)^l.$$ Since the RHS here is literally a sum of products of real-valued functions of $t$, we can apply the product rule to differentiate it. Since $W, e_i$ do not depend on $t$ this yields $$\begin{multline}\frac d{dt}\Big|_{t=0} W(A e_1, \ldots, A e_4)=W_{ijkl}(A' e_1)^i(Ae_2)^j(Ae_3)^k(Ae_4)^l + \cdots \\+ W_{ijkl}(A e_1)^i(Ae_2)^j(Ae_3)^k(A'e_4)^l\end{multline}$$ where $'$ denotes $d/dt$. (Hiding in here we used the fact that e.g. $(Ae_1)' = A'e_1$ for a fixed vector $e_1$: once again, this becomes clear if you write it in components as $(Ae_1)^i = A^i_a e_1^a$.) Since $A(0)$ is the identity, letting $B = A'(0)$ we can write this as $$W_{ijkl}(Be_1)^ie_2^je_3^ke_4^l + \ldots + W_{ijkl}e_1^i e_2^je_3^k(Be_4)^l,$$ which we recognize as summation notation for $W(Be_1,e_2,e_3,e_4) + \ldots + W(e_1,e_2,e_3,Be_4)$ as desired.
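If you want a quick numerical sanity check of the four-term formula, here is a minimal sketch in Python (assuming numpy and scipy are available); a random $4$-tensor stands in for $W$, since only multilinearity is used, and $A(t) = e^{tB}$ serves as the orthogonal family:

```python
# Sketch: compare a finite-difference derivative of t -> W(A(t)e_1, ..., A(t)e_4)
# at t = 0 with the four-term expression from equation (2).
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)
n = 4
W = rng.standard_normal((n, n, n, n))   # random 4-tensor components W_{ijkl}
e = np.eye(n)                           # e[i] is the i-th basis vector
C = rng.standard_normal((n, n))
B = C - C.T                             # skew-symmetric generator

def W_of(v1, v2, v3, v4):
    # W_{ijkl} v1^i v2^j v3^k v4^l, summation convention via einsum
    return np.einsum('ijkl,i,j,k,l->', W, v1, v2, v3, v4)

def f(t):
    A = expm(t * B)                     # orthogonal, A(0) = I, A'(0) = B
    return W_of(A @ e[0], A @ e[1], A @ e[2], A @ e[3])

h = 1e-6
numeric = (f(h) - f(-h)) / (2 * h)      # centered finite difference for f'(0)
exact = (W_of(B @ e[0], e[1], e[2], e[3]) + W_of(e[0], B @ e[1], e[2], e[3])
         + W_of(e[0], e[1], B @ e[2], e[3]) + W_of(e[0], e[1], e[2], B @ e[3]))

print(numeric, exact)                   # the two values agree up to O(h^2)
```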

You could do this calculation without using components by using the limit definition of the derivative and following the usual proof of the product rule - the multilinearity of $W$ will allow you to emulate the "splitting" trick $$(fg)(t) - (fg)(0) = f(t)(g(t) - g(0)) + g(0)(f(t) - f(0)).$$
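Concretely, for a bilinear map $T$ this splitting reads

$$T(a(t), b(t)) - T(a(0), b(0)) = T\big(a(t),\, b(t) - b(0)\big) + T\big(a(t) - a(0),\, b(0)\big);$$

dividing by $t$ and letting $t \to 0$ gives $T(a(0), b'(0)) + T(a'(0), b(0))$, and the quadrilinear case works the same way with four terms.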

As to the correspondence between orthogonal families $A$ and skew-symmetric generators $B$, the key is the matrix exponential: given a skew-symmetric matrix $B$, you should be able to prove that the family $A(t) = e^{tB}$ is orthogonal and satisfies the desired properties at $t=0$. This is an example of the Lie group-Lie algebra correspondence, here between the group of rotations $SO(n)$ and the algebra of infinitesimal rotations $\mathfrak{so}(n)$.
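If you'd like to check these claims numerically before proving them, here is another small sketch (again assuming numpy/scipy):

```python
# Sketch: A(t) = expm(t * B) is orthogonal for skew-symmetric B,
# with A(0) = I and A'(0) = B.
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(1)
C = rng.standard_normal((5, 5))
B = C - C.T                                     # B^T = -B

for t in (0.0, 0.3, -1.7):
    A = expm(t * B)
    assert np.allclose(A.T @ A, np.eye(5))      # A(t)^T A(t) = I

# finite-difference approximation of A'(0)
A_prime_0 = (expm(1e-6 * B) - expm(-1e-6 * B)) / 2e-6
assert np.allclose(A_prime_0, B, atol=1e-8)
print("expm(tB) is orthogonal with A(0) = I and A'(0) = B")
```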

  • Thank you! So if I understand this summation convention correctly, then $W_{ijkl} (Ae_1)^i (Ae_2)^j (Ae_3)^k (Ae_4)^l$ means $\sum_{1 \leq i, j, k, l \leq n} W_{ijkl} (Ae_1)^i (Ae_2)^j (Ae_3)^k (Ae_4)^l$, where each $W_{ijkl}$ is just the $ijkl$-component of the tensor written with respect to the basis/coordinates $e_i$, and we multiply that by $(Ae_1)^i (Ae_2)^j (Ae_3)^k (Ae_4)^l$? And in each factor $(A e_1)^i$, the index $i$ means that we take the $i$-th component of the respective vector? – moran Sep 01 '17 at 18:09
  • Also, is there any reason why we write all four indices at the bottom of $W_{ijkl}$? Because with some other tensors, I sometimes see some indices at the top and some at the bottom of the tensor, e.g. when we write $R_{ac}$ for the Ricci tensor (both at the bottom) or $R^d_{abc}$ for the curvature tensor (one at the top, three at the bottom). Can we basically choose as we see fit how many we write at the top and at the bottom, or are there specific rules/reasons for each tensor for where these indices need to go? I apologize if the answer to this is trivial; it has just confused me sometimes. – moran Sep 01 '17 at 18:14
  • For your first comment, yes, this is essentially right; though the components are not necessarily taken with respect to the basis $e_i$ - this calculation is valid in any basis. For the second, I was matching your convention of $W$ acting on four tangent vectors, which corresponds to having four lower indices. In Riemannian geometry we can freely raise and lower indices using the metric. – Anthony Carapetis Sep 02 '17 at 01:01