
In Pavel Grinfeld's tensor calculus course (e.g. at 18:51), it is said that in a given coordinate system on $\mathbb{R}^3$, we can write:

$$ e^i = g^{i \alpha} e_{\alpha}$$

Suppose we evaluate the above in Cartesian coordinates, where the metric tensor is $g^{i\alpha} = \delta^{i\alpha}$; this means:

$$e^i = e_i$$

But this seems paradoxical to me. The left side is something which lies in the cotangent space/dual space and the right is something which lives in the vector space. So, how can we say they are equal to each other?

  • Are the $e_\alpha$ supposed to be tangent vectors? It seems that they're referring to the 'coordinates' of a vector instead. That is, you have some tangent vector $X = e_\alpha \partial_\alpha$ and its corresponding covector $X^\flat = e^i dx_i$, and the relationship between coordinates is $e^i = g^{i\alpha} e_\alpha$. – infinitylord May 26 '22 at 17:28
  • @infinitylord yes – Clemens Bartholdy May 26 '22 at 18:47
  • I'm pretty sure they are not talking about coordinates; they call it the basis element multiple times – Clemens Bartholdy May 26 '22 at 18:48
  • No, Grinfeld points out that the tensor $\mathbf{e}_j$ consisting of the tangent vectors is itself a covariant (0,1) tensor and can be transformed to its dual (1,0) counterpart using the metric. I will give you an example in polar coordinates. – ContraKinta May 26 '22 at 18:52

2 Answers


Obviously an element of the dual can never be equal to an element of the original vector space. The easiest thing is of course to just write things out basis-free. Given a vector space $V$ and a metric tensor on it, i.e. a bilinear map $g:V\times V\to\Bbb{R}$ which is symmetric and non-degenerate, we can do the following:

  • for each $v\in V$, we assign a covector $g(v,\cdot)$. This mapping $v\mapsto g(v,\cdot)$ from $V$ into $V^*$ is denoted as $g^{\flat}:V\to V^*$ (this is an isomorphism, because that's the definition of $g$ being non-degenerate).

Now, one way of making sense of the indices is that you can take a basis $\{e_1,\dots, e_n\}$ of $V$, and the dual basis $\{\epsilon^1,\dots, \epsilon^n\}$ of $V^*$ (whose defining property is that for all $i,j\in\{1,\dots, n\}$, $\epsilon^i(e_j)=\delta^i_j$). Now, $v\in V$ can be written as a linear combination of basis vectors \begin{align} v=\sum_{i=1}^nv^ie_i, \end{align} for some unique numbers $v^1,\dots, v^n\in\Bbb{R}$ (actually it's easy to see from the definitions that $v^i=\epsilon^i(v)$, the value of the covector $\epsilon^i$ on the vector $v$). Next, $g^{\flat}(v)\in V^*$ is a covector, so we must be able to write it as a linear combination of the $\epsilon$'s, i.e. \begin{align} g^{\flat}(v)&=\sum_{i=1}^nc_i\epsilon^i,\tag{$*$} \end{align} for some unique $c_1,\dots, c_n\in\Bbb{R}$. In fact, you can convince yourself that \begin{align} c_j = [g^{\flat}(v)](e_j)=g(v,e_j)=g\left(\sum\limits_{i=1}^nv^ie_i,e_j\right)=\sum_{i=1}^nv^ig(e_i,e_j)\equiv g_{ij}v^i \end{align} (the first equality follows by evaluating both sides of $(*)$ on the vector $e_j$ and using the property of the dual basis). Or equivalently (after renaming indices), $v^i=g^{ij}c_j$.

Finally, it is tradition to not write the coefficients as $c_j$, but to write them as $v_j$ instead, which thus gives the equality $v^i=g^{ij}v_j$. So, $v^1,\dots, v^n\in\Bbb{R}$ are the coefficients when you write the vector $v$ as a linear combination of the basis $\{e_1,\dots, e_n\}$ of $V$, whereas $v_1,\dots, v_n\in\Bbb{R}$ are the coefficients when you write the covector $g^{\flat}(v)$ as a linear combination of the corresponding dual basis $\{\epsilon^1,\dots, \epsilon^n\}$ of $V^*$.
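(Not from the lecture, just a quick numerical sanity check of the above with a made-up metric on $\Bbb{R}^2$: lowering components with $g_{ij}$ and raising them with $g^{ij}$ undo each other.)

```python
import numpy as np

# Made-up symmetric, positive-definite metric on R^2 (illustration only)
g = np.array([[2.0, 1.0],
              [1.0, 3.0]])    # g_{ij} = g(e_i, e_j)
g_inv = np.linalg.inv(g)      # g^{ij}

v_up = np.array([1.0, -2.0])  # components v^i of some vector v

v_down = g @ v_up             # lowering: v_j = g_{ij} v^i  (coefficients of g-flat(v))
assert np.allclose(g_inv @ v_down, v_up)   # raising recovers v^i = g^{ij} v_j
```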


Now, another way of saying this is to use the (I believe Penrose's) abstract index notation (e.g. as explained in Wald's GR book). Here, the $(0,2)$ tensor $g$ is written as $g_{\alpha\beta}$. The subscripts $\alpha,\beta$ do not indicate the components with respect to a basis, but simply that $g$ is an object which has two slots where you can feed it elements of $V$. An element of $V$ is written as $v^{\alpha}$ (instead of $v\in V$). In this notation, the symbol $g_{\alpha\beta}\,v^{\alpha}$ stands for $g(v,\cdot)$. The repeated indices in the up-down manner don't refer to summation of components, but rather to tensor contraction, so $v^{\alpha}\in V$, $g_{\alpha\beta}\in T^0_2(V)$, $g_{\alpha\beta}\,v^{\alpha}\in V^*$, etc. But then again, this doesn't seem to be what the lecturer is referring to.


So, what is going on in that lecture is that one is literally fixing a basis $\{e_1,\dots, e_n\}$ of $V$, then considering the metric tensor components $g_{ij}=g(e_i,e_j)$ for $i,j\in\{1,\dots, n\}$ and defining a new set of basis elements of $V$ as $e^i=g^{ij}e_j$, so that $\{e^1,\dots, e^n\}$ is still a basis of $V$ in that notation. To me this is extremely confusing and unnatural. For the sake of avoiding dual spaces, everything is being squished down into the original vector space (it's like writing, drawing, painting all on a single piece of paper; very messy and cluttered).
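To see this "everything in $V$" picture concretely, here is a small example of my own (not from the lecture): take $V=\Bbb{R}^2$ with the standard dot product as $g$, and the non-orthonormal basis $e_1=(1,0)$, $e_2=(1,1)$. Then

$$g_{ij}=\begin{pmatrix}1&1\\1&2\end{pmatrix},\qquad g^{ij}=\begin{pmatrix}2&-1\\-1&1\end{pmatrix},$$

so

$$e^1=g^{1j}e_j=2e_1-e_2=(1,-1),\qquad e^2=g^{2j}e_j=-e_1+e_2=(0,1).$$

One checks $e^i\cdot e_j=\delta^i_j$, and both bases $\{e_1,e_2\}$ and $\{e^1,e^2\}$ live in the same $V$.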

In an abstract manner, what has happened here is that using a basis $\{e_1,\dots, e_n\}$ of $V$, one can obviously consider the dual basis $\{\epsilon^1,\dots, \epsilon^n\}$ as I mentioned above. With this, we can construct a linear map $\psi:V\to V^*$ defined by $\psi(e_i)=\epsilon^i$ for all $i\in\{1,\dots, n\}$, and extending linearly (this $\psi$ is an isomorphism because it sends a basis to a basis). So, we now have two isomorphisms between $V$ and $V^*$, namely $g^{\flat}:V\to V^*$ (whose inverse is denoted $g^{\sharp}:V^*\to V$) and $\psi:V\to V^*$.

What is happening in this lecture is that these are being composed to yield $g^{\sharp}\circ \psi:V\to V$. So, the value of this isomorphism on the vector $e_i$ is: \begin{align} (g^{\sharp}\circ\psi)(e_i)=g^{\sharp}(\epsilon^i)= g^{ij}e_j \equiv e^i \end{align} (the penultimate equality is because $g^{\flat}(e_j)=g(e_j,\cdot)=g_{ji}\epsilon^i$, so by applying $g^{\sharp}$ to both sides we get $e_j=g^{\sharp}(g_{ji}\epsilon^i)=g_{ji}g^{\sharp}(\epsilon^i)$, and hence, juggling the indices, $g^{\sharp}(\epsilon^i)=g^{ij}e_j$). So, $g^{\sharp}\circ \psi$ is the isomorphism $V\to V$ which, for each $i\in\{1,\dots, n\}$, sends $e_i\in V$ to $e^i=g^{ij}e_j\in V$.
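If it helps, here is a minimal coordinate-level sketch of this composition (my own, with a made-up metric). Representing vectors by their components in $\{e_i\}$ and covectors by their components in $\{\epsilon^i\}$, the map $\psi$ has matrix $I$, $g^{\flat}$ has matrix $(g_{ij})$, and $g^{\sharp}$ has matrix $(g^{ij})$:

```python
import numpy as np

g = np.array([[2.0, 1.0],
              [1.0, 3.0]])   # g_{ij} = g(e_i, e_j), made up for illustration
n = g.shape[0]

psi = np.eye(n)              # psi: V -> V*, e_i |-> eps^i; identity in these coordinates
g_sharp = np.linalg.inv(g)   # g-sharp: V* -> V, inverse of g-flat (whose matrix is g)

for i in range(n):
    e_i = np.eye(n)[i]                      # coordinate vector of e_i
    e_up = g_sharp @ (psi @ e_i)            # components of e^i = (g-sharp o psi)(e_i)
    assert np.allclose(e_up, g_sharp[i])    # i.e. e^i = g^{ij} e_j
```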

If you start out with an orthonormal basis $\{e_1,\dots, e_n\}$ of $V$, then indeed this procedure will spit out $e^i=e_i$ for all $i\in\{1,\dots, n\}$, the equality being that of elements of $V$ (assuming $g$ is positive-definite).
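This is exactly the Cartesian computation from the question: orthonormality means $g_{ij}=g(e_i,e_j)=\delta_{ij}$, hence $g^{ij}=\delta^{ij}$, and

$$e^i=g^{ij}e_j=\delta^{ij}e_j=e_i,$$

an honest equality of elements of $V$, with no covectors in sight.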

peek-a-boo
  • This seems to be the right answer but I am utterly confused how you got $\psi$ and $g^{\sharp}$... maybe I should read it again slowly – Clemens Bartholdy May 28 '22 at 08:26
  • @Aplateofmomos what do you mean by 'how I got $\psi$ and $g^{\sharp}$'? I gave you their precise definitions. – peek-a-boo May 28 '22 at 08:27
  • So what I got from it is you directly associate $e_i$ to an element in the dual basis and then push it back to the vector space through the metric tensor... – Clemens Bartholdy May 28 '22 at 08:32
  • @Aplateofmomos yes, that's right. That's what's happening in the lectures. Although there he shortcuts the procedure by not telling you about $\psi$ and $g^{\sharp}$ individually; he just tells you about $g^{\sharp}\circ \psi$. – peek-a-boo May 28 '22 at 08:33
  • Why does no book talk about the $\psi$ map..? – Clemens Bartholdy May 28 '22 at 08:34
  • @Aplateofmomos they may not give it an explicit letter like $\psi$, but I'm sure EVERY good linear algebra text talks about how given a basis for $V$ you get an associated basis for $V^*$ (called the dual basis). And likewise how if you have two vector spaces $V,W$ of same finite dimension, and if you have a basis for $V$ and a basis for $W$, how that gives you an isomorphism $V\to W$. In this case I'm applying the logic with $V$ and its basis $\{e_1,\dots, e_n\}$, and $W=V^*$ with the dual basis $\{\epsilon^1,\dots,\epsilon^n\}$. This is standard material. – peek-a-boo May 28 '22 at 08:36
  • I got curious about Grinfeld's actual setup, and according to the lecture he defines the covariant metric tensor $Z_{ij}$ as the pairwise dot products of the covariant basis vectors, $\mathbf{Z}_i \cdot \mathbf{Z}_j$. So, as I suspected, this is a definition based on the scalar (dot) product. The contravariant metric tensor $Z^{ij}$ is then given as the matrix inverse of $Z_{ij}$, i.e. $Z^{ij}Z_{ik}=\delta^j_k$. – ContraKinta May 28 '22 at 09:42
  • This setup is pretty "standard" but only works in a Euclidean space. It can easily be extended to a more general setup if you identify your vector space with the tangent space $T_pM$ at some fixed point $p$ of an underlying manifold $M$ and make some small adjustments. But there is no need for a dual space in the actual setup he is referring to. – ContraKinta May 28 '22 at 09:42
  • @ContraKinta sure, he doesn't reference the dual space (as you, me and Jackozee Hakkiuz have said), though the natural basis-free way is almost begging us to involve the dual space via $g^{\flat}:V\to V^*$ (the Riesz/musical isomorphisms), which is why I find it odd that the lecturer avoids them so much. – peek-a-boo May 28 '22 at 09:51
  • I have a confusion here though: is the dual of $e_i$ necessarily $\epsilon^i$? o_o – Clemens Bartholdy May 30 '22 at 22:11
  • Can you point me to a book which talks about this $\psi$ map in detail? I can't seem to find this topic discussed directly. Maybe it is due to names. I get that one of the maps is through Riesz's representation theorem; I guess that was lifting the vector to a covector. – Clemens Bartholdy May 30 '22 at 22:13
  • not a book, but I guess this: https://math.stackexchange.com/questions/105490/isomorphisms-between-a-finite-dimensional-vector-space-and-its-dual – Clemens Bartholdy May 30 '22 at 22:39
  • @Aplateofmomos I learnt all my linear algebra from Friedberg, Insel, Spence (4th edition). And yes, I defined $\{\epsilon^1,\dots, \epsilon^n\}$ to be the dual basis of $\{e_1,\dots, e_n\}$ (if you have a basis for $V$, there always exists a unique basis for $V^*$ which has the 'duality property', i.e. evaluation of covectors on vectors is $\delta^i_j$). Finally, like I said above, if you have a basis for one vector space, and a basis for another vector space (both bases of the same finite size), you can construct an isomorphism. – peek-a-boo May 30 '22 at 23:15
  • "one is literally fixing a basis $\{e_1,\dots, e_n\}$ of $V$, then considering the metric tensor components $g_{ij}=g(e_i,e_j)$ for $i,j\in\{1,\dots, n\}$ and defining a new set of basis elements of $V$ as $e^i=g^{ij}e_j$, so that $\{e^1,\dots, e^n\}$ is still a basis of $V$ in that notation." The fact that in Grinfeld $\bf Z^i$ is mutually orthogonal to $\bf Z_j$ if $i\neq j$ doesn't make them different spaces - both sets are in $T_P\mathcal{M}$. Is that the issue in simple terms? – Antoni Parellada Jun 01 '22 at 15:51
  • @JAP yes (and orthogonality is a term reserved for vectors in the same space) – peek-a-boo Jun 01 '22 at 18:23

It is perfectly valid to use the "usual" dot product as a pairing between two instances of $\mathbb{R}^2$ and have one of them act as a "dual space", since the two are isomorphic. This is the usual setup for tensor algebra in Euclidean geometry. Using polar coordinates

$$\bar{x}=r\cos(\theta)$$ $$\bar{y}=r\sin(\theta)$$

The basis vectors $e_j=\frac{\partial \mathbf{r}}{\partial x^j}$ are $e_r=\hat{r},\, e_\theta=r\hat{\theta}$.
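Spelling out the differentiation, with $\mathbf{r}=(\bar{x},\bar{y})=(r\cos(\theta),\,r\sin(\theta))$:

$$e_r=\frac{\partial \mathbf{r}}{\partial r}=(\cos(\theta),\sin(\theta))=\hat{r},\qquad e_\theta=\frac{\partial \mathbf{r}}{\partial \theta}=(-r\sin(\theta),r\cos(\theta))=r\hat{\theta}.$$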

Using the pairing $e_j\cdot e^k=\delta_j^k$, we also have the "dual basis" $e^r=\hat{r},\,e^\theta=\frac1r \hat{\theta}$.

The metric is $$g_{jk}=\left( \begin{array}{cc} 1 & 0 \\ 0 & r^2 \end{array} \right)\quad \left(=\mathbf{e}_j\cdot \mathbf{e}_k\right)$$ with the inverse

$$g^{hl}=\left( \begin{array}{cc} 1 & 0 \\ 0 & \frac{1}{r^2} \end{array} \right)\quad \left(=\mathbf{e}^h\cdot \mathbf{e}^l\right)$$

Now, if we put the basis itself in a vector $$\mathbf{e}_j=\left( \begin{array}{cc} \hat{r} \\ r\hat{\theta} \end{array} \right)$$ and "raise" the index, we will end up with the "dual basis" $$g^{hj}\mathbf{e}_j=\left( \begin{array}{cc} 1 & 0 \\ 0 & \frac{1}{r^2} \end{array} \right)\left( \begin{array}{cc} \hat{r} \\ r\hat{\theta} \end{array} \right)=\left( \begin{array}{cc} \hat{r} \\ \frac1r\hat{\theta} \end{array} \right)=\mathbf{e}^h$$
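If you want to experiment with other coordinate systems, here is a small sympy sketch of my own (not part of the original computation) that reproduces all of the above symbolically:

```python
import sympy as sp

r, theta = sp.symbols('r theta', positive=True)

# Position vector in Cartesian components, parametrized by polar coordinates
R = sp.Matrix([r * sp.cos(theta), r * sp.sin(theta)])

# Covariant basis vectors e_j = dR/dx^j, stored as columns (e_r, e_theta)
E = R.jacobian([r, theta])

g = sp.simplify(E.T * E)   # metric g_{jk} = e_j . e_k, here diag(1, r^2)
g_inv = g.inv()            # contravariant metric g^{hl}, here diag(1, 1/r^2)

# "Raise" the index: columns of E_dual are e^h = g^{hj} e_j
E_dual = sp.simplify(E * g_inv)

# Pairing check: e_j . e^k = delta_j^k
assert sp.simplify(E.T * E_dual) == sp.eye(2)
```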

Notice that the object $\mathbf{e}_k$ is a (0,1)-tensor consisting of vectors. Notice also that

$$\mathbf{e}_r=\hat{r}$$ $$\mathbf{e}^r=\hat{r}$$

Are they the "same"? It depends on your viewpoint, context and setup. But there is a one-to-one correspondence between the collection of basis vectors $\mathbf{e}_j$ and the collection of basis vectors $\mathbf{e}^k$ through the metric.

If you want a more in-depth understanding of how this relates to manifolds and dual spaces/linear forms, I recommend my own post: So the basis elements for the (contravariant) tangent space itself transform like covariant vectors!

ContraKinta
  • Thanks bro/sis. Lemme go through your answer carefully. – Clemens Bartholdy May 26 '22 at 21:27
  • I get what you are saying, but I sort of knew this part. What I am asking is: these are two different things which live in different sets, so how can we set them equal? What does this exact equality mean? I do not believe it is the normal equality that we usually use. – Clemens Bartholdy May 26 '22 at 21:42
  • The idea of writing it in components of the normal basis is all clear and simple to me.. this one conceptual issue is the only thing unclear. – Clemens Bartholdy May 26 '22 at 21:43
  • To be fair to professor Grinfeld, I don't think he actually uses a setup with tangent spaces and duals. Instead all vectors are members of the same Euclidean space at a specific point. In a more general setting the metric tensor is used to establish a one-to-one correspondence between two different vector spaces. Anyway, $e^i=e_jg^{ij}$ is a valid expression in tensor notation. $e^i=e_i$ is invalid (unbalanced indices) and should never be encountered. – ContraKinta May 26 '22 at 22:34
  • Maybe the source of your confusion is the expression $\delta^{ij}e_j=e^i$? It should not be confused with the index renaming $\delta_i^je_j=e_i$! :) – ContraKinta May 26 '22 at 22:46
  • What do you mean it should not be confused with index renaming? I don't think I understand :( – Clemens Bartholdy May 27 '22 at 22:17
  • I agree with ContraKinta. Pavel doesn't use the cotangent space, he uses Riesz representatives (without mention) to do everything in the tangent space only. – Jackozee Hakkiuz May 28 '22 at 06:45