The question that I will eventually ask has its origin in my starting to try to understand the ‘Lie Derivative’ (LD) in a ‘General Relativistic Space-Time’, still a work in progress!
After reading material, I went back to trying to understand the idea of the ‘Directional Derivative’ (DD) of a scalar function, $\phi$, in flat three-dimensional Euclidean space.
In flat space, my intuition tells me that we should be able to generalise from the DD, $\frac{ d\phi } {d| \underline{s} | }$, where $\underline{s}$ is associated with a displacement in some particular direction in space, to a DD of a vector $\underline{v}$ in some given direction, $\frac{ d \underline{v}} {d| \underline{s} | }$. See Note 1.
A vector function, $\underline{v}$, is just, in 3D, an ordered 3-tuple of scalar functions.
I can’t see anything, for a three-dimensional Euclidean space, that would stop this from being OK. See Note 2.
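As a sketch of what I have in mind (the unit-vector notation $\hat{s} = \underline{s}/|\underline{s}|$ is mine), in Cartesian coordinates the DD of the scalar is \begin{equation*} \frac{ d\phi }{ d|\underline{s}| } = \hat{s}^i \frac{ \partial \phi }{ \partial x^i }, \end{equation*} and the proposed DD of the vector would just be this applied to each component separately, \begin{equation*} \left( \frac{ d\underline{v} }{ d|\underline{s}| } \right)^{j} = \hat{s}^i \frac{ \partial v^j }{ \partial x^i }. \end{equation*}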
Now, if we have instead a Riemannian space to deal with, my reading of materials tells me that, in general, the tangent spaces at two different points, $P$ and $Q$, are different vector spaces, even in a three-dimensional manifold, where I think the tangent spaces are also three-dimensional.
There is a basic problem with adding vectors at different points of our Riemannian space, and hence with taking limits of expressions that would involve such additions. So we cannot form, in a straightforward way, a DD of a vector in such spaces.
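To spell out the obstruction as I understand it (the notation $Q_\epsilon$ is mine): the naive definition would need a limit of the form \begin{equation*} \frac{ d\underline{v} }{ d|\underline{s}| } \stackrel{?}{=} \lim_{\epsilon \to 0} \frac{ \underline{v}(Q_\epsilon) - \underline{v}(P) }{ \epsilon }, \end{equation*} where $Q_\epsilon$ is a point a distance $\epsilon$ from $P$ along the chosen direction, and the subtraction $\underline{v}(Q_\epsilon) - \underline{v}(P)$ is exactly the thing that is not defined, because the two vectors live in the different tangent spaces at $Q_\epsilon$ and $P$.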
Can this be thought of as being because the angle between vectors in the tangent spaces at points $P$ and $Q$ of a Riemannian space cannot be defined?
My actual question is: Does the angle between vectors, defined at different points in a Riemannian space, exist?
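To be clear about what I mean by an angle (standard, as far as I know): for two vectors $\underline{u}$, $\underline{v}$ in the same tangent space at $P$, the metric gives \begin{equation*} \cos\theta = \frac{ g_{ij}(P)\, u^i v^j }{ \sqrt{ g_{ij}(P)\, u^i u^j }\, \sqrt{ g_{ij}(P)\, v^i v^j } }. \end{equation*} My question is whether anything analogous can be written down when $\underline{u}$ is defined at $P$ and $\underline{v}$ at a different point $Q$.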
Other Information
Note 1:
I can’t remember how I first found the idea that the ‘Lie Derivative’ of a vector field in the direction of some vector field is a generalisation of the ‘Directional Derivative’ of a scalar function, but see below, where it says
the lie derivative reduces to the standard directional derivative
https://en.wikipedia.org/wiki/Directional_derivative#The_Lie_derivative
Also, see,
https://en.wikipedia.org/wiki/Lie_derivative#The_Lie_derivative_of_a_vector_field
which mentions directional derivatives with respect to vector fields, where it says
denote the operations of taking the directional derivatives with respect to X and Y, respectively.
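For reference, the component formulas I have in mind (standard, though the summary here is my own): for a scalar function $\phi$ the Lie derivative along $X$ is just the directional derivative, \begin{equation*} \mathcal{L}_X \phi = X(\phi) = X^i \frac{ \partial \phi }{ \partial x^i }, \end{equation*} while for a vector field $Y$ there is an extra term, \begin{equation*} ( \mathcal{L}_X Y )^i = X^j \frac{ \partial Y^i }{ \partial x^j } - Y^j \frac{ \partial X^i }{ \partial x^j } = [X,Y]^i . \end{equation*}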
Dated 04th Sept 2022.
To get some idea of what the Lie derivative is, perhaps a look at
Why do we need a Lie derivative of a vector field?
is in order.
Note in particular
Then the Lie derivative is exactly the rate of change of that whole process; so it can be summed up as the rate of change of deformation of a tiny vector which is under the influence of the flow.
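A standard way of writing that ‘rate of change under the flow’ picture (the notation here is mine, not from the quoted answer; $\varphi_t$ is the flow of $X$ and $(\varphi_{-t})_*$ its push-forward) is \begin{equation*} ( \mathcal{L}_X Y )_p = \lim_{t \to 0} \frac{ (\varphi_{-t})_*\, Y_{\varphi_t(p)} - Y_p }{ t }, \end{equation*} i.e. the vector $Y$ at the downstream point $\varphi_t(p)$ is first dragged back to $p$ by the flow, and only then compared with $Y_p$, so the subtraction takes place inside the single tangent space at $p$.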
Note 2:
I no longer think the situation is as described in my question! A more detailed look at what goes on in Euclidean space is required, dated 18th Aug 2022.
I think it depends upon your definition of what a vector is.
If you view a vector just as a ‘Directed Line Segment’ in space, then I think what I say about Euclidean space is correct (I should analyse this).
However, if you view a vector as defined by how its components are related in different coordinate systems, then you cannot add vectors defined at two different points, not even in Euclidean space, because their transformation properties will, in general, be different at different points, and they will belong in different vector spaces.
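A concrete illustration of this second viewpoint (my own example, using plane polar coordinates $\tilde{x}^1 = r$, $\tilde{x}^2 = \theta$ alongside Cartesian coordinates $x^1, x^2$ on the Euclidean plane): the transformation matrix $\frac{ \partial \tilde{x}^i }{ \partial x^j }$ depends on the point, for instance \begin{equation*} \frac{ \partial r }{ \partial x^1 } = \frac{ x^1 }{ \sqrt{ (x^1)^2 + (x^2)^2 } }, \qquad \frac{ \partial \theta }{ \partial x^1 } = \frac{ -x^2 }{ (x^1)^2 + (x^2)^2 }, \end{equation*} so the matrix evaluated at a point $P$ is in general different from the matrix evaluated at another point $Q$, even though the space is flat.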
In More Detail
Suppose a vector $\underline{A}$, of, say, 'Type I', is defined via the relation \begin{equation*} \tilde{A}^i= \frac { \partial \tilde{x}^i }{ \partial x^j } A^j . \end{equation*}
To obtain a definite set of components in the {$\tilde{x}^i$} coordinate system, we need a definite set of components, the $A^j$, in the {$x^j$} coordinate system, and a definite set of numbers, the $\frac { \partial \tilde{x}^i }{ \partial x^j }$.
To be precise, if our vector is to be considered a vector at some point $P= (x^1_P,x^2_P,x^3_P)$, we could write \begin{equation*} \tilde{A}^i= \left. \frac { \partial \tilde{x}^i }{ \partial x^j }\right|_P A^j \tag{1} \end{equation*}
Each $\tilde{A}^i$ is a certain 'Linear Combination' of the {$A^j$}.
What we are saying is that, for any possible vector of Type I, its components {$\tilde{A}^i$} are related to its components {$A^j$} by a relation of the form (1).
Some other vector, say $\underline{B}$, of Type I, has components satisfying \begin{equation*} \tilde{B}^i= \left. \frac { \partial \tilde{x}^i }{ \partial x^j }\right|_P B^j \tag{2} \end{equation*}
The idea of adding $\underline{A}$ and $\underline{B}$ is well defined; we have \begin{align*} \tilde{A}^i+\tilde{B}^i&= \left. \frac { \partial \tilde{x}^i }{ \partial x^j }\right|_P A^j +\left. \frac { \partial \tilde{x}^i }{ \partial x^j }\right|_P B^j \\ &=\left. \frac { \partial \tilde{x}^i }{ \partial x^j }\right|_P (A^j+B^j) \tag{3} \end{align*}
So the vector $\underline{S}=\underline{A}+\underline{B}$ exists, and its components satisfy an equation of the form (1), \begin{equation*} \tilde{S}^i= \left. \frac { \partial \tilde{x}^i }{ \partial x^j }\right|_P S^j \tag{4} \end{equation*} Consider now a point $Q=(x^1_Q,x^2_Q,x^3_Q)$.
Some vector at $Q$, say $\underline{C}$, has components satisfying \begin{equation*} \tilde{C}^i= \left. \frac { \partial \tilde{x}^i }{ \partial x^j }\right|_Q C^j \tag{5} \end{equation*} Note this is NOT of the form (1); hence $\underline{C}$ is not a Type I vector. It belongs in some other vector space.
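To make the failure of addition explicit (this step is my own spelling out of the argument): if we tried to add the components of $\underline{A}$ and $\underline{C}$ we would get \begin{equation*} \tilde{A}^i + \tilde{C}^i = \left. \frac { \partial \tilde{x}^i }{ \partial x^j }\right|_P A^j + \left. \frac { \partial \tilde{x}^i }{ \partial x^j }\right|_Q C^j , \end{equation*} and since the two transformation matrices are in general different, the right-hand side cannot be written as a single matrix acting on $(A^j + C^j)$. The sums of components do not transform by a relation of the form (1), nor of the form (5), so they do not define a vector of either type.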
Note 3:
Please note the following, quoted from one of Deane’s comments:
The discussion under In More Detail is completely correct, but note that the Riemannian metric is not used at all. It therefore is a correct proof that there is no natural way to define an isomorphism between the tangent spaces at two different points using only the manifold structure.