10

Given a multivector, what is the easiest way to compute its inverse? To take a concrete example, consider the bivector $ B = e_1(e_2 + e_3) $. To compute $ B^{-1} $, I can use the dual of $ B $: $$ B = e_1e_2e_3e_3 + e_1e_2e_2e_3 = I(e_3-e_2) = Ib $$ $$ BB^{-1} = 1 = Ib B^{-1} $$ $$ B^{-1} = -b^{-1}I = -\frac{b}{b^2}I$$ But this won't work for a bivector in 4 dimensions, for example. Is there a more general or easier way?

Raskolnikov
  • 16,333

7 Answers

9

Not sure where you got the idea that inverses should involve duality. Usually this is done merely through reversion. Let $B^\dagger$ denote the reverse of $B$. Then the inverse is

$$B^{-1} = \frac{B^\dagger}{B B^\dagger}$$

For a bivector, $B^\dagger = -B$. I believe this works for any object that can be written as a geometric product of vectors (i.e. anything that can be factored into vectors, which is why it works for rotors and spinors), but don't quote me on that. Of course, in mixed-signature spaces, anything that has a null factor is not invertible.
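As a numerical sanity check, here is a minimal toy implementation (my own sketch, not part of the original answer) that encodes basis blades of a Euclidean $\mathcal G_3$ as bitmasks and confirms $B^{-1} = B^\dagger/(BB^\dagger)$ for the question's bivector $B = e_1(e_2+e_3) = e_{12} + e_{13}$:

```python
def sign(a, b):
    # Reordering sign for the product of basis blades a, b (bitmasks),
    # assuming an orthonormal Euclidean basis (e_i e_i = +1).
    a >>= 1
    s = 0
    while a:
        s += bin(a & b).count("1")
        a >>= 1
    return -1 if s & 1 else 1

def gp(x, y):
    # Geometric product of multivectors stored as {blade-bitmask: coeff}.
    out = {}
    for p, cp in x.items():
        for q, cq in y.items():
            out[p ^ q] = out.get(p ^ q, 0.0) + sign(p, q) * cp * cq
    return {m: c for m, c in out.items() if abs(c) > 1e-12}

def rev(x):
    # Reversion: a grade-k part picks up the sign (-1)^(k(k-1)/2).
    return {m: c if bin(m).count("1") % 4 < 2 else -c for m, c in x.items()}

B = {0b011: 1.0, 0b101: 1.0}           # e1 e2 + e1 e3
denom = gp(B, rev(B))                  # B B^dagger: a pure scalar, {0: 2.0}
Binv = {m: c / denom[0] for m, c in rev(B).items()}
print(gp(B, Binv))                     # {0: 1.0}
```

Here $BB^\dagger = 2$, so $B^{-1} = -(e_{12}+e_{13})/2$, matching the dual-based computation in the question ($b = e_3 - e_2$, $b^2 = 2$).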

Muphrid
  • 20,661
  • 1
    what does it mean to have a null factor? – user997712 Jul 14 '13 at 21:36
  • 1
    That there is a null vector as one of the factors of a geometric product. Consider a mixed signature space with a basis vector $e_0$ such that $e_0 e_0 = -1$. A vector of the form $u = e_0 \pm e_1$ would be null, in the sense that $uu = 0$. Clearly, $u$'s inverse is no longer a multiple of itself. Indeed, it has no inverse. – Muphrid Jul 14 '13 at 21:43
  • Can a vector $ e $ exist for which $ ee = -1 $? Do you mean to say a blade which is in the basis? – user997712 Jul 14 '13 at 21:52
  • In a purely Euclidean space, no, there are no such vectors. Spaces in which these problems can arise are fundamentally different from Euclidean space (see, for example, Minkowski space and the associated GA on that space, which is called the spacetime algebra, or STA). – Muphrid Jul 14 '13 at 22:01
  • Will $B B^\dagger$ always result in a scalar? – HelloGoodbye Feb 25 '24 at 17:01
  • What does it mean to divide bivectors? How can this be computed? – Aaron Franke Sep 20 '24 at 23:19
  • In the case of 4 dimensions, $B B^\dagger$ (multiplying B by the reverse of B) results in more than just scalar components, it also results in vector and pseudoscalar components, of the form $1 + x + y + z + w + xyzw$. I don't know how it's possible to divide $B^\dagger$ by this non-scalar value. – Aaron Franke Sep 21 '24 at 02:03
7

Given multivector $a$ that you want to invert, the function $x \mapsto a x$ is a linear transformation (of the algebra viewed as a $2^n$ dimensional vector space), right?

So, an uninspired but reliable way to find $a^{-1}$ would be to express $a$ as a $2^n$ by $2^n$ matrix $A$ (that is, the matrix $A$ whose $2^n$ columns are $a$ times each of the $2^n$ basis multivectors), and solve the linear equation $A x = 1$; then the solution $x$ is the desired $a^{-1}$. If there is no solution, then $A$ doesn't have an inverse, which means $a$ doesn't have an inverse.

Note that this always gives the answer if there is one, even in cases where the $a^\dagger/(aa^\dagger)$ method fails due to $aa^\dagger$ not being a scalar (where $a^\dagger$ denotes the reverse of $a$). For example, if $a=2+e_1$, then $a^{-1}=(2-e_1)/3$.
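As an illustration, here is a hypothetical sketch of this procedure in NumPy (a toy bitmask encoding of Euclidean $\mathcal G_2$; the encoding and names are mine, not the answer's), reproducing the example $a = 2 + e_1$:

```python
import numpy as np

def sign(a, b):
    # Reordering sign for basis blades a, b (bitmasks), Euclidean signature.
    a >>= 1
    s = 0
    while a:
        s += bin(a & b).count("1")
        a >>= 1
    return -1 if s & 1 else 1

def gp(x, y):
    # Geometric product of multivectors stored as {blade-bitmask: coeff}.
    out = {}
    for p, cp in x.items():
        for q, cq in y.items():
            out[p ^ q] = out.get(p ^ q, 0.0) + sign(p, q) * cp * cq
    return {m: c for m, c in out.items() if abs(c) > 1e-12}

n = 2                                   # G_2 has 2^2 = 4 basis multivectors
dim = 1 << n
a = {0b00: 2.0, 0b01: 1.0}              # a = 2 + e1

# Columns of A are a * (each basis multivector), on the blade basis.
A = np.zeros((dim, dim))
for j in range(dim):
    for m, c in gp(a, {j: 1.0}).items():
        A[m, j] = c

rhs = np.zeros(dim)
rhs[0] = 1.0                            # the multivector 1
x = np.linalg.solve(A, rhs)             # coefficients of a^{-1}: (2 - e1)/3
```

If `np.linalg.solve` raises (singular matrix), $a$ has no inverse, exactly as the answer says.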

Don Hatch
  • 1,277
5

An algorithm to calculate the inverse of a general multivector:

Start with an invertible general multivector (X) of Clifford's geometric algebra over a space of orthogonal unit vectors. Post-multiply repeatedly by a "suitable" Clifford number until a scalar (S) is reached; let the product of the post-multipliers be (R).

Then we have (X)(R) = (S)

Pre-multiply both sides by the required inverse (I) and divide by the scalar (S), and we have:

(R)/(S) = (I), which was to be determined.

For a "suitable" general multivector or Clifford number we try the Reverse or the Clifford conjugate. I notice that (X)(Xrev), for instance, results in only grades invariant to reversion; in hindsight this is forced by the identity ((X)(Xrev))rev = (X)(Xrev). This elementary process works up to dimension 5, but fails at dimension 6. I have since seen 2 or 3 papers on the web which seem to agree with this result, but no one comments on it. Above dimension 5 it seems something more sophisticated is needed.

An example in dimension 5: (A)(Arev) = (B) gives grades B0 + B1 + B4 + B5.

B0 is the scalar and B1 is a vector; B4, B5 are the 4-vector and the pseudo-scalar.

In dimension 5 the pseudo-scalar commutes with all vectors and squares to +1; as a result we can use duality to re-arrange (B) as a paravector with coefficients in the duplex numbers (also known as hyperbolic, perplex or Study numbers), that is, as D0 + D1.

Multiply by D0 - D1 to reach a duplex number which is readily reduced to a scalar.
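The grade pattern claimed above is easy to confirm numerically: since ((A)(Arev))rev = (A)(Arev), only the reversion-invariant grades 0, 1, 4, 5 can survive in dimension 5. A toy check (my own bitmask sketch, assuming a Euclidean signature; the sample multivector is arbitrary):

```python
def sign(a, b):
    # Reordering sign for basis blades a, b (bitmasks), Euclidean signature.
    a >>= 1
    s = 0
    while a:
        s += bin(a & b).count("1")
        a >>= 1
    return -1 if s & 1 else 1

def gp(x, y):
    # Geometric product of multivectors stored as {blade-bitmask: coeff}.
    out = {}
    for p, cp in x.items():
        for q, cq in y.items():
            out[p ^ q] = out.get(p ^ q, 0.0) + sign(p, q) * cp * cq
    return {m: c for m, c in out.items() if abs(c) > 1e-12}

def rev(x):
    # Reversion: a grade-k part picks up the sign (-1)^(k(k-1)/2).
    return {m: c if bin(m).count("1") % 4 < 2 else -c for m, c in x.items()}

# A multivector of Cl(5) with one term in every grade 0..5
A = {0b00000: 1.0, 0b00001: 2.0, 0b00110: 3.0,
     0b10011: 4.0, 0b11110: 5.0, 0b11111: 0.5}
P = gp(A, rev(A))                       # (A)(Arev)
grades = {bin(m).count("1") for m in P}
print(sorted(grades))                   # a subset of [0, 1, 4, 5]
```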

For dimension 6 and above I found the following "in principle" process - but it doesn't look to me like an efficient one:

In dim 6 (and above), arrange (X) as A + Bn where (A) and (B) are in dim 5 and n is one of the unit vectors, (e6) for instance. Post-multiply by C + Dn so as to remove e6 from the result. This can be done by something looking rather like a projection operator, as discussed by Bouma. Repeat the process to step down the dimensions. I don't see why this shouldn't be extended to as high a dimension as required.

  • 1
    It doesn't quite work in dimension 4: if you multiply by reverse, it zeros out grades 2,3 so you get scalar plus vector plus antiscalar $s+v+S$; or if you multiply by clifford conjugate, it zeros out grades 1,2 so you get scalar plus antivector plus antiscalar $s+V+S$. Neither of those make any further progress by multiplying by either "suitable" multivector, i.e. its reverse or its clifford conjugate. But you can fix that by adding a third kind of "suitable" multivector, that is, the multivector obtained by negating just the scalar part. That is, $-s+v+S$ or $-s+V+S$ respectively. – Don Hatch Nov 12 '18 at 18:14
  • @DonHatch Can you elaborate further? Where are you getting those values? From $B$, or from $B^\dagger$, or from their product? Where are you inserting the values into the multiplication? – Aaron Franke Sep 21 '24 at 02:06
  • Hi @AaronFranke I'm not sure what you're asking. I'm saying that if you start with B in 4 dimensions, then the procedure given in this answer leads to B*rev(B) which is of the form scalar+vector+antiscalar s+v+S (i.e. has grades 0,1,4) or B*cliffordconj(B) which is of the form scalar+antivector+antiscalar s+V+S (i.e. has grades 0,3,4), and repeating the procedure on either of those two results doesn't make any further progress (i.e. doesn't zero out any more grades). But if you arrive at s+v+S then you can zero out more grades by multiplying by -s+v+S (which is neither reverse nor c.c). – Don Hatch Sep 21 '24 at 06:30
  • @AaronFranke did that help? I haven't played with this in a while and haven't re-confirmed my claims about which grades are zeroed out by which of these multiplications recently. But if you think you are getting a different result, I can re-check. – Don Hatch Sep 21 '24 at 06:32
  • @DonHatch If I understand correctly what you are saying is that we can take the result of B*rev(B) and almost multiply it by itself except swapping the sign of s, so it becomes B * rev(B) * swap_sign_of_s(B * rev(B)), which does indeed yield a scalar. This is what my guess was after reading your comment and I tried this, but when I divided rev(B) by that scalar, the result I got wasn't the inverse, thus me wondering if I am doing something wrong. – Aaron Franke Sep 21 '24 at 07:18
  • 1
    Ok, so you've confirmed that B * rev(B) * swap_sign_of_s(B * rev(B)) = (a scalar) . Good, so I don't have to confirm it :-) Then dividing both sides by (that scalar) on the right yields B * rev(B) * swap_sign_of_s(B * rev(B)) / (that scalar) = 1, which means rev(B) * swap_sign_of_s(B * rev(B)) / (that scalar) is the inverse of B, right? – Don Hatch Sep 21 '24 at 07:32
  • @DonHatch Ah that's it, thank you! I missed that last part that I should re-read the equation as B * thing = 1 where thing is the inverse. Everything works now :) – Aaron Franke Sep 22 '24 at 11:34
4

This question is fully answered by the paper cited below. The authors do not give a uniform, sleek formula that works for all dimensions. Instead, there are separate formulas for each of the small dimensions, and then you can use certain isomorphisms between the Clifford algebras to handle higher dimensions.

Eckhard Hitzer, Stephen Sangwine, Multivector and multivector matrix inverses in real Clifford algebras, Applied Mathematics and Computation, Volume 311, 2017, Pages 375-389, ISSN 0096-3003, https://doi.org/10.1016/j.amc.2017.05.027.

First, let us define several involutions of the $n$-dimensional geometric algebra with signature $(p,q)$, where $p+q=n$. Every multivector $M$ can be written as $$ M=M_0+M_1+\dots+M_n, $$ where each $M_k$ is a linear combination of $k$-blades. Define the hat involution (grade involution) via $$ \widehat M= \sum_{k=0}^n (-1)^k M_k $$

Then, let $M^\dagger$ denote the reversion of $M$, where each wedge of vectors comprising $M$ is written in reverse order. Equivalently, $$ M^\dagger =\sum_{k=0}^n (-1)^{k(k-1)/2}M_k $$

Next, we can save some notation by defining the Clifford conjugate of $M$, $\overline M$, to be the composition of the two previous involutions. Explicitly, $$ \overline M= \widehat{M^\dagger}= (\widehat M)^\dagger =\sum_{k=0}^n (-1)^{k(k+1)/2} M_k $$

Finally, for any integers $j,k$ with $1\le j<k\le n$, let $m_{j,k}$ denote the involution which negates the $j$-dimensional and $k$-dimensional parts of $M$ only. That is, $$ m_{j,k}(M)=M-2M_j-2M_k $$

Dimensions One and Two

For small dimensions, it turns out that $M\overline M$ is always a scalar. This implies that $$ M^{-1}=\frac{\overline M}{M\overline M} $$ In dimension $1$, reversion is trivial, which implies by definition $\overline M = \widehat M$, so this is equivalent to $M^{-1}=\widehat M/(M\widehat M)$.

Three dimensions

Here, it turns out $M\overline M\widehat M M^\dagger$ is always a scalar, so $$ M^{-1}=\frac{\overline M\widehat M M^\dagger}{M\overline M\widehat M M^\dagger} $$

Four dimensions

Here is where things start to get less elegant.

$$ M^{-1}=\frac{\overline M m_{3,4}(M\overline M)}{M\overline M m_{3,4}(M\overline M)} $$
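To illustrate, here is a small numerical check of this four-dimensional formula (my own toy bitmask implementation over a Euclidean $\mathcal G_4$, i.e. signature $(4,0)$; not code from the paper), using the sample multivector $M = 2 + e_{12} + e_{1234}$:

```python
def sign(a, b):
    # Reordering sign for basis blades a, b (bitmasks), Euclidean signature.
    a >>= 1
    s = 0
    while a:
        s += bin(a & b).count("1")
        a >>= 1
    return -1 if s & 1 else 1

def gp(x, y):
    # Geometric product of multivectors stored as {blade-bitmask: coeff}.
    out = {}
    for p, cp in x.items():
        for q, cq in y.items():
            out[p ^ q] = out.get(p ^ q, 0.0) + sign(p, q) * cp * cq
    return {m: c for m, c in out.items() if abs(c) > 1e-12}

def conj(x):
    # Clifford conjugate: grade k picks up (-1)^(k(k+1)/2).
    return {m: c if bin(m).count("1") % 4 in (0, 3) else -c
            for m, c in x.items()}

def m34(x):
    # The involution m_{3,4}: negate the grade-3 and grade-4 parts only.
    return {m: -c if bin(m).count("1") in (3, 4) else c for m, c in x.items()}

M = {0b0000: 2.0, 0b0011: 1.0, 0b1111: 1.0}   # 2 + e12 + e1234
N = gp(M, conj(M))                            # M Mbar = 6 + 4 e1234
denom = gp(N, m34(N))                         # pure scalar: {0: 20.0}
Minv = {m: c / denom[0] for m, c in gp(conj(M), m34(N)).items()}
print(gp(M, Minv))                            # approximately {0: 1.0}
```

Here the denominator is $20$ and $M^{-1} = (4 - 3e_{12} - 2e_{34} - e_{1234})/10$.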

Five dimensions

The inverse in five dimensions is calculated as follows:

$$ M^{-1}=\frac{\overline M\widehat M M^\dagger m_{1,4}(M\overline M\widehat M M^\dagger )}{M\overline M\widehat M M^\dagger m_{1,4}(M\overline M\widehat M M^\dagger )} $$

The paper mentions how to compute the inverse in arbitrarily high dimensions, using certain isomorphisms between the various Clifford algebras. You will need to refer to the paper for the details.

Don Hatch
  • 1,277
Mike Earnest
  • 84,902
  • 1
    Thanks for making this answer self-contained (up to five dimensions!). There is just one step that seems to be missing: each of the formulas depends on a statement of the form "Here, it turns out that [...] is a scalar... so the inverse is (something)/(that scalar)". That will work only if that scalar is nonzero. Are we guaranteed that that scalar is nonzero if M is invertible? – Don Hatch Sep 21 '24 at 09:02
  • Whenever $M$ is invertible, the scalar denominator is nonzero. Contrapositively, if the denominator is zero, then $M$ must be a zero divisor (as $M$ is a factor of the denominator in all cases), and it is a basic fact that zero divisors are not invertible. – Mike Earnest Sep 21 '24 at 16:27
  • 1
    Thanks. That takes some additional thinking (in general, the fact that M*something is zero doesn't necessarily imply M is a zero divisor, but it does in these cases due to the form of the something). I'm convinced. – Don Hatch Sep 21 '24 at 20:48
  • This is actually a bit maddening. I have been trying to write some code that will always return either the inverse of M, or a right zero divisor which serves as a certificate that M is a left zero divisor (and therefore non-invertible), based on your formulas above. But in the 4 dimensional case, it can happen that the denominator is zero without any evidence in sight that M is a left zero divisor. E.g. consider the case M=e1+e234. In this case the denominator M*something is indeed zero. But something is zero too, so that's not immediate evidence. – Don Hatch Sep 25 '24 at 05:10
  • 1
    In that case, the fact that M is non-invertible can be seen from the fact that M is a right zero divisor, rather than a left zero divisor... which follows from the fact that M bar is a left zero divisor (since it happens that m34(M Mbar) != 0 in this case) and bar (clifford conjugation) is an antiautomorphism. But that doesn't satisfy the simple contract of the simple (I was hoping) procedure I'm trying to write. I can probably find the desired certificate by using an alternative formula that is a mirror image, in a sense, of the one you gave, but this is getting annoyingly complicated. – Don Hatch Sep 25 '24 at 05:28
  • @DonHatch I retract my earlier comment; I have no idea how to find the zero-complement of $M$ in the case that the denominator is zero. – Mike Earnest Sep 25 '24 at 16:21
  • I googled "geometric algebra is a non-invertible element always a left zero divisor" and it gave me an AI answer "In geometric algebra, a non-invertible element is not necessarily always a left zero divisor". That may be an AI hallucination, though :-) Still looking for a reliable reference to confirm or deny this. – Don Hatch Sep 26 '24 at 06:56
  • I asked a separate question about it: https://math.stackexchange.com/questions/4977079/is-every-non-invertible-multivector-a-left-zero-divisor – Don Hatch Sep 27 '24 at 08:33
1

If your multivector is a product of vectors ($x = x_1x_2\cdots{}x_n$) then the inverse is given by

\begin{align} x^{-1} & = \left(x_1x_2\cdots{}x_n\right)^{-1}\\ & = x_n^{-1}\cdots{}x_1^{-1}. \end{align}

Since non-null vectors are always invertible ($a^{-1} = \frac{a}{\Vert{}a\Vert^2}$), you can compute your inverse from that.
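For example, in a toy Euclidean $\mathcal G_3$ (a minimal sketch of my own, assuming the factorization into vectors is known):

```python
def sign(a, b):
    # Reordering sign for basis blades a, b (bitmasks), Euclidean signature.
    a >>= 1
    s = 0
    while a:
        s += bin(a & b).count("1")
        a >>= 1
    return -1 if s & 1 else 1

def gp(x, y):
    # Geometric product of multivectors stored as {blade-bitmask: coeff}.
    out = {}
    for p, cp in x.items():
        for q, cq in y.items():
            out[p ^ q] = out.get(p ^ q, 0.0) + sign(p, q) * cp * cq
    return {m: c for m, c in out.items() if abs(c) > 1e-12}

def vinv(v):
    # Inverse of a non-null vector: a^{-1} = a / ||a||^2.
    n2 = sum(c * c for c in v.values())
    return {m: c / n2 for m, c in v.items()}

v1 = {0b001: 1.0, 0b010: 2.0}          # e1 + 2 e2
v2 = {0b010: 1.0, 0b100: 1.0}          # e2 + e3
x = gp(v1, v2)                         # a product of vectors
xinv = gp(vinv(v2), vinv(v1))          # vector inverses in reversed order
print(gp(x, xinv))                     # approximately {0: 1.0}
```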

1

The most straightforward answer is that one can just convert a multivector into matrix form and invert it.

A multivector matrix represents the multivector; addition and multiplication still work as usual. The multivector can be recovered by looking at one of the columns of the matrix.

$$a_1 + a_{e_1}e_1 + a_{e_2}e_2 + a_{e_{12}}e_{12} \rightarrow \begin{pmatrix} a_1 & a_{e_1} & a_{e_2} & -a_{e_{12}}\\ a_{e_1} & a_1 & a_{e_{12}} & -a_{e_2}\\ a_{e_2} & -a_{e_{12}} & a_1 & a_{e_1}\\ a_{e_{12}} & -a_{e_2} & a_{e_1} & a_1 \end{pmatrix}$$

You can see that the first column contains the components of the multivector, and that multiplication with another multivector matrix produces a result equivalent to regular multivector multiplication. We can multiply by $(1, 0, 0, 0)^T$ to get only the first column.

Gauss-Jordan elimination can be used on the following augmented matrix to find the inverse.

$$\begin{pmatrix} a_1 & a_{e_1} & a_{e_2} & -a_{e_{12}} & 1\\ a_{e_1} & a_1 & a_{e_{12}} & -a_{e_2} & 0\\ a_{e_2} & -a_{e_{12}} & a_1 & a_{e_1} & 0\\ a_{e_{12}} & -a_{e_2} & a_{e_1} & a_1 & 0 \end{pmatrix}$$

In this example, our inverse multivector, in column form, in the order $1, e_1, e_2, e_{12}$, is:

$$\frac{1}{-a_1^2+a_{e_1}^2+a_{e_2}^2-a_{e_{12}}^2}\begin{pmatrix} -a_1 \\ a_{e_1}\\ a_{e_2}\\ a_{e_{12}}\\ \end{pmatrix}$$
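Sketching this with NumPy's linear solver standing in for hand Gauss-Jordan elimination (a toy check of my own; the sample coefficients are arbitrary, and any invertible multivector would do):

```python
import numpy as np

# Sample multivector M = 1 + 2 e1 + 3 e2 + 4 e12 in G_2
a1, ae1, ae2, ae12 = 1.0, 2.0, 3.0, 4.0

# Matrix of left-multiplication by M on the basis (1, e1, e2, e12),
# exactly as displayed above.
A = np.array([
    [a1,    ae1,   ae2,  -ae12],
    [ae1,   a1,    ae12, -ae2 ],
    [ae2,  -ae12,  a1,    ae1 ],
    [ae12, -ae2,   ae1,   a1  ],
])

# Solve M x = 1, i.e. A x = (1, 0, 0, 0)^T
x = np.linalg.solve(A, np.array([1.0, 0.0, 0.0, 0.0]))

# Closed form quoted above
d = -a1**2 + ae1**2 + ae2**2 - ae12**2
closed = np.array([-a1, ae1, ae2, ae12]) / d
print(np.allclose(x, closed))          # True
```

If `np.linalg.solve` raises a singular-matrix error, the multivector has no inverse.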

1

With respect to the geometric product, if $B$ is a non-null versor (possibly in a mixed-signature GA), the inverse of $B$ will be:

$$B^{-1} = \frac{B^\dagger}{B B^\dagger}$$

For a null versor, the inverse with respect to the geometric product does not exist.

  • In the case of 4 dimensions, $B B^\dagger$ (multiplying B by the reverse of B) results in more than just scalar components, it also results in vector and pseudoscalar components, of the form $1 + x + y + z + w + xyzw$. I don't know how it's possible to divide $B^\dagger$ by this non-scalar value. – Aaron Franke Sep 21 '24 at 02:11