18

It is sometimes possible to multiply matrices of countably-infinite dimension. (Matrix multiplication is defined in the usual way, with rows and columns multiplied termwise and summed.) However, it turns out the associative property fails in general for infinite matrices, due to conditional convergence of infinite series.
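A small computational sketch of such a failure (the matrices $U$, $V$, $A$ here are my own illustrative choices, not taken from any particular reference): $U$ has a two-sided inverse $V$ and a second right inverse $A$, so $V(UA)=V\neq A=(VU)A$, even though every entry of every product involved is a finite sum.

```python
# Sketch of associativity failing for infinite matrices (illustrative
# choice of matrices).  Entries are functions of indices i, j >= 1.
# U is bidiagonal, V is a two-sided inverse of U, and A is a second
# right inverse of U.

def U(i, j):                  # 1 on the diagonal, -1 just above it
    return (i == j) - (j == i + 1)

def V(i, j):                  # upper-triangular matrix of ones
    return int(j >= i)

def A(i, j):                  # -1 strictly below the diagonal
    return -int(i > j)

def entry(X, Y, i, j, K):
    # (XY)_{ij}, truncated at K; for these matrices only finitely many
    # terms are nonzero, so the value is exact when i, j are small
    return sum(X(i, k) * Y(k, j) for k in range(1, K + 1))

UA = lambda i, j: entry(U, A, i, j, 200)   # equals the identity entrywise
VU = lambda i, j: entry(V, U, i, j, 200)   # also the identity entrywise

# V(UA) = V while (VU)A = A, so the bracketing matters:
print(entry(V, UA, 1, 2, 100), entry(VU, A, 1, 2, 100))  # prints: 1 0
```

(Note that $VA$ itself is not defined: its entries are divergent series, which is exactly the kind of obstruction discussed in the comments.)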

Meanwhile, the octonions $\mathbb{O}$ are a unital nonassociative $8$-dimensional algebra that cannot be represented by $n\times n$ matrices (else they would associate). So it seems natural to ask: is it possible to represent $\mathbb{O}$ by infinite matrices?

I suppose one plan would be to take a finite-dimensional representation of the quaternions $\mathbb{H}$, "copy and paste" it into infinite matrices, and then find an infinite matrix for $\ell\in\mathbb{O}$ that squares to $-I$ and satisfies the rules of the Cayley-Dickson construction, but I don't see a way to do this.
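For reference, in one common sign convention (conventions differ across sources) the Cayley-Dickson doubling step multiplies pairs $(a,b)$ with $a,b\in\mathbb{H}$ by $$ (a,b)(c,d)=(ac-\bar{d}b,\ da+b\bar{c}), $$ so the new unit is $\ell=(0,1)$, and indeed $\ell^2=(0,1)(0,1)=(-1,0)=-1$.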

(I suppose one could also generalize this question to arbitrary nonassociative algebras.)

anon
  • Do you mean they would be associative instead of commutative? – Klaus Aug 11 '20 at 10:07
  • Yes. – anon Aug 11 '20 at 10:09
  • I didn't read the whole paper you linked, but it looked like associativity failed in that case because you might not be able to compute some of the products, whereas in octonions I believe the products exist, they just aren't equal. I think that throws some doubt on your proposal. – JonathanZ Aug 11 '20 at 17:07
  • @JonathanZsupportsMonicaC - In the example on the first page, all products are defined, but they're still not equal. $V$ is a left and right inverse of $U$, and $A$ is a right inverse of $U$; we have $V(UA)=VI=V\neq A=IA=(VU)A$. – mr_e_man Aug 11 '20 at 21:08
  • @runway44 - The lack of associativity is not due to conditional convergence; indeed all series involved are finite (or they're "infinite" with only finitely many non-zero terms). – mr_e_man Aug 11 '20 at 21:12
  • There seems to be a mistake in Example 10. $CB$ is not column finite; its second column is all $1$'s, and so $A(CB)$ is not defined. – mr_e_man Aug 11 '20 at 22:03
  • @JonathanZsupportsMonicaC - Actually you may be right. The products involved in the associativity equation are defined, but $VA$ is not defined. – mr_e_man Aug 11 '20 at 22:14
  • @mr_e_man: Ehn, I think the products you quoted in your comment are a valid example of non-associativity. and $VA$ not being defined doesn't contradict that. But the whole problem of not all products being defined still looks like a barrier to the whole octonion embedding project. But maybe someone can find a subalgebra where all products are defined but you still encounter non-associativity. I'd love to see runway44 or someone else give it a try. – JonathanZ Aug 11 '20 at 22:47
  • @runway44 - Now I see what you meant by "conditional convergence": we're evaluating the infinite sum $\sum_{j,k}a_{ij}b_{jk}c_{kl}$ in two different ways ($j$ first or $k$ first). – mr_e_man Aug 11 '20 at 23:54
  • They can, but for a very stupid reason: with infinite matrices you can design any finite multiplication table, a meaningful one like that of octonions, or a totally meaningless one. Just use induction, assuming that the first $n-1$ rows and columns have already been constructed so that the $(n-1)\times(n-1)$ blocks are fine, and find the $n$-th row and column in each matrix so that the $n\times n$ blocks are fine. If you fail to do it yourself in a few days, I'll post the details, but it is rather simple, really ;-) – fedja Aug 21 '20 at 23:07

1 Answer


OK, since nobody else wants to post it, I'll do it as promised.

We want to design several matrices $E_j$ ($j=1,\dots,n$); in the case of the octonions there are $7$ or $8$ of them, depending on whether or not you insist that $e_0$ be represented by the identity matrix. The multiplication table should be of the type $E_iE_j=\varepsilon_{ij}E_{k(i,j)}$, where $\varepsilon_{ij}$ is some real number ($\pm 1$ in the octonion multiplication table) and $k(i,j)$ is some index depending on $i,j$.

We shall start by choosing pairwise distinct real numbers $r_{j,k}, c_{j,k}\in(0,1/2)$, $j=1,\dots,n$, $k=1,2,\dots$, and consider the matrices $A_j=(r_{j,k}^\ell c_{j,\ell}^k)_{k,\ell}$ whose $k$-th row and $\ell$-th column are geometric progressions with ratios $r_{j,k}$ and $c_{j,\ell}$ respectively. Of course, they don't give us what we want, but we shall make only finitely many corrections in each row and column to satisfy the equations. Note that every row-times-column product will then even converge absolutely, though, of course, we will still not be able to interchange the double summation in the triple product.
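Indeed, before any corrections all entries are dominated by $2^{-(k+\ell)}$, so for a row of $A_i$ against a column of $A_j$ we get $$ \sum_{\ell\ge 1}\bigl|(A_i)_{k\ell}(A_j)_{\ell q}\bigr|\le\sum_{\ell\ge 1}2^{-(k+\ell)}\cdot 2^{-(\ell+q)}=2^{-(k+q)}\sum_{\ell\ge 1}4^{-\ell}<\infty\,, $$ and finitely many corrections per row and column do not affect absolute convergence.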

We suppose that for some $N$ (initially $1$) the first $N-1$ rows/columns in each matrix are already chosen so that the desired multiplication table equations are satisfied for the $(N-1)\times(N-1)$ blocks, i.e., for the rows $R_{i,p}$ (that notation stands for the $p$-th row/column of the $i$-th matrix) and columns $C_{j,q}$, we have $R_{i,p}\cdot C_{j,q}=\varepsilon_{i,j}(E_{k(i,j)})_{p,q}$ for all $p,q\le N-1$. We now need to modify the $N$-th row $R_i$ and column $C_i$ (I'll skip the index $N$) in each matrix so that they satisfy the system $$ R_i\cdot C_{j,p}=\alpha_{i,j,p}: i,j=1,\dots,n,\ p\le N-1\,; \\ R_{i,p}\cdot C_{j}=\beta_{i,j,p}: i,j=1,\dots,n,\ p\le N-1\,; \\ R_i\cdot C_j=\gamma_{i,j}: i,j=1,\dots,n $$ where $\alpha_{i,j,p},\dots,\gamma_{i,j}$ are some prescribed real numbers. We shall make all modifications only beyond the $N$-th position, so the $N\times N$ block of each matrix is treated as known here.

To do this, choose disjoint finite sets of positions $E$ and $E_{i,j}$, of cardinalities $|E|=n(N-1)$ and $|E_{i,j}|>2n(N-1)$, that lie so far out that the initial geometric progressions in the rows $R_{i,p}$ and columns $C_{i,p}$, $i=1,\dots,n$, $p=1,\dots,N-1$, have not been disturbed there during the previous steps. Now set all elements of $R_i$ and $C_i$ at the positions from $E\cup \bigcup_{i,j}E_{i,j}$ to $0$ and look at the equations. Most likely, all of them will be wrong. However, we can now correct the first set (the one with $\alpha_{i,j,p}$) by modifying each $R_i$ on $E$ appropriately (the corresponding linear systems have Vandermonde matrices, so they are all non-degenerate). Similarly, we can correct the second set (the one with $\beta_{i,j,p}$) by modifying each $C_j$ in the positions from $E$.
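A sketch of this correction step in a finite model (all numbers here, ratios, positions, and targets, are invented for the demo): we adjust a row at the positions in $E$ so that its dot products against a few fixed geometric columns take prescribed values; the system matrix is a generalized Vandermonde matrix, hence invertible.

```python
import numpy as np

# Illustrative finite model of the Vandermonde correction step; all
# numbers (ratios, positions, targets) are invented for the demo.

rng = np.random.default_rng(0)
K = 30                                       # truncation length of the demo
ratios = [0.45, 0.35, 0.25]                  # distinct c_{j,p} in (0, 1/2)
cols = [np.array([c ** k for k in range(1, K + 1)]) for c in ratios]

row = rng.normal(size=K)                     # the row R_i to be corrected
E = [3, 5, 7]                                # correction positions (0-based;
                                             # far away in the real construction)
targets = np.array([0.5, -1.0, 2.0])         # prescribed values alpha_{i,j,p}

# We need sum_t d[t] * cols[m][E[t]] = targets[m] - row . cols[m]; the
# matrix (c_m^{e_t}) is a generalized Vandermonde matrix, hence invertible.
M = np.array([[col[e] for e in E] for col in cols])
d = np.linalg.solve(M, targets - np.array([row @ col for col in cols]))
row[E] += d

print(np.allclose([row @ col for col in cols], targets))  # True
```

In the actual construction the positions are pushed far out precisely so that these corrections never collide with entries fixed at earlier stages.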

Now we need to correct the last set of equations without spoiling the first two. To this end, we will change the entries of $R_i$ and $C_j$ on each set $E_{i,j}$. For each such set we find a non-zero vector $v_{i,j}$ orthogonal to all vectors determined by the positions from $E_{i,j}$ in the first $N-1$ rows and columns of all matrices (which is possible because $|E_{i,j}|>2n(N-1)$) and place this vector, with some appropriate coefficients, in the positions from $E_{i,j}$ into $R_i$ and $C_j$. This corrects the equation for $R_i\cdot C_j$ without affecting any other equation. After doing this for all $i,j$, we end up with all equations satisfied, i.e., with matrices whose $N\times N$ blocks are good.
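The orthogonality trick can be sketched numerically as follows (dimensions invented for illustration): a vector $v$ orthogonal to the restrictions of all previously fixed rows/columns to $E_{i,j}$ exists because there are fewer constraints than positions; placing $s\,v$ into $R_i$ and $t\,v$ into $C_j$ there changes $R_i\cdot C_j$ by $st\,|v|^2$ and leaves every other equation untouched.

```python
import numpy as np

# Illustrative version of the orthogonality step (dimensions invented).

rng = np.random.default_rng(1)
m, size = 4, 10                    # 4 constraint vectors, |E_{ij}| = 10 > 4
constraints = rng.normal(size=(m, size))  # old rows/columns restricted to E_{ij}

# Any vector in the null space works; take the last right-singular vector.
_, _, Vt = np.linalg.svd(constraints)
v = Vt[-1]                         # unit vector orthogonal to all constraints

gamma_deficit = 2.5                # how much R_i . C_j still has to change
s, t = 1.0, gamma_deficit / (v @ v)

print(np.allclose(constraints @ v, 0))                # True
print(abs((s * v) @ (t * v) - gamma_deficit) < 1e-9)  # True
```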

fedja