
This is a subject I have been working on for a long time now, but I still have not managed to fully understand the interesting properties of this matrix. I have already asked a (viewed but unanswered) question about the same matrices (cf. here), but the present question is different.

First, let's define two matrices:

  • $\mathbf{N}$ is the following matrix: \begin{equation} \mathbf{N}=\begin{bmatrix} \mathbf{I}_n & \mathbf{0}_n \\ \mathbf{0}_n & \mathbf{P}^{-1}\begin{bmatrix}1 & && \\ & \ddots && \\ & & 1& \\ &&& -1 \end{bmatrix}\mathbf{P} \end{bmatrix}\in\mathbb{R}^{2n\times2n} \end{equation} where $\mathbf{P}\in\mathbb{R}^{n\times n}$ is any invertible matrix.

  • with $\omega_i>0$ and $t>0$, the block matrix (whose four $n\times n$ blocks are diagonal): \begin{equation} \mathbf{S}(t)=\begin{bmatrix} \begin{bmatrix} \cos(\omega_1t) & \\& \ddots & \\ & & \cos(\omega_n t) \end{bmatrix} & \begin{bmatrix} \dfrac{\sin(\omega_1t)}{\omega_1} & \\& \ddots & \\ & & \dfrac{\sin(\omega_nt)}{\omega_n} \end{bmatrix} \\ \begin{bmatrix} -\omega_1 \sin(\omega_1t) & \\& \ddots & \\ & & -\omega_n\sin(\omega_n t) \end{bmatrix} & \begin{bmatrix} \cos(\omega_1t) & \\& \ddots & \\ & & \cos(\omega_n t) \end{bmatrix}\end{bmatrix}\in\mathbb{R}^{2n\times2n} \end{equation}

The eigenvalues of $\mathbf N$ are of course $1$ (multiplicity $2n-1$) and $-1$ (multiplicity $1$). The eigenvalues of $\mathbf{S}(t)$, which is a matrix exponential, are the $n$ complex-conjugate pairs $(\exp(i\omega_jt),\exp(-i\omega_jt))$.

Now, for all $t>0$, define $\mathbf{A}(t)=\mathbf N\mathbf S(t)$. Since the determinant is multiplicative, the product of the eigenvalues of $\mathbf{A}(t)$ is the product of those of $\mathbf{N}$ and $\mathbf S(t)$, i.e. $\det\mathbf{A}(t)=(-1)\cdot 1=-1$.
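As a numerical sanity check, the sketch below builds $\mathbf N$ and $\mathbf S(t)$ for a small $n$ with a random invertible $\mathbf P$ and random $\omega_i$ (the helper names `make_N` and `make_S` and all the sample values are mine, not from the question) and confirms $\det\mathbf{A}(t)=-1$:

```python
import numpy as np

rng = np.random.default_rng(0)

def make_N(P):
    """N = I_n (+) P^{-1} diag(1, ..., 1, -1) P, as a 2n x 2n block matrix."""
    n = P.shape[0]
    J = np.diag([1.0] * (n - 1) + [-1.0])
    Z = np.zeros((n, n))
    return np.block([[np.eye(n), Z], [Z, np.linalg.solve(P, J @ P)]])

def make_S(t, w):
    """S(t) with diagonal blocks cos, sin/w, -w*sin, cos."""
    c, s = np.diag(np.cos(w * t)), np.diag(np.sin(w * t))
    return np.block([[c, s @ np.diag(1 / w)], [-np.diag(w) @ s, c]])

n = 3
P = rng.standard_normal((n, n))   # random P is invertible almost surely
w = rng.uniform(0.5, 2.0, n)
A = make_N(P) @ make_S(1.3, w)
print(np.linalg.det(A))           # close to -1
```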

I observe an interesting property but cannot prove it or see where it stems from:

  • $1$ and $-1$ are eigenvalues of $\mathbf{A}(t)$ ($\forall t$);
  • $1$ and $-1$ are $\color{red}{\text{not}}$ eigenvalues of $\mathbf{A}(t_2)\mathbf{A}(t)$ ($\forall t,t_2$, except maybe for specific values of $\mathbf P$ and $\omega_k$);
  • $1$ and $-1$ are eigenvalues of $\mathbf{A}(t_3)\mathbf{A}(t_2)\mathbf{A}(t)$ ($\forall t,t_2,t_3$);
  • $1$ and $-1$ are $\color{red}{\text{not}}$ eigenvalues of $\mathbf{A}(t_4)\mathbf{A}(t_3)\mathbf{A}(t_2)\mathbf{A}(t)$ ($\forall t,t_2,t_3,t_4$, except maybe for specific values of $\mathbf P$ and $\omega_k$);
  • $\dots$
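The alternating pattern above is easy to probe numerically. This Python/NumPy sketch (the helper names, the random $\mathbf P$, $\omega_i$, $t_i$, and the thresholds are all my choices) measures how close the spectrum of $\mathbf{A}(t_m)\cdots\mathbf{A}(t_1)$ comes to $+1$ and $-1$; for odd $m$ the distances are at machine precision, while for even $m$ they are generically not small:

```python
import numpy as np

rng = np.random.default_rng(1)

def make_N(P):
    """N = I_n (+) P^{-1} diag(1, ..., 1, -1) P."""
    n = P.shape[0]
    J = np.diag([1.0] * (n - 1) + [-1.0])
    Z = np.zeros((n, n))
    return np.block([[np.eye(n), Z], [Z, np.linalg.solve(P, J @ P)]])

def make_S(t, w):
    """S(t) with diagonal blocks cos, sin/w, -w*sin, cos."""
    c, s = np.diag(np.cos(w * t)), np.diag(np.sin(w * t))
    return np.block([[c, s @ np.diag(1 / w)], [-np.diag(w) @ s, c]])

n = 3
N = make_N(rng.standard_normal((n, n)))
w = rng.uniform(0.5, 2.0, n)

def dist_to_pm1(m):
    """Min distance from the spectrum of A(t_m)...A(t_1) to +1 and to -1."""
    A = np.eye(2 * n)
    for t in rng.uniform(0.1, 5.0, m):
        A = N @ make_S(t, w) @ A
    ev = np.linalg.eigvals(A)
    return min(abs(ev - 1)), min(abs(ev + 1))

for m in (1, 2, 3, 4, 5):
    print(m, dist_to_pm1(m))   # tiny for odd m, O(1) for even m
```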

I managed to prove that $1$ and $-1$ are eigenvalues of $\mathbf{A}(t)$ by considering $\mathbf{S}(t)\pm\operatorname{diag}(1,\dots,1,-1,\dots,1)$, calculating its kernel, and building the appropriate vectors (without having to calculate them explicitly).

Also, I understand that the product of the eigenvalues of $\mathbf{A}(t_2)\mathbf{A}(t)$ is $1$, while that of $\mathbf{A}(t_3)\mathbf{A}(t_2)\mathbf{A}(t)$ is $-1$, but that alone does not prove anything.


Questions

1) Any suggestion for proving the observation above would be very welcome: why, apparently, are $1$ and $-1$ eigenvalues of $\prod_{i=1}^m \mathbf{A}(t_i)$ if and only if $m$ is odd?

2) Also, I have the impression that there exists a powerful mathematical framework for studying these matrices, but, not being a mathematician, I cannot figure out which one. Lie algebras, because $\mathbf S(t)$ is a matrix exponential? Galois groups, because the eigenvalues come in complex-conjugate pairs? Zariski topology, because @loup blanc mentioned it (see the end of his answer)?


Edit: A simple Mathematica file to reproduce the results is available here. Just play with the arguments of calculateEigenvals to change the dimension $n$ and/or the exponent $m$ (to prove: $1,-1$ are eigenvalues iff $m$ is odd).

anderstood
  • @user1551 I've added a link to a more general Mathematica file (any dimension, any exponent) -- the observations still stand. – anderstood Feb 28 '15 at 17:07
  • You do mean the lower right corner of $\mathbf{N}$ is $\mathbf{P}^{-1}\begin{bmatrix}1 & & & \\ & \ddots & & \\ & & 1 & \\ & & & -1 \end{bmatrix}\mathbf{P}=\mathbf{P}^{-1}\left(\mathbf{I}-\begin{bmatrix}0 & & & \\ & \ddots & & \\ & & 0 & \\ & & & 2 \end{bmatrix}\right)\mathbf{P}=\mathbf{I}-2\,\mathbf{P}^{-1}\begin{bmatrix}0 & & & \\ & \ddots & & \\ & & 0 & \\ & & & 1 \end{bmatrix}\mathbf{P}$? – Mark Hurd Mar 03 '15 at 05:29
  • @MarkHurd That's right. Note that I got an answer on MO which I will give also here once I have understood everything. – anderstood Mar 03 '15 at 15:38
  • 1
    @user1551: That's not exactly what he say, but if you slightly modify the answer: consider $(D\oplus D)$, then $(D\oplus D)S(t)(D\oplus D)^{-1}=S(t)$ because diagonal blocks commute. What is not clear to me is why such a $(D\oplus D)$ matrix exists (on a dense set of matrices)---see question here. – anderstood Mar 05 '15 at 15:18
  • @user1551: Ok for $P$ in general. For $M$, it is not always possible, but it seems to be "usually" possible (usually meaning on a dense set, I think). From some experiments in Mathematica, it seems that the information of row 1 contains that of rows $2,\dots,n$ ("usually"). So there would be $n$ equations ($(M-M^\top)_{1j}$) for $n$ variables ($D_{ii}$), which would prove by dimension count that $D$ exists on a dense set of $P$, I believe. – anderstood Mar 05 '15 at 16:51
  • @user1551: I'm OK with the fact that the set of all invertible matrices that are orthogonalisable via conjugation by a diagonal matrix is not dense in $M_n(\mathbb{R})$. But I don't see why the set $\{P\text{ invertible such that }\exists D\text{ diagonal s.t. }DP\operatorname{diag}(1,\dots,1,-1)P^{-1}D^{-1}\text{ is orthogonal}\}$ is not dense in $M_n(\mathbb{R})$. And please receive my thanks rather than apologising. Get well soon. – anderstood Mar 05 '15 at 19:16
  • 1
    Cross-posted on MO under the same title. – user1551 Feb 18 '17 at 09:21

1 Answer


Answer by Terry Tao on MO:


First, one should conjugate all matrices by $$ \begin{pmatrix} \operatorname{diag}(\omega_1,\dots,\omega_n) & 0 \\ 0 & 1 \end{pmatrix} $$ as this converts $S(t)$ to a rotation matrix while leaving the reflection $N$ unchanged.
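This first step can be checked numerically. In the sketch below (my own construction, with random $\omega_i$; the "$1$" in Tao's formula is read as the $n\times n$ identity block), conjugating $S(t)$ by $W=\operatorname{diag}(\omega_1,\dots,\omega_n)\oplus I_n$ yields the standard rotation blocks $\begin{pmatrix}\cos & \sin\\ -\sin & \cos\end{pmatrix}$, hence an orthogonal, orientation-preserving matrix:

```python
import numpy as np

rng = np.random.default_rng(2)
n, t = 3, 0.7
w = rng.uniform(0.5, 2.0, n)

# S(t) as defined in the question
c, s = np.diag(np.cos(w * t)), np.diag(np.sin(w * t))
S = np.block([[c, s @ np.diag(1 / w)], [-np.diag(w) @ s, c]])

# W = diag(omega_1, ..., omega_n) (+) I_n
W = np.diag(np.concatenate([w, np.ones(n)]))
R = W @ S @ np.linalg.inv(W)

print(np.allclose(R.T @ R, np.eye(2 * n)))  # True: R is orthogonal
print(np.isclose(np.linalg.det(R), 1.0))    # True: orientation-preserving
```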

The matrix $P^{-1} \operatorname{diag}(1,\dots,1,-1) P$ has a line as its $-1$ eigenspace and a hyperplane as its $+1$ eigenspace. By a limiting argument we may take $P$ to be in general position, so that the $-1$ eigenspace is not contained in any coordinate hyperplane and the $+1$ eigenspace does not contain any coordinate line. Then after applying a further conjugation by a diagonal matrix in the lower $n \times n$ block, we can arrange for the $-1$ and $+1$ eigenspaces to be orthogonal, without affecting $A(t)$.
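One concrete way to realize this diagonal conjugation (my own sketch, not part of Tao's answer): for $M=P^{-1}\operatorname{diag}(1,\dots,1,-1)P$, the $-1$ eigenspace is spanned by $v=P^{-1}e_n$ and the $+1$ eigenspace is the hyperplane with normal $u=P^\top e_n$ (the last row of $P$). Taking $D_{ii}=\sqrt{u_i/v_i}$ gives $D^{-1}u=Dv$, so the conjugated eigenspaces are orthogonal and $DMD^{-1}$ is an orthogonal reflection. This requires all ratios $u_i/v_i$ to be positive, which holds only for some $P$; that restriction is exactly where the genericity/density discussion in the comments, and Tao's limiting argument, come in.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 4
e_n = np.eye(n)[:, -1]

# Resample P until the sign condition u_i / v_i > 0 holds
# (it fails for many P, which is where the density discussion enters).
while True:
    P = rng.standard_normal((n, n))
    v = np.linalg.solve(P, e_n)  # spans the -1 eigenspace of M
    u = P[-1]                    # normal of the +1 eigenspace (last row of P)
    if np.all(u / v > 0):
        break

D = np.diag(np.sqrt(u / v))
J = np.diag([1.0] * (n - 1) + [-1.0])
M = np.linalg.solve(P, J @ P)             # M = P^{-1} J P
Mc = D @ M @ np.linalg.inv(D)             # conjugated reflection

print(np.allclose(Mc, Mc.T))              # symmetric: eigenspaces orthogonal
print(np.allclose(Mc @ Mc.T, np.eye(n)))  # hence an orthogonal involution
```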

After all these conjugations, $S(t)$ is now orthogonal and orientation preserving, while $N$ is orthogonal and orientation reversing. Then it is now clear that the product of any odd number of the $A(t)$ will be orientation-reversing orthogonal matrices. Such matrices have spectrum on the unit circle, symmetric with respect to conjugation, and multiplying to $-1$, hence must have an odd number of eigenvalues at $+1$ and also at $-1$. (If instead one multiplies an even number of $A(t)$ together, one obtains an even number of eigenvalues at $-1$ and at $+1$, but usually one expects to get zero eigenvalues at either.)