6

Let $S$ be the space of all $n \times n$ real skew symmetric matrices and let $Q$ be a real orthogonal matrix. Consider the map $T_Q: S \to S$ defined by $$T_Q(X) = QXQ^T.$$ Find $\operatorname{det} T_Q$.

I thought about diagonalizing $Q$, but I don't think we know it is real diagonalizable. I can show it is an isometry using the Hilbert-Schmidt inner product, but I can't really relate it to the determinant of $Q$ (I've seen posts here that say the determinant should be $\det(Q)^{n-1}$). So all I know is that $\operatorname{det} T_Q = \pm1$. How would I find $\det T_Q$?

  • What is the transpose of $T_{Q}(X)$? Isn't it negative of itself, therefore zero? – RobertTheTutor Mar 03 '21 at 01:37
  • 1
    No, it means it is a real skew symmetric matrix. – INQUISITOR Mar 03 '21 at 01:41
  • Ah yes, of course. My bad. Next question, isn't $\det T_{Q}$ automatically +1? Because $\det(ABC) = \det(A)\det(B)\det(C)$, and shouldn't $Q$ and $Q^{T}$ have the same determinant? So it wouldn't matter if it were -1 or +1, it would come out to be 1 overall. – RobertTheTutor Mar 03 '21 at 01:48
  • @RobertTheTutor The determinant of $T_Q(X)$ for a given matrix $X$ has no direct relation to the determinant of the overall linear map $T_Q$. – Ben Grossmann Mar 03 '21 at 01:49
  • 1
    @RobertTheTutor I would recommend that you put some detailed thought into the case of $n = 2$. – Ben Grossmann Mar 03 '21 at 01:50
  • 1
    @RobertTheTutor Well, I am asking about the determinant of the operator $T_Q$, which is acting on a $\frac{n(n-1)}{2}$-dimensional vector space. – INQUISITOR Mar 03 '21 at 01:58
  • 1
    @INQUISITOR I misread your question, sorry about that – Ben Grossmann Mar 03 '21 at 02:08

3 Answers

2

revised proof:
I revised this into $2$ distinct proofs of the result, one analytic and one algebraic. In both cases, the key insight comes from examining a very simple reflection matrix
$D:= \displaystyle \left[\begin{matrix}-1 & \mathbf 0 \\ \mathbf 0 & I_{n-1}\end{matrix}\right]$

and computing
$T_D\mathbf B = \mathbf B \displaystyle \left[\begin{matrix}-I_{n-1} & \mathbf 0 \\ \mathbf 0 & I_{\binom{n}{2}-(n-1)}\end{matrix}\right]=\mathbf BA$
where $\mathbf B$ is a collection of well-chosen (skew symmetric matrix) basis vectors. This computation is shown at the very end under "Computing $T_D\mathbf B$"

The conclusion ultimately is that $\det\big(T_Q\big) =\det\big(Q\big)^{n-1}$
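Before either proof, the claimed identity is easy to spot-check numerically. The sketch below (assuming NumPy; `det_TQ` is my own helper name) builds the matrix of $T_Q$ in the basis of elementary skew matrices $E_{ij}-E_{ji}$ and compares its determinant with $\det(Q)^{n-1}$ for random orthogonal $Q$ drawn via QR:

```python
import numpy as np

def det_TQ(Q):
    """det of X -> Q X Q^T as an operator on n x n skew-symmetric matrices."""
    n = Q.shape[0]
    idx = [(i, j) for i in range(n) for j in range(i + 1, n)]  # basis E_ij - E_ji
    M = np.empty((len(idx), len(idx)))
    for c, (i, j) in enumerate(idx):
        v = np.zeros((n, n))
        v[i, j], v[j, i] = 1.0, -1.0
        img = Q @ v @ Q.T
        # coordinates of the image are its upper-triangular entries
        M[:, c] = [img[p, q] for (p, q) in idx]
    return np.linalg.det(M)

rng = np.random.default_rng(0)
for n in (2, 3, 4, 5):
    Q, _ = np.linalg.qr(rng.standard_normal((n, n)))  # random orthogonal Q
    assert np.isclose(det_TQ(Q), np.linalg.det(Q) ** (n - 1))
```

QR of a Gaussian matrix produces orthogonal factors of both determinants $\pm1$, so both cases get exercised over a few seeds.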

1.) analytic proof:
the determinant is a continuous, integer-valued function (taking values in $\big\{-1,+1\big\}$, as identified by OP, since $T_Q$ is an isometry), hence constant on any connected component: if two maps in this group are joined by a continuous path, they have the same determinant.
Case 1:
$\det\big(Q\big)=1$
Then $Q$ is path-connected to the identity, so $T_Q$ is path-connected to $T_I$ (where $T_IX = IXI=X$). Thus, $\det\big(T_Q\big) =\det\big(T_I\big) =1$.

Case 2:
$\det\big(Q\big)=-1$
$Q$ is path-connected to $D$, so $T_Q$ is path-connected to $T_D$ and
$\det\big(T_Q\big)=\det\big(T_D\big)=(-1)^{n-1}$

2.) algebraic proof:
observe that for $Q_1, Q_2 \in O_n\big(\mathbb R\big)$
$T_{(Q_1Q_2)}X = Q_1Q_2XQ_2^T Q_1^T=T_{Q_1}T_{Q_2}X$
and in the case $Q_1=Q_2^T$, we have $T_{Q_1}=T_{Q_2}^{-1}$

(ignoring the trivial $n=1$ case) for arbitrary $Q \in O_n\big(\mathbb R\big)$, first decompose $Q$ into a product of $r$ Householder matrices $H_j$, for some $1\leq r\leq n$, where we know $r$ is even if $\det\big(Q\big)=1$ and odd if $\det\big(Q\big)=-1$. In the below, $U_k$ and $W_k$ are orthogonal matrices of the appropriate dimensions.

$T_Q\mathbf B$
$= T_{(H_1H_2\cdots H_r)}\mathbf B$
$= T_{H_1}T_{H_2}\cdots T_{H_r}\mathbf B$
$= T_{U_1DU_1^T}T_{U_2DU_2^T}\cdots T_{U_rDU_r^T}\mathbf B$
$= \big(T_{U_1}T_DT_{U_1}^{-1}\big)\big(T_{U_2}T_{D}T_{U_2}^{-1}\big)\cdots \big(T_{U_r}T_DT_{U_r}^{-1}\big)\mathbf B$
$= \mathbf B \big(W_1 A W_1^{-1}\big)\big(W_2 A W_2^{-1}\big) \cdots \big(W_r A W_r^{-1}\big)$
$\implies \det\big(T_Q\big) = \det\big(W_1 A W_1^{-1}\big)\det\big(W_2 A W_2^{-1}\big) \cdots \det\big(W_r A W_r^{-1}\big)=\det\big(A\big)^r = \Big((-1)^{n-1}\Big)^r$
which is to say $\det\big(T_Q\big)=1$ if $n$ is odd or $\det\big(Q\big) = 1$, and
$\det\big(T_Q\big)=-1$ in the case of even $n$ and $\det\big(Q\big) =-1$; in all cases, $\det\big(T_Q\big)=\det\big(Q\big)^{n-1}$.

The technique here is to examine some group homomorphism (determinant) by decomposing the group into its generators (Householder matrices) and then examine how the generators look under the homomorphism.
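The Householder decomposition step can itself be checked numerically. The sketch below (assuming NumPy; `householder_factors` and its greedy column-by-column strategy are just one illustrative way to produce such a factorization) factors an orthogonal $Q$ into reflections and confirms the parity claim $(-1)^r = \det(Q)$:

```python
import numpy as np

def householder_factors(Q, tol=1e-10):
    """Factor orthogonal Q into Householder reflections, Q = H_1 H_2 ... H_r."""
    n = Q.shape[0]
    R = Q.copy()
    factors = []
    for k in range(n):
        x = R[k:, k].copy()      # unit vector: columns of an orthogonal R
        e = np.zeros_like(x)
        e[0] = 1.0
        u = x - e
        norm = np.linalg.norm(u)
        if norm > tol:           # skip when the column is already e_k
            u /= norm
            H = np.eye(n)
            H[k:, k:] -= 2.0 * np.outer(u, u)   # reflection sending x to e
            factors.append(H)
            R = H @ R
    # now H_r ... H_1 Q = I, and each H is an involution, so Q = H_1 ... H_r
    return factors

rng = np.random.default_rng(1)
for _ in range(4):
    Q, _ = np.linalg.qr(rng.standard_normal((5, 5)))
    Hs = householder_factors(Q)
    P = np.eye(5)
    for H in Hs:
        P = P @ H
    assert np.allclose(P, Q)                                # reconstruction
    assert np.isclose((-1.0) ** len(Hs), np.linalg.det(Q))  # parity of r
```

Each Householder matrix has determinant $-1$, so $\det(Q)=(-1)^r$, matching the even/odd split used in the proof. (The sign choice $u = x - e$ can be numerically delicate when $x \approx e$, but that case is skipped here.)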


Computing $T_D\mathbf B$:

$D:= \displaystyle \left[\begin{matrix}-1 & \mathbf 0 \\ \mathbf 0 & I_{n-1}\end{matrix}\right]$
now construct a simple basis for your space of real skew symmetric matrices (where $\mathbf e_k$ is the $k$th standard basis vector in $\mathbb R^n$).

$v_1 := \mathbf e_1\mathbf e_2^T-\big(\mathbf e_1\mathbf e_2^T\big)^T$
$v_2 := \mathbf e_1\mathbf e_3^T-\big(\mathbf e_1\mathbf e_3^T\big)^T$
$v_3 := \mathbf e_1\mathbf e_4^T-\big(\mathbf e_1\mathbf e_4^T\big)^T$
$\vdots$
$v_{\binom{n}{2}} := \mathbf e_{n-1}\mathbf e_{n}^T-\big(\mathbf e_{n-1}\mathbf e_{n}^T\big)^T$
(the pattern is hopefully clear -- each matrix is all zeros, except for a single $+1$ and a single $-1$)

collect these in
$\mathbf B:= \bigg[\begin{array}{c|c|c|c|c|c} v_1 & \cdots &v_{n-1}& v_n &\cdots & v_{\binom{n}{2}}\end{array}\bigg]$
(Artin would refer to this as a 'hyper-vector' though not many other texts use this term)

Now by inspection, $T_D$ leaves a vector (skew matrix) unchanged if it has all zeros in its first column/row. On the other hand, $T_D$ negates a vector (skew matrix) whose non-zero components all lie in its first column/row (where we recall that since the vector space is of real skew matrices, the diagonal is of course zero). Put differently

for $k\in\big\{1,2,...,n-1\big\}$
$T_Dv_k = -v_k$
(i.e. all skew-symmetric matrix basis vectors that have a one in the first row)

and for $k\in\big\{n, n+1,...,\binom{n}{2}\big\}$
$T_Dv_k = v_k$

to conclude
$T_D\mathbf B = \mathbf B \displaystyle \left[\begin{matrix}-I_{n-1} & \mathbf 0 \\ \mathbf 0 & I_{\binom{n}{2}-(n-1)}\end{matrix}\right]= \mathbf BA$

$\implies\det\big(T_D\big)= \det\big(A\big)= (-1)^{n-1}\cdot 1= (-1)^{n-1}$
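As a quick sanity check (a sketch assuming NumPy), the action of $T_D$ on this basis can be verified directly: the first $n-1$ basis vectors are negated, the rest are fixed, and the product of eigenvalues is $(-1)^{n-1}$.

```python
import numpy as np

n = 5
D = np.diag([-1.0] + [1.0] * (n - 1))   # the reflection D from above

signs = []
for i in range(n):
    for j in range(i + 1, n):           # basis ordering v_1, ..., v_{n(n-1)/2}
        v = np.zeros((n, n))
        v[i, j], v[j, i] = 1.0, -1.0    # v_k = E_ij - E_ji
        img = D @ v @ D.T
        # each v_k is an eigenvector of T_D; record its eigenvalue
        signs.append(img[i, j] / v[i, j])
        assert np.array_equal(img, signs[-1] * v)

assert signs[: n - 1] == [-1.0] * (n - 1)              # first row/column: negated
assert signs[n - 1 :] == [1.0] * (len(signs) - n + 1)  # all others: fixed
assert np.prod(signs) == (-1.0) ** (n - 1)             # det(T_D) = (-1)^(n-1)
```

The arithmetic here involves only entries $0$ and $\pm1$, so the equalities are exact in floating point.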

user8675309
  • 12,193
1

Let $A$ be a generic $n\times n$ matrix, so that its elements are $n^2$ indeterminates. Let $R=\mathbb Z[a_{11},a_{12},\ldots,a_{nn}]$. Then $M_n(R)$ is an $R$-module. Denote the submodules of all symmetric and skew-symmetric matrices in $M_n(R)$ by $\mathcal H$ and $\mathcal K$ respectively. Clearly, $\mathcal B_1=\{E_{ii}:1\le i\le n\}\cup\{E_{ij}+E_{ji}:1\le i<j\le n\}$ is a basis of $\mathcal H$, $\mathcal B_2=\{E_{ij}-E_{ji}:1\le i<j\le n\}$ is a basis of $\mathcal K$ and their disjoint union $\mathcal B=\mathcal B_1\sqcup\mathcal B_2$ is a basis of $M_n(R)$.

Define a linear map on $M_n(R)$ by $f:X\mapsto AXA^T$. Then $\mathcal H$ and $\mathcal K$ are invariant under $f$. By considering the matrix representation of $f$ with respect to the basis $\mathcal B$ or otherwise, we see that $\det f, \det f|_{\mathcal H}, \det f|_{\mathcal K}$ are elements of $R$ — that is, they are polynomials in $a_{11},a_{12},\ldots,a_{nn}$ with integer coefficients — and $\big(\det f|_{\mathcal H}\big)\big(\det f|_{\mathcal K}\big)=\det f=\det(A\otimes A)=(\det A)^{2n}$.

We now prove by mathematical induction on $n$ that $\det(A)$ is an irreducible element in $R$. The base case $n=1$ is trivial. In the inductive case, suppose $\det(A)$ is a product of two non-constant factors. Then one of them must be a linear polynomial in $a_{11}$ and the other has no occurrence of $a_{11}$, because $\det(A)$ is a linear polynomial in $a_{11}$. That is, $\det(A)=(a_{11}p+r)q$ for some polynomials $p,q$ and $r$ such that $a_{11}$ does not appear in any of them and $q$ is non-constant. Therefore $pq=m_{11}$, the $(1,1)$ minor of $A$. Since $m_{11}$ is irreducible by the induction hypothesis and $q$ is non-constant, we must have $q=\pm m_{11}$. In turn, $m_{11}$ divides $\det(A)$. Yet this is impossible, because when $A$ is partially specialised to the matrix $$ \pmatrix{a_{11}&a_{12}\\ &a_{22}&1\\ &&\ddots&\ddots\\ &&&\ddots&1\\ 1&&&&a_{nn}}, $$ $\det(A)=a_{11}a_{22}\cdots a_{nn}\pm a_{12}$ is not divisible by $m_{11}=a_{22}\cdots a_{nn}$. Hence $\det(A)$ must be an irreducible element in $R$.

It follows that in the generic case, we have $\det f|_{\mathcal K}=\pm(\det A)^k$ and $\det f|_{\mathcal H}=\pm(\det A)^{2n-k}$ for some integer $k$ between $0$ and $2n$. Yet, when $A$ is partially specialised to $\operatorname{diag}(a_{11},a_{22},\ldots,a_{nn})$, we may directly compute the determinants of the matrix representations of $f|_{\mathcal K}$ and $f|_{\mathcal H}$ with respect to $\mathcal B_2$ and $\mathcal B_1$ as $(\det A)^{n-1}$ and $(\det A)^{n+1}$ respectively. Hence we must have $k\ge n-1$ and $2n-k\ge n+1$ in the generic case, meaning that $k=n-1$ and $$ \det f|_{\mathcal K}=(\det A)^{n-1}. $$ Since this holds for a generic matrix $A$, it also holds when $A$ is specialised to $Q$. Therefore $\det T_Q=(\det Q)^{n-1}$. The orthogonality of $Q$ is irrelevant.
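The diagonal specialisation at the end can be illustrated with exact integer arithmetic (a minimal sketch; the sample entries are arbitrary): for $A = \operatorname{diag}(a_1,\ldots,a_n)$, each basis matrix $E_{ij}-E_{ji}$ is an eigenvector of $f|_{\mathcal K}$ with eigenvalue $a_i a_j$, and the product of these eigenvalues is $(\det A)^{n-1}$ because each $a_i$ occurs in exactly $n-1$ pairs.

```python
from math import prod
from itertools import combinations

# A = diag(a_1, ..., a_n): f(E_ij - E_ji) = A (E_ij - E_ji) A^T
#                                         = a_i a_j (E_ij - E_ji),
# so det f|_K is the product of the eigenvalues a_i a_j over i < j.
a = [2, 3, 5, 7, 11]                     # arbitrary sample diagonal entries
n = len(a)
det_restricted = prod(a[i] * a[j] for (i, j) in combinations(range(n), 2))
assert det_restricted == prod(a) ** (n - 1)   # = (det A)^(n-1)
```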

user1551
  • 149,263
0

Partial Answer: I will focus on the case that $n$ is even. The odd case can be handled similarly.

As a consequence of the block-diagonalizability of skew-symmetric matrices and the fact that $Q = \exp(P)$ holds for some skew-symmetric matrix $P$ (which, as noted in the comments, requires $\det Q = 1$), we can show that there exists an orthogonal matrix $W$ such that $$ WQW^T = D:= \pmatrix{A_1\\ & \ddots \\ && A_k}, \quad A_k = \pmatrix{a_k & -b_k\\ b_k & a_k}. $$ Note that $T_Q = T_W \circ T_D \circ T_W^{-1}$, so that $\det(T_Q) = \det(T_D)$. Decompose $D$ into a product $D = D_1 \cdots D_k$ where $$ D_1 = \pmatrix{A_1 \\ & I \\ & & \ddots \\ &&& I}, \dots, \quad D_k = \pmatrix{I \\ & \ddots \\ & & I \\ &&& A_k}. $$ Note that $T_D = T_{D_1} \circ \cdots \circ T_{D_k}$. With that, it suffices to determine $\det T_{D_k}$.


Another approach is to use the fact that there exists a skew-symmetric matrix $M$ for which $Q = \exp(M)$ (again, this requires $\det Q = 1$) and that $T_Q = \exp(C_M)$, where $$ C_M(X) = MX - XM. $$ From there, we can use the fact that $$ \det(T_Q) = \det(\exp(C_M)) = \exp(\operatorname{tr}(C_M)). $$ Once we show that $\operatorname{tr}(C_M) = (n-1)\operatorname{tr}(M)$, it follows that $$ \begin{align} \det(T_Q) &= \exp(\operatorname{tr}(C_M)) = \exp((n-1)\operatorname{tr}(M)) = \exp(\operatorname{tr}(M))^{n-1} \\ & = \det(\exp(M))^{n-1} = \det(Q)^{n-1}. \end{align} $$
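This second approach can be spot-checked numerically (a sketch assuming NumPy and SciPy's `expm`; `op_matrix` is my own helper name): draw a random skew-symmetric $M$, set $Q = \exp(M)$, build the matrices of $C_M$ and $T_Q$ on the skew-symmetric basis, and confirm $T_Q = \exp(C_M)$, $\operatorname{tr}(C_M) = (n-1)\operatorname{tr}(M)$ (both sides vanish since skew matrices are traceless), and $\det(T_Q) = \det(Q)^{n-1}$.

```python
import numpy as np
from scipy.linalg import expm

n = 4
idx = [(i, j) for i in range(n) for j in range(i + 1, n)]

def op_matrix(f):
    """Matrix of a linear map f on skew matrices, in the basis E_ij - E_ji."""
    mat = np.empty((len(idx), len(idx)))
    for c, (i, j) in enumerate(idx):
        v = np.zeros((n, n))
        v[i, j], v[j, i] = 1.0, -1.0
        img = f(v)
        mat[:, c] = [img[p, q] for (p, q) in idx]
    return mat

rng = np.random.default_rng(7)
B = rng.standard_normal((n, n))
M = B - B.T                              # random skew-symmetric M
Q = expm(M)                              # orthogonal with det(Q) = 1

C = op_matrix(lambda X: M @ X - X @ M)   # matrix of the commutator map C_M
T = op_matrix(lambda X: Q @ X @ Q.T)     # matrix of T_Q

assert np.allclose(expm(C), T)                          # T_Q = exp(C_M)
assert np.isclose(np.trace(C), (n - 1) * np.trace(M))   # both equal 0 here
assert np.isclose(np.linalg.det(T), np.linalg.det(Q) ** (n - 1))
```

The first assertion is the classical identity $\mathrm{Ad}_{\exp M} = \exp(\mathrm{ad}_M)$ in matrix form, restricted to the skew-symmetric subspace (which $\mathrm{ad}_M$ preserves when $M$ is skew).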

Ben Grossmann
  • 234,171
  • 12
  • 184
  • 355
  • Do we know that $Q$ is orthogonally diagonalizable? I thought we only know it is unitarily diagonalizable. – INQUISITOR Mar 03 '21 at 02:51
  • 1
    @INQUISITOR Orthogonally block diagonalizable, but not necessarily diagonalizable. – Ben Grossmann Mar 03 '21 at 02:52
  • I see, but that's only when it's a $2n \times 2n$ matrix. Also, how do you know that $Q = e^P$ for some skew symmetric matrix $P$? I haven't come across that fact yet. – INQUISITOR Mar 03 '21 at 03:01
  • @user You’re right – Ben Grossmann Mar 03 '21 at 12:14
  • @Inquisitor in fact, it only holds for orthogonal matrices with determinant $1$. I don’t remember how the proof goes off hand – Ben Grossmann Mar 03 '21 at 12:17
  • Btw, what is the issue if we consider $T_U$, where $U$ is a unitary matrix? What I'm asking is, what is wrong with considering the unitary diagonalization of $Q$, and working with that instead? Would the determinant not be the same? – INQUISITOR Apr 14 '21 at 18:56
  • @INQUISITOR You have to be a bit careful there. For instance, $T_U$ would no longer define a map over the real skew-symmetric matrices. – Ben Grossmann Apr 15 '21 at 16:22
  • Would it not still act on real skew symmetric matrices? I am having difficulty not relating this to say $\begin{pmatrix} 0&1\\ 1&0 \end{pmatrix}$ and the similar matrix $\operatorname{diag}[i,-i]$ – INQUISITOR Apr 15 '21 at 16:37
  • If $U$ is a unitary matrix and $A$ is real and skew symmetric, we cannot guarantee that $U^*AU$ is real and skew symmetric – Ben Grossmann Apr 15 '21 at 16:38
  • Would the map not send $A$ to $UAU^T$, and $(UAU^T)^T = -UAU^T$? The output may not be real skew symmetric anymore, but it would be skew symmetric no? – INQUISITOR Apr 15 '21 at 16:44
  • 1
    You're right, it should be a transpose rather than a conjugate transpose. It would not be real, and that is significant. Ultimately, what you're trying to do is make a statement about $T_Q$ using the properties of the induced map over the complexification of $S$, which is fine but requires care. – Ben Grossmann Apr 15 '21 at 16:48
  • Ok, so is the idea that the determinant of $T_Q$ is equal to the determinant of the induced map of $T_Q$ over the complexification of $S$ correct in this case? I'm not too familiar with complexification of a real vector spaces, so I'd just like to know before I start reading up on it. – INQUISITOR Apr 15 '21 at 16:56
  • @INQUISITOR Yes, that's right. – Ben Grossmann Apr 15 '21 at 16:58
  • @ Ben Grossmann Thank you. – INQUISITOR Apr 15 '21 at 17:00