
Let $\mathcal{A}\subset\mathrm{M}_n(\mathbb{C})$ be a subset of $n\times n$ complex matrices.

Question: Under what conditions may one find unitary matrices $U$ and $V$ such that $UAV$ is a diagonal matrix for each matrix $A\in\mathcal{A}$?


This is not the usual problem of simultaneous diagonalization, since we are not requiring that $V=U^*$, which is why I'm calling this the problem of "simultaneous bi-diagonalization".

I believe we may be able to find necessary and sufficient conditions for the family $\mathcal{A}$ to be simultaneously bi-diagonalizable that depend on the sets $$ \mathcal{A}^*\mathcal{A} = \{A^*B\, :\, A,B\in\mathcal{A}\} \quad\text{and}\quad \mathcal{A}\mathcal{A}^* = \{AB^*\, :\, A,B\in\mathcal{A}\}. $$ Indeed, if $U$ and $V$ are unitary matrices such that each matrix in $U\mathcal{A}V$ is diagonal, then $$ (UAV)^*(UBV) = V^*(A^*B)V \quad\text{and}\quad (UAV)(UBV)^* = U(AB^*)U^* $$ are diagonal for all $A,B\in\mathcal{A}$, so the families $V^*\mathcal{A}^*\mathcal{A}V$ and $U\mathcal{A}\mathcal{A}^*U^*$ consist of diagonal matrices. This implies that $\mathcal{A}^*\mathcal{A}$ and $\mathcal{A}\mathcal{A}^*$ are commuting families of normal matrices. This leads to the following conjecture:

Conjecture: The family $\mathcal{A}$ is simultaneously bi-diagonalizable if and only if both of the families $\mathcal{A}^*\mathcal{A}$ and $\mathcal{A}\mathcal{A}^*$ are commuting.

How might one prove this?
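As a quick numerical sanity check of the necessity direction (the computation $(UAV)^*(UBV) = V^*(A^*B)V$ above), here is a NumPy sketch. The family is bi-diagonalizable by construction; `rand_unitary` is an ad hoc helper, not a library function.

```python
import numpy as np

def rand_unitary(n, rng):
    # Q factor of a random complex Gaussian matrix is unitary
    q, _ = np.linalg.qr(rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n)))
    return q

rng = np.random.default_rng(0)
n, N = 4, 3
U0, V0 = rand_unitary(n, rng), rand_unitary(n, rng)

# A_i = U0* D_i V0*, so that U0 A_i V0 is diagonal by construction
As = [U0.conj().T @ np.diag(rng.normal(size=n) + 1j * rng.normal(size=n)) @ V0.conj().T
      for _ in range(N)]

# both product families should then consist of commuting (normal) matrices
for star in (lambda A, B: A.conj().T @ B,   # elements of A*A
             lambda A, B: A @ B.conj().T):  # elements of AA*
    prods = [star(A, B) for A in As for B in As]
    assert all(np.allclose(X @ Y, Y @ X) for X in prods for Y in prods)

print("A*A and AA* both commute, as the necessity argument predicts")
```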

Note that one may suppose without loss of generality that $\mathcal{A}=\mathrm{span}(\mathcal{A})$ (i.e., $\mathcal{A}$ is a linear subspace of matrices).


If $\mathcal{A}$ contains at least one invertible matrix, then it suffices to assume only that $\mathcal{A}^*\mathcal{A}$ is a commuting family.

Partial proof of the conjecture: Suppose that $\mathcal{A}^*\mathcal{A}$ is a commuting family and that $\mathcal{A}$ contains at least one invertible matrix. Note that each matrix in $\mathcal{A}^*\mathcal{A}$ is normal: $A^*B\in\mathcal{A}^*\mathcal{A}$ implies $(A^*B)^*=B^*A\in\mathcal{A}^*\mathcal{A}$, so $A^*B$ commutes with its own adjoint. A commuting family of normal matrices is simultaneously diagonalized by a single unitary, so after replacing $\mathcal{A}$ with $\mathcal{A}W$ for a suitable unitary $W$, we may assume without loss of generality that $A^*B$ is diagonal for each $A,B\in\mathcal{A}$.

Let $A\in\mathcal{A}$ be an invertible matrix and let $A=UP$ be its polar decomposition, where $U$ is unitary and $P$ is positive definite. The matrix $P$ is diagonal, since $P^2 = A^*A$ is assumed to be diagonal and $P$ is the unique positive square root of $A^*A$. For each matrix $B\in\mathcal{A}$, one has that $$ PU^*B = A^*B $$ is a diagonal matrix. Since $P$ is diagonal and invertible, it follows that $U^*B = P^{-1}(A^*B)$ is diagonal. Hence each matrix in $U^*\mathcal{A}$ is diagonal.
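The argument above is effectively an algorithm. Here is a minimal NumPy sketch of it, under the simplifying assumption that $A_1^*A_1$ has distinct eigenvalues (true generically), so that a single eigendecomposition already produces the unitary of the WLOG reduction; in general one would have to simultaneously diagonalize the whole commuting family.

```python
import numpy as np

def bidiagonalize(As):
    """Return unitaries U, V with U @ A @ V diagonal for every A in As,
    assuming {A* B : A, B in As} commutes and As[0] is invertible.
    Simplification: V comes from the eigenvectors of A1* A1 alone, which
    diagonalizes the whole family only when its eigenvalues are distinct."""
    A1 = As[0]
    # WLOG step: unitary V making each V*(A* B)V diagonal
    w, V = np.linalg.eigh(A1.conj().T @ A1)   # A1* A1 is positive definite
    # Polar step: A1 V = U P with P^2 = V*(A1* A1)V = diag(w), so P is diagonal
    U = A1 @ V @ np.diag(1.0 / np.sqrt(w))    # U = (A1 V) P^{-1} is unitary
    # Then U*(B V) = P^{-1} (A1 V)*(B V) is diagonal for every B in As
    return U.conj().T, V
```

Applied to the family `As` from the previous snippet, `U, V = bidiagonalize(As)` gives `U @ A @ V` diagonal, up to floating-point noise, for every `A` in `As`.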


This leads to a few questions:

  • How might one proceed if we cannot assume that $\mathcal{A}$ contains an invertible matrix?
  • If we were to assume further that there exist matrices $A_1,\dots,A_N\in\mathcal{A}$ such that $A_1^*A_1+\cdots+A_N^*A_N = I$, is this enough to show that $\mathcal{A}$ is bi-diagonalizable? (Edit: Yes! From the answer to my related question, we see that this implies the existence of an invertible matrix in $\mathrm{span}(\mathcal{A})$ and we may use the partial proof above.)
  • Observation: if $\mathcal A$ is a subspace not containing an invertible matrix and $\mathcal A^*\mathcal A$ commutes, then we can assume without loss of generality that the (diagonal) matrices of $\mathcal A^*\mathcal A$ have non-zero entries only in the leading $r$ diagonal entries. Correspondingly, only the first $r$ columns of a matrix $A \in \mathcal A$ can be non-zero (see the sketch after this list). – Ben Grossmann Jan 10 '20 at 18:08
  • Another way of framing the above observation: $\mathcal A$ is essentially a subspace of $n \times r$ matrices. – Ben Grossmann Jan 10 '20 at 18:12
  • Actually, your partial proof can be extended to the case where $\mathcal A$ has no invertible matrices; you just need to select an $A \in \mathcal A$ of maximal rank. – Ben Grossmann Jan 10 '20 at 18:14
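To make the first observation concrete, here is a small continuation of the earlier NumPy sketches (same genericity assumption, same ad hoc `rand_unitary` helper): for a rank-deficient family with commuting $\mathcal A^*\mathcal A$, after the WLOG rotation the common support of the diagonals $B^*B$ has size $r$, and every column of every matrix outside that support vanishes.

```python
import numpy as np

def rand_unitary(n, rng):  # same ad hoc helper as in the first snippet
    return np.linalg.qr(rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n)))[0]

rng = np.random.default_rng(1)
n, r, N = 5, 3, 4
U0, V0 = rand_unitary(n, rng), rand_unitary(n, rng)

# rank-r family: the diagonal factors vanish outside their first r entries
diags = [np.concatenate([rng.normal(size=r) + 1j * rng.normal(size=r), np.zeros(n - r)])
         for _ in range(N)]
As = [U0.conj().T @ np.diag(d) @ V0.conj().T for d in diags]

# WLOG rotation by the eigenvectors of A1* A1 (generic enough here, as before)
_, V = np.linalg.eigh(As[0].conj().T @ As[0])
Bs = [A @ V for A in As]

# common support of the diagonal matrices B* B
support = np.zeros(n, dtype=bool)
for B in Bs:
    support |= np.abs(np.diagonal(B.conj().T @ B)) > 1e-10

# columns outside the support are zero: the family is really n x r
assert support.sum() == r
assert all(np.allclose(B[:, ~support], 0) for B in Bs)
```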
