
The following is a theorem:

If $A$ is a self-adjoint matrix (i.e. $A^\dagger = A$), then $U = e^{iA}$ is a unitary matrix.

This is easy to prove: $(e^{iA})^\dagger = e^{-iA^\dagger} = e^{-iA}$, where the last step uses the self-adjointness of $A$. Since $iA$ and $-iA$ commute, so do $e^{iA}$ and $e^{-iA}$, and we can write $e^{iA}e^{-iA}=e^{iA-iA}=e^{0}=I$, where $I$ is the identity matrix. This shows that $(e^{iA})^{-1}=(e^{iA})^\dagger$, and hence that $e^{iA}$ is unitary.
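(As a quick numerical sanity check, not part of the proof: the following sketch, assuming NumPy and SciPy are available, verifies this on a random Hermitian matrix.)

```python
import numpy as np
from scipy.linalg import expm

# Sketch: build a random self-adjoint A and check that U = e^{iA} is unitary.
rng = np.random.default_rng(42)
B = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
A = (B + B.conj().T) / 2                         # A^dagger = A by construction

U = expm(1j * A)
print(np.allclose(U.conj().T @ U, np.eye(3)))    # True: U^dagger U = I
```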

This is all well and good. However, the following (the converse) is not a theorem:

If $U$ is a unitary matrix, then any matrix $A$ satisfying $e^{iA} = U$ is self-adjoint.

I have been led to believe that the best way to disprove this is by counterexample, but I have no idea how to construct a non-self-adjoint matrix $A$ for which $e^{iA}$ is still unitary.

An explicit counterexample would be great, but an approach to figuring out how to find a counterexample would be even better.

senshin
  • @DanielFischer In the 1-by-1 case (scalars), $e^{i(2\pi i)}=e^{-2\pi}$ isn't unitary (inverse is $e^{2\pi}$; conj. transpose is $e^{-2\pi}$, and those aren't equal). Am I misunderstanding you? – senshin Sep 26 '13 at 19:25
  • Oops, thought of $e^A$, sorry. – Daniel Fischer Sep 26 '13 at 19:28
  • $U^{\dagger}U = 1 \;\Longrightarrow\; e^{-iA^{\dagger}}e^{iA} = 1$. In general, $e^{-iA^{\dagger}}e^{iA} \neq e^{i(-A^{\dagger} + A)}$ unless $[A, A^{\dagger}] = 0$. – Felix Marin Sep 26 '13 at 22:37
  • See the Baker–Campbell–Hausdorff formula on Wikipedia. – Felix Marin Sep 26 '13 at 22:45

2 Answers


$U$ is unitary, hence diagonalisable. So $A$ must be diagonalisable too, because the matrix exponential of a nontrivial Jordan block is not diagonalisable. Let $A=PDP^{-1}$ be an eigendecomposition. Since $U=e^{iA}$ is unitary, its eigenvalues $e^{id_j}$ (where $d_1,\ldots,d_n$ are the diagonal entries of $D$) have modulus $1$, so the $d_j$ must be real. Hence $e^{iD}$ is a diagonal matrix with unimodular diagonal entries.

But then $e^{iA}=Pe^{iD}P^{-1}$ is in general not unitary. To make $e^{iA}$ unitary without using a unitary $P$, one good way is to force $e^{iD}=I$, so that $U=e^{iA}=Pe^{iD}P^{-1}=I$ for every choice of $P$. Yet $D$ cannot be a scalar multiple of $I$, for otherwise $A$ would be a scalar multiple of $I$, which is Hermitian. Therefore, $D$ should be chosen so that its diagonal entries are distinct integer multiples of $2\pi$. For such choices, almost every randomly chosen $P$ makes $A=PDP^{-1}$ non-Hermitian. For instance, $$ A=\pmatrix{1&1\\ 0&1}\pmatrix{2\pi&0\\ 0&0}\pmatrix{1&1\\ 0&1}^{-1} =\pmatrix{2\pi&-2\pi\\ 0&0}\ \Rightarrow\ e^{iA}=I. $$
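One can confirm this numerically; here is a small sketch, assuming SciPy's `expm` is available:

```python
import numpy as np
from scipy.linalg import expm

# The counterexample: A is not Hermitian, yet e^{iA} = I is unitary.
A = np.array([[2 * np.pi, -2 * np.pi],
              [0.0,        0.0      ]])

print(np.allclose(A, A.conj().T))            # False: A is not self-adjoint
print(np.allclose(expm(1j * A), np.eye(2)))  # True:  e^{iA} = I
```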

user1551

To complement the great and simple counterexample of user1551, let me add the following related result:

Theorem. Given $A\in\mathbb C^{n\times n}$ the following statements are equivalent:

  1. There exist $t_1,t_2\in\mathbb R$ rationally independent such that $e^{iAt_1},e^{iAt_2}$ are both unitary
  2. $A$ is Hermitian

So while $e^{iA}$ can be unitary without $A$ being Hermitian (as seen above), if we additionally knew that, say, $e^{iA\sqrt2}$ is unitary, then we could in fact conclude that $A$ is Hermitian. In a way this is the weakest possible formulation of a converse to "$A$ Hermitian $\Rightarrow$ $e^{iA}$ unitary". One could probably prove the above result elegantly using representation theory, but I will instead opt for a more elementary proof, at the core of which sits the following lemma:

Lemma. Let $D,X\in\mathbb C^{n\times n}$ and $t_1,t_2\in\mathbb R$ be given such that $D$ is Hermitian and $t_1,t_2$ are rationally independent. If $[X,e^{iDt_1}]=[X,e^{iDt_2}]=0$, then $[X,D]=0$.

Proof. Because $D$ is Hermitian we can apply the spectral decomposition to obtain $D=\sum_{j=1}^m d_jP_j$ for some $m\in\mathbb N$, pairwise distinct $d_1,\ldots,d_m\in\mathbb R$, and spectral projections $P_1,\ldots,P_m$ (i.e. $P_j^*=P_j$, $P_jP_k=\delta_{jk}P_k$, and $\sum_{j=1}^m P_j={\bf1}$). If we can show that $P_jXP_k=0$ for all $j\neq k$, then we are done, because
\begin{align*} [X,D]&=\Big(\sum_{j=1}^mP_j\Big)[X,D]\Big(\sum_{k=1}^mP_k\Big)\\ &=\sum_{j,k=1}^m\big(P_jXDP_k-P_jDXP_k\big)\\ &=\sum_{j,k=1}^m(d_k-d_j)\underbrace{P_jXP_k}_{=0\text{ unless }j=k}=\sum_{j=1}^m(d_j-d_j)P_jXP_j=0\,. \end{align*}
Now for any $j\neq k$ we know by assumption that
$$ P_jXP_k(e^{id_kt_1}-e^{id_jt_1})= P_j[X,e^{iDt_1}]P_k =0=P_j[X,e^{iDt_2}]P_k=P_jXP_k(e^{id_kt_2}-e^{id_jt_2}), $$
from which the desired conclusion follows if $e^{id_kt_1}\neq e^{id_jt_1}$ or $e^{id_kt_2}\neq e^{id_jt_2}$. But $e^{i(d_k-d_j)t_1}=1$ and $e^{i(d_k-d_j)t_2}=1$ cannot both hold: together with $d_j\neq d_k$, the conditions $(d_k-d_j)t_1,(d_k-d_j)t_2\in2\pi\mathbb Z$ would imply that $t_1,t_2$ are rationally dependent, a contradiction. $\square$
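(As an aside, the commutator identity at the heart of this proof is easy to verify numerically. A small sketch, assuming NumPy, using rank-one spectral projections from `eigh`:)

```python
import numpy as np

# Check [X, D] = sum_{j,k} (d_k - d_j) P_j X P_k for Hermitian D and arbitrary X,
# with P_j the rank-one spectral projections of D.
rng = np.random.default_rng(0)
n = 4
B = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
D = (B + B.conj().T) / 2                       # random Hermitian matrix
X = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))

d, V = np.linalg.eigh(D)                       # D = sum_j d[j] * P[j]
P = [np.outer(V[:, j], V[:, j].conj()) for j in range(n)]

lhs = X @ D - D @ X
rhs = sum((d[k] - d[j]) * P[j] @ X @ P[k] for j in range(n) for k in range(n))
print(np.allclose(lhs, rhs))                   # True
```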

Proof of theorem. 2. $\Rightarrow$ 1. is the known direction. For 1. $\Rightarrow$ 2. we first re-use user1551's argument that $A$ has to be diagonalizable with real eigenvalues, i.e. $A=SDS^{-1}$ for some real diagonal $D\in\mathbb R^{n\times n}$. Now $e^{iAt_j}=Se^{iDt_j}S^{-1}$ is unitary for both $j=1,2$, which is equivalent to $(S^{-1})^*e^{-iDt_j}S^*Se^{iDt_j}S^{-1}={\bf1}$, i.e. $[S^*S,e^{iDt_j}]=0$. Because $t_1,t_2$ are rationally independent, the previous lemma lets us conclude that $[S^*S,D]=0$ as well. But this completes the proof:
$$ A^*=(S^{-1})^*DS^*=(S^{-1})^*\underbrace{DS^*S}_{=S^*SD}S^{-1}=(S^{-1})^*S^*SDS^{-1}=SDS^{-1}=A\tag*{$\square$} $$
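To see the theorem in action on user1551's counterexample: $e^{iAt}$ is unitary at $t=1$ but not at $t=\sqrt2$, consistent with $A$ not being Hermitian. A numerical sketch (again assuming SciPy's `expm`):

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[2 * np.pi, -2 * np.pi],
              [0.0,        0.0      ]])

def is_unitary(M):
    """Check M^dagger M = I up to floating-point tolerance."""
    return np.allclose(M.conj().T @ M, np.eye(M.shape[0]))

print(is_unitary(expm(1j * A)))               # True:  t = 1
print(is_unitary(expm(1j * np.sqrt(2) * A)))  # False: t = sqrt(2)
```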

Frederik vom Ende