2

Given $A \in M_{n \times n}(\mathbb{C})$, how do I show that $e^{A} = I_{n} \iff$ $A$ is diagonalizable with eigenvalues in $2\pi i \mathbb{Z}$? I know that if $A = PDP^{-1}$ then $e^{A} = Pe^{D}P^{-1}$, but I am not sure how this helps. Moreover, the canonical definition is $e^{A} = \sum_{j = 0}^{\infty} \frac{1}{j!}A^{j}$. Given what I have said, I suppose that if $A$ is diagonalizable with eigenvalues in $2 \pi i \mathbb{Z}$, then $e^{A} = I_{n}$; but what about the forward implication?
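For intuition, the backward direction can be sanity-checked numerically; a sketch using SciPy's `expm` (the matrices below are made-up examples, not part of any proof):

```python
import numpy as np
from scipy.linalg import expm

# Hypothetical example: a diagonalizable A whose eigenvalues lie in 2*pi*i*Z.
rng = np.random.default_rng(0)
D = np.diag([0.0, 2j * np.pi, -2j * np.pi])
P = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
A = P @ D @ np.linalg.inv(P)

# e^A = P e^D P^{-1} = P I P^{-1} = I, since e^{2*pi*i*k} = 1.
print(np.allclose(expm(A), np.eye(3)))  # True (up to round-off)
```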

user100101212
  • 2,186
  • 8
  • 13
  • It should be sufficient to use the Jordan canonical form of $A$. – Helmut Dec 19 '19 at 20:55
  • Can you elaborate? – user100101212 Dec 19 '19 at 20:55
  • @user100101212 You can at least show that all eigenvalues are of the form required, by considering an eigenvalue, eigenvector and multiplying $A$ $j$ times and dividing by $j!$, then summing. – Raito Dec 19 '19 at 20:56
  • @user100101212 If $J_1,...,J_k$ are the diagonal blocks of a JCF and, say, $P^{-1}AP$ is this JCF, then the $e^{J_i}$ must all be equal to the identity. Now we can use that each $J_i$ is triangular... – Helmut Dec 19 '19 at 20:59
  • @Raito Are you suggesting the following: let $\lambda$ be an eigenvalue of $A$ with eigenvector $v$. Then $v = e^{A} v =\sum_{j = 0}^{\infty} \frac{1}{j!} A^{j}v = \sum_{j = 0}^{\infty} \frac{\lambda^{j}}{j!} v = e^{\lambda}v \implies \lambda \in 2 \pi i \mathbb{Z}$. Then it should be sufficient that $e^{A} = Pe^{D}P^{-1} = PP^{-1} = I$. Does this work? Moreover, can we assume that $A = PDP^{-1}$ for any matrix? – user100101212 Dec 19 '19 at 21:12
  • @user100101212 Not all matrices are diagonalizable. You still need to show diagonalizability. At least, you know that $A$ is triangularizable. – Raito Dec 19 '19 at 21:16
  • @Helmut I am not sure how to show that $A$ must be diagonalizable. Could it be something like: if one of the Jordan blocks is not $1 \times 1$, then the exponential is not a diagonal matrix? – user100101212 Dec 19 '19 at 21:48

3 Answers

1

For a somewhat lighter weight solution that just uses basic results about
0.) the preimage of $1$ under the (scalar) exponential map is $2\pi i n$ for integer $n$, which forces the eigenvalues of $A$ to have this form
1.) block triangular matrix multiplication
2.) linear independence of powers of nilpotent matrices
3.) with scalars in $\mathbb C$, any matrix is similar to an upper triangular one, say using Schur's Triangularization Theorem (though Jordan form works here of course)
4.) for commuting matrices $A$ and $B$, $e^{A+B} = e^A e^B$. This is typically developed in any text introducing the matrix exponential (and of course is implied e.g. by the Lie Product Formula).

for (1)
The needed fact is that
$\begin{bmatrix} R & *\\ \mathbf 0 & Y \end{bmatrix}^k = \begin{bmatrix} R^k & *\\ \mathbf 0 & Y^k \end{bmatrix}$
by direct multiplication, where $*$ denotes entries we are not concerned with
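This block-power identity is easy to confirm numerically (a throwaway example, with arbitrary blocks):

```python
import numpy as np

# Powers of a block upper triangular matrix keep the block structure,
# and the diagonal blocks are simply powered independently.
R = np.array([[1.0, 2.0], [0.0, 3.0]])   # arbitrary 2x2 top-left block
Y = np.array([[4.0]])                    # arbitrary 1x1 bottom-right block
M = np.block([[R, np.array([[5.0], [6.0]])],
              [np.zeros((1, 2)), Y]])

k = 4
Mk = np.linalg.matrix_power(M, k)
# bottom-left block stays zero; top-left block equals R^k
print(np.allclose(Mk[:2, :2], np.linalg.matrix_power(R, k)))  # True
print(np.allclose(Mk[2:, :2], 0))                             # True
```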

for (2)
For any non-zero $n \times n$ nilpotent matrix $N$ we have $N^j = \mathbf 0$ for some $1 \lt j \leq n$, where $N^{j-1} \neq \mathbf 0$.
The powers $N^k$ for $1\leq k \lt j$ are then linearly independent, i.e.
$\sum_{k=1}^{j-1} \alpha_k N^k = \mathbf 0 \implies$ each $\alpha_k = 0$
To prove this, suppose otherwise: then some non-trivial linear combination equals zero, and we can isolate the lowest power with non-zero coefficient (call the power $m$) and write it as a linear combination of the powers $\geq m+1$. Multiplying both sides by $N^{j-1-m}$ gives $\alpha_m N^{j-1} = \mathbf 0$, a contradiction.

It follows that $e^{N}-I = \sum_{k=1}^\infty \frac{N^k}{k!} = \sum_{k=1}^{j-1} \frac{1}{k!}N^k \neq \mathbf 0$, where the sum truncates because $N^k = \mathbf 0$ for $k \geq j$, and is non-zero by the linear independence above.
Per what follows in (3), $N$ is similar to a strictly upper triangular matrix, and $e^N$ is similar to a non-diagonal upper triangular matrix with ones on the diagonal.
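Point (2) can be illustrated numerically (a small hypothetical nilpotent matrix):

```python
import numpy as np
from scipy.linalg import expm

# For a non-zero nilpotent N, the series for e^N - I truncates at N^{j-1}
# and is non-zero, so e^N != I.
N = np.zeros((3, 3))
N[0, 1] = N[1, 2] = 1.0          # strictly upper triangular, N^3 = 0

E = expm(N)                       # equals I + N + N^2/2 since N^3 = 0
print(np.allclose(E, np.eye(3) + N + N @ N / 2))  # True
print(np.allclose(E, np.eye(3)))                  # False: e^N != I
```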

for (3) by Schur's Triangularization theorem, any matrix over $\mathbb C$ is unitarily similar to an upper triangular one. The decomposition need not be unique, but that does not matter for our purposes.

main argument:
if $e^A = I$ and $A$ is diagonalizable, then the argument works as outlined in the original post.

now suppose for a contradiction that $A$ is defective but it is still true that $e^A = I$.
We may select any eigenvalue with insufficient geometric multiplicity and name it $\lambda_1$, with algebraic multiplicity $r$ but
$1 \leq \text{geometric multiplicity of } \lambda_1 = g \lt r$

Applying Schur Triangularization, we may select the block in the top left corner to be an $r \times r$ upper triangular matrix with $\lambda_1$ on the diagonal. So
$U^{-1} A U = U^* A U = \begin{bmatrix} R_r & *\\ \mathbf 0 & Y_{n-r} \end{bmatrix} = \begin{bmatrix} R_r & *\\ \mathbf 0 & * \end{bmatrix}$
where we need only focus on the top left block matrix

Then
$\big(U^* A U\big)^k = U^* A^k U = \begin{bmatrix} R_r^k & *\\ \mathbf 0 & * \end{bmatrix}$ and

$I = U^* I U = U^* e^{A} U = e^{U^*A U} = \begin{bmatrix} e^{R_r} & *\\ \mathbf 0 & * \end{bmatrix} = \begin{bmatrix} I_r & *\\ \mathbf 0 & * \end{bmatrix}$

But $R_r = \lambda_1 I + N$, i.e. the sum of a scalar multiple of the identity and a non-zero nilpotent (strictly upper triangular) matrix. These commute.

$N$ is strictly upper triangular by construction, and non-zero because the geometric multiplicity of $\lambda_1$ is strictly less than the algebraic multiplicity. (That is, we have $g \lt r$ linearly independent eigenvectors with eigenvalue $\lambda_1$, which we may take to be mutually orthonormal via Gram–Schmidt; but then the $r-g$ remaining columns of $R_r$ must have non-zero entries above the diagonal, since otherwise $\dim \operatorname{null}\big(\lambda_1 I- A\big) \geq r$, contradicting $g \lt r$.)

Thus
$e^{R_r} = e^{\lambda_1 I + N} = e^{\lambda_1 I}e^{N} = e^{\lambda_1} I e^{N}= e^{\lambda_1} e^{N} = 1 \cdot e^{N} = e^N \neq I$
using $e^{\lambda_1} = 1$ from (0) and $e^N \neq I$ from (2): a contradiction.

Jordan Canonical Form would streamline the above, though it involves a lot more machinery.
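The contradiction above can also be seen numerically: a sketch (with a hypothetical $2 \times 2$ defective matrix) showing that a Jordan block with eigenvalue $2\pi i$ does not exponentiate to the identity:

```python
import numpy as np
from scipy.linalg import expm

# A defective A whose only eigenvalue is 2*pi*i (a single 2x2 Jordan
# block) still fails e^A = I: the nilpotent part survives exponentiation.
A = np.array([[2j * np.pi, 1.0],
              [0.0, 2j * np.pi]])       # R = lambda*I + N with N != 0

E = expm(A)                              # = e^{2 pi i} e^N = I + N
print(np.allclose(np.diag(E), 1.0))      # True: ones on the diagonal
print(np.isclose(E[0, 1], 1.0))          # True: superdiagonal entry remains
print(np.allclose(E, np.eye(2)))         # False: e^A != I
```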

user8675309
  • 12,193
0

Partial answer :


Suppose $\exp(A) = I_n$.

Let $\lambda$ be an eigenvalue of $A$.

There is an eigenvector $x$ such that $Ax = \lambda x$.

So that: $\forall j \in \mathbb{N}, \dfrac{A^j}{j!}x = \dfrac{\lambda^j}{j!} x$.

Now, summing the previous relation over $j$, we have: $\exp(A)x = \exp(\lambda) x$.

But, $\exp(A) = I_n$, so that: $I_n x = x = \exp(\lambda) x$.

Thus: $\exp(\lambda) = 1$.

Thus: $\lambda = 2\pi i k, k \in \mathbb{Z}$.

Thus: all eigenvalues of $A$ are contained in $2\pi i\mathbb{Z}$.
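This eigenvalue argument can be checked numerically (a made-up matrix with $e^A = I$; its eigenvalues come out as integer multiples of $2\pi i$):

```python
import numpy as np
from scipy.linalg import expm

# Hypothetical example: construct A with e^A = I, then confirm every
# eigenvalue is an integer multiple of 2*pi*i.
P = np.array([[1.0, 1.0], [0.0, 1.0]])          # any invertible P
A = P @ np.diag([2j * np.pi, -4j * np.pi]) @ np.linalg.inv(P)

print(np.allclose(expm(A), np.eye(2)))          # True: e^A = I here
ratios = np.linalg.eigvals(A) / (2j * np.pi)
print(np.allclose(ratios, np.round(ratios.real)))  # True: integers
```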

Raito
  • 1,940
  • 11
  • 17
0

Every matrix can be put in Jordan canonical form, i.e. there exists an (invertible) $S$ such that

$$ S^{-1} A S = D+ N $$

where $D$ is diagonal, $N$ is nilpotent (with a certain structure) and they commute with each other.

Hence you have

\begin{align} e^{A} = I &\Leftrightarrow S^{-1} e^{A} S = I \\ &\Leftrightarrow e^{D+N} = I \end{align}

Now write each Jordan block of $D+N$ as $\lambda_i I_{m_i} + D_i$, where $D_i$ is the nilpotent part of the block (ones on the superdiagonal, zeros elsewhere). It turns out that $$ e^{D+N} = \bigoplus_{i=1}^s e^{\lambda_i} \sum_{n=0}^{m_i-1} \frac{D_i^n}{n!} $$

where $s$ is the number of Jordan blocks and $m_i$ is the dimension of the $i$-th block.

The point now is that all the matrices $D_i$ are upper triangular and so are their powers. So in order to have

$$ e^{D+N} = I $$

you must have $\lambda_i = 2\pi i \, n_i$ with $n_i \in \mathbb{Z}$ and $m_i=1$ (i.e. all Jordan blocks are trivial and the matrix is diagonalizable).
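The per-block exponential $e^{\lambda I + N'} = e^{\lambda}\sum_{n=0}^{m-1} N'^n/n!$ (with $N'$ the nilpotent part of a block of size $m$) can be verified numerically; $\lambda$ and $m$ below are arbitrary choices:

```python
import numpy as np
from scipy.linalg import expm
from math import factorial

# For a Jordan block J = lambda*I + N of size m, the exponential is
# e^lambda times the (finite) exponential series of the nilpotent part.
m, lam = 4, 1.5
N = np.eye(m, k=1)                     # nilpotent part: ones on superdiagonal
J = lam * np.eye(m) + N

S = sum(np.linalg.matrix_power(N, n) / factorial(n) for n in range(m))
print(np.allclose(expm(J), np.exp(lam) * S))  # True
```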

You can read more on Jordan canonical form (and exponential) in this other MSE question:

Matrix exponential for Jordan canonical form

lcv
  • 2,730