
I have the matrix differential equation:

$$\frac{\text{d}}{\text{d}t}\vec{x}(t)=A \cdot \vec{x}(t)$$

where:

$$\vec{x}(t) \in \mathbb{R}^3, \quad A=\begin{pmatrix} a & 0 & b \newline 0 & c & 0 \newline d & 0 & a \end{pmatrix}, \quad a,b,c,d \in \mathbb{R}$$

I need to find $a,b,c,d \in \mathbb{R}$ such that:

$$\lim_{t \to +\infty} \vec{x}(t)=\vec{0}$$

for every initial condition $\vec{x}(0) \in \mathbb{R}^3$.

The very short solution provided by the text is that the eigenvalues of the matrix $A$ must have negative real part. I can't understand why.


What I have done so far:

I know that the solution can be written in the form:

$$\vec{x}(t)=e^{tA} \cdot \vec{x}(0)$$

and I can easily find the eigenvalues of $A$:

$$c, \quad a-\sqrt{bd}, \quad a+\sqrt{bd}$$

If the real part must be negative then we must have:

  1. $c<0$;
  2. $a<0 \quad \text{if} \quad bd \le 0$ (when $bd<0$ the square root is imaginary, so both eigenvalues have real part $a$; when $bd=0$, $a$ is a double eigenvalue);
  3. $a<-\sqrt{bd} \quad \text{if} \quad bd>0$.
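As a sanity check (not part of the original argument), both the eigenvalue formula and the decay can be verified numerically with NumPy/SciPy. The values of $a,b,c,d$ below are hypothetical ones chosen to satisfy the conditions above:

```python
import numpy as np
from scipy.linalg import expm

# Hypothetical values satisfying the conditions:
# c < 0, and since bd = 4 > 0 we need a < -sqrt(bd) = -2.
a, b, c, d = -3.0, 1.0, -1.0, 4.0

A = np.array([[a, 0, b],
              [0, c, 0],
              [d, 0, a]])

# Eigenvalues should be c, a - sqrt(bd), a + sqrt(bd) = -1, -5, -1
print(sorted(np.linalg.eigvals(A).real))

# For any initial condition, ||e^{tA} x(0)|| shrinks as t grows
x0 = np.array([1.0, -2.0, 3.0])
for t in [1, 5, 10]:
    print(t, np.linalg.norm(expm(t * A) @ x0))
```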

1 Answer


If the matrix $A$ can be diagonalized then it can be written as

$$ A = U\Lambda U^{-1} $$

where $U$ is the matrix formed with the eigenvectors of $A$ and $\Lambda$ is the diagonal matrix formed with its eigenvalues.

$$ \Lambda = \pmatrix{\lambda_1 & 0 & \cdots & 0\\ 0 & \lambda_2 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & \lambda_n } $$

What's interesting about this representation is

\begin{eqnarray} A &=& U\Lambda U^{-1} \\ A^2 &=& (U\Lambda U^{-1})(U\Lambda U^{-1}) = U\Lambda^2 U^{-1}\\ &\vdots& \\ A^k &=& U\Lambda^k U^{-1} \end{eqnarray}

so that

$$ e^A = \sum_k \frac{A^k}{k!} = \sum_k\frac{U\Lambda^k U^{-1}}{k!} = U\left(\sum_k \frac{\Lambda^k}{k!}\right)U^{-1} = Ue^\Lambda U^{-1} $$

where

$$ e^\Lambda = \pmatrix{e^{\lambda_1} & 0 & \cdots & 0\\ 0 & e^{\lambda_2} & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & e^{\lambda_n} } $$
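This identity is easy to check numerically. A minimal sketch, using a hypothetical symmetric matrix so that diagonalizability is guaranteed:

```python
import numpy as np
from scipy.linalg import expm

# Verify e^A = U e^Lambda U^{-1} for a diagonalizable matrix.
rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))
A = A + A.T                        # symmetric => always diagonalizable

lam, U = np.linalg.eig(A)          # A = U diag(lam) U^{-1}
lhs = expm(A)                      # matrix exponential computed directly
rhs = U @ np.diag(np.exp(lam)) @ np.linalg.inv(U)

print(np.allclose(lhs, rhs))       # the two computations agree
```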

Now let's get back to your problem. As you correctly stated

$$ x(t) = e^{At}x(0) = U\pmatrix{e^{\lambda_1 t} & 0 & \cdots & 0\\ 0 & e^{\lambda_2 t} & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & e^{\lambda_n t} }U^{-1}x(0) $$

From this last expression you can see that if you want the solution to go to zero for every initial condition, then the terms $e^{\lambda_1 t}, e^{\lambda_2 t}, \cdots$ must all go to zero for large values of $t$, and that can only happen if the real parts of all the eigenvalues $\lambda_1, \cdots, \lambda_n$ are negative.
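Conversely, a single eigenvalue with positive real part already ruins convergence. A small sketch with a hypothetical diagonal matrix (so the eigenvalues are just the diagonal entries):

```python
import numpy as np
from scipy.linalg import expm

# Eigenvalues -1, -2, +0.5: one positive eigenvalue is enough to blow up
# any initial condition with a component along the unstable direction.
A = np.diag([-1.0, -2.0, 0.5])

x0 = np.array([1.0, 1.0, 1.0])
for t in [1.0, 10.0, 20.0]:
    print(t, np.linalg.norm(expm(t * A) @ x0))  # grows like e^{0.5 t}
```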

caverac
  • Thank you! But what if the matrix $A$ is not diagonalizable? – Leonardo Feb 27 '24 at 14:06
  • @Leonardo That's a good question. In the context of your problem, I think it is safe to assume that the matrix should be diagonalizable. I don't think there's a cookie-cutter solution for non-diagonalizable cases; it depends on the actual ODE and what other properties you can infer beyond its diagonal form – caverac Feb 27 '24 at 14:25
    Negative real parts of the eigenvalues are sufficient to show linear stability even in the non-diagonalizable case. Instead of pure exponentials, you get exponentials multiplied by polynomials in $t$, which still go to zero. This can be seen by exponentiating the Jordan normal form https://math.stackexchange.com/questions/1451276/matrix-exponential-for-jordan-canonical-form – whpowell96 Feb 27 '24 at 16:32
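To illustrate the point in that last comment, here is a small sketch with a $2\times 2$ Jordan block for the (hypothetical) eigenvalue $-1$: the exact exponential is $e^{tJ} = e^{-t}\pmatrix{1 & t \\ 0 & 1}$, a polynomial in $t$ times a decaying exponential, which still goes to zero.

```python
import numpy as np
from scipy.linalg import expm

# Defective matrix: eigenvalue -1 with only one eigenvector (a Jordan block)
J = np.array([[-1.0,  1.0],
              [ 0.0, -1.0]])

for t in [1.0, 10.0, 50.0]:
    E = expm(t * J)
    expected = np.exp(-t) * np.array([[1.0, t], [0.0, 1.0]])
    assert np.allclose(E, expected)       # closed form matches expm
    print(t, np.linalg.norm(E))           # decays despite the factor of t
```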