
Consider the differential equation

$$ \frac{\text{d} y}{\text{d} t} = \left(c_0 + c_1 t^{-1} + c_2 t^{-2} + \dots + c_n t^{-n} \right) y ;\quad y(1) = y_0 > 0; \ t \geq 1, $$

where $c_i > 0$ for each $i$. This can be solved directly, and we find, for some $\alpha \neq 0$, that $$ y(t) \sim \alpha t^{c_1} e^{c_0 t}, $$ where $f(t) \sim g(t)$ means $\lim_{t \to \infty} f(t)/g(t) = 1$.
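As a numerical sanity check of this scalar asymptotic, here is a sketch that integrates $u = \log y$ with SciPy (working in log space avoids overflow); the constants $c_0, c_1, c_2$ are arbitrary illustrative choices with $n = 2$:

```python
import numpy as np
from scipy.integrate import solve_ivp

# d(log y)/dt = c0 + c1/t + c2/t^2, with log y(1) = log y0
c0, c1, c2, y0 = 1.5, 2.0, 0.5, 1.0

def rhs(t, u):
    return [c0 + c1 / t + c2 / t**2]

# if y ~ alpha * t^{c1} * e^{c0 t}, then log y - c0*t - c1*log(t) -> log(alpha)
offsets = []
for T in (50.0, 100.0, 200.0):
    sol = solve_ivp(rhs, (1.0, T), [np.log(y0)], rtol=1e-10, atol=1e-12)
    offsets.append(sol.y[0, -1] - c0 * T - c1 * np.log(T))

# offsets should stabilize; for these constants the exact limit is
# log(y0) - c0 + c2 = -1
```

The exact solution here is $\log y = \log y_0 + c_0(t-1) + c_1 \log t + c_2(1 - 1/t)$, so the offsets converge to $\log \alpha = \log y_0 - c_0 + c_2$.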

I'm interested in the generalization of this result to the case of a system of differential equations. Basically, I think something like the following is probably true or almost true:

Prop. 1. Let $A_0$ be a $d \times d$ non-negative irreducible matrix with Perron-Frobenius eigenvalue $\lambda$, and let $A_1, A_2, \dots, A_n$ be non-negative $d \times d$ matrices. Then the system $$ \frac{\text{d} \mathbf{y}}{\text{d} t} = \left(A_0 + A_1 t^{-1} + A_2 t^{-2} + \dots + A_n t^{-n} \right) \mathbf{y} ;\quad \mathbf{y}(1) = \mathbf{y}_0 > \mathbf{0}; \ t \geq 1, $$ satisfies, for some $\alpha_i \neq 0$ and some $\beta_i$, $$ |\mathbf{y}_i(t)| \sim \alpha_i t^{\beta_i} e^{\lambda t}, \quad 1 \leq i \leq d. $$

Question: I'm looking for a reference showing that Prop. 1 is true, or true under some additional technical condition. It seems similar to Theorem 19.1 of *Asymptotic Expansions for Ordinary Differential Equations* by Wasow, but I don't fully see the connection.


Mr. G

1 Answer


Let's derive the asymptotic result of Prop. 1 directly.

Diagonalize the matrix $A_0$ (assuming it is diagonalizable), with $D$ the diagonal matrix whose diagonal entries are the eigenvalues $\lambda_i$, $i=1,...,d$, and $U$ an invertible matrix whose inverse has the corresponding eigenvectors as columns: $$A_0 = U^{-1}DU$$ The system of equations is then equivalent to $$\begin{align} &\phantom{\Longleftrightarrow} \frac{d }{d t}\mathbf{y} = \left(U^{-1}DU + A_1 t^{-1} + A_2 t^{-2} + \dots + A_n t^{-n} \right) \mathbf{y}\\ &\Longleftrightarrow \frac{d }{d t}(U\mathbf{y}) = \left(DU + UA_1 t^{-1} + UA_2 t^{-2} + \dots + UA_n t^{-n} \right) \mathbf{y} \\ &\Longleftrightarrow \frac{d }{d t}(U\mathbf{y}) = \left(D + UA_1U^{-1} t^{-1} + UA_2U^{-1} t^{-2} + \dots + UA_nU^{-1} t^{-n} \right) (U\mathbf{y}) \tag{1}\\ \end{align}$$
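As a sketch of this decomposition with NumPy (a random entrywise-positive matrix serves as a stand-in for $A_0$; note that `np.linalg.eig` returns $A_0 = P\,\mathrm{diag}(\lambda)\,P^{-1}$ with eigenvectors in the columns of $P$, so $U = P^{-1}$ in the notation above):

```python
import numpy as np

rng = np.random.default_rng(0)
A0 = rng.random((3, 3))    # entrywise positive, hence nonnegative and irreducible

w, P = np.linalg.eig(A0)   # columns of P are eigenvectors: A0 = P diag(w) P^{-1}
U = np.linalg.inv(P)       # so A0 = U^{-1} D U with D = diag(w)
D = np.diag(w)

assert np.allclose(U @ A0 @ np.linalg.inv(U), D)   # U A0 U^{-1} = D
```

The eigenvalues `w` may be complex in general; for a positive matrix the Perron-Frobenius root `w.real.max()` is real and positive.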

Let's denote $\mathbf{z} = U\mathbf{y}$, then

$$(1) \Longleftrightarrow \frac{d }{d t}\mathbf{z} = \left(D + UA_1U^{-1} t^{-1} + UA_2U^{-1} t^{-2} + \dots + UA_nU^{-1} t^{-n} \right) \mathbf{z} \tag{2}$$

Writing $\alpha^{(k)}_{ij}$ for the $(i,j)$ entry of $UA_kU^{-1}$, the $i^{\text{th}}$ equation of $(2)$, for $i=1,...,d$, has the following form $$\frac{d z_i }{d t}(t) = \lambda_i z_i(t)+ t^{-1}\sum_{j=1}^d \alpha^{(1)}_{ij}z_{j}(t) + \color{red}{t^{-2}}\sum_{j=1}^d \alpha^{(2)}_{ij}z_{j}(t) + \dots + \color{red}{t^{-n}}\sum_{j=1}^d \alpha^{(n)}_{ij}z_{j}(t) \tag{3} $$

For large $t$ (as $t\to +\infty$), the terms in red become small. By the same argument as in the one-dimensional case, we can ignore these terms, and then, for $i=1,...,d$, we have

$$(3) \implies \frac{d z_i }{d t}(t) \sim \lambda_i z_i(t) + \frac{1}{t}\sum_{j=1}^d \alpha^{(1)}_{ij}z_{j}(t) \qquad \forall i=1,...,d \tag{4}$$

Let's solve the system of equations $(4)$. Multiplying the $i^{\text{th}}$ equation by $e^{-\lambda_i t}$ gives $$\begin{align} (4) &\Longleftrightarrow \left(e^{-\lambda_i t} z_i(t) \right)' = \frac{1}{t}\sum_{j=1}^d \left(\alpha^{(1)}_{ij} e^{-\lambda_i t}z_{j}(t) \right) \qquad \forall i=1,...,d \end{align}$$

For the sake of simplicity, denote $p_i(t) = e^{-\lambda_i t} z_i(t)$ and $ \mathbf{p} =\left(p_1(t),...,p_d(t) \right)$. In matrix form the relation between $p_i(t)$ and $z_i(t)$ is $$\mathbf{p} = \Gamma^{-1}\mathbf{z}$$ with $\Gamma$ the diagonal matrix with entries $e^{\lambda_i t}$ for $i=1,...,d$.

In matrix form the new system reads $$\frac{d}{dt} \mathbf{p} = \frac{1}{t}\, \Gamma^{-1} U A_1 U^{-1} \Gamma \,\mathbf{p}.$$ Treating $\Gamma$ as commuting with $UA_1U^{-1}$ (exact when all the $\lambda_i$ coincide, and only heuristic otherwise, since it drops the cross factors $e^{(\lambda_j - \lambda_i)t}$), this becomes $$\frac{d}{dt} \mathbf{p} = \frac{1}{t} UA_1U^{-1} \mathbf{p} \tag{5}$$

Diagonalize the matrix $A_1$ (again assuming it is diagonalizable), with $H$ the diagonal matrix having diagonal elements $\beta_1,...,\beta_d$ and $V$ an invertible matrix such that

$$A_1 = V^{-1}HV$$

then $$\begin{align} (5) &\Longleftrightarrow \frac{d}{dt}\mathbf{p} = \frac{1}{t} UV^{-1}HVU^{-1} \mathbf{p} \\ &\Longleftrightarrow \frac{d}{dt}(VU^{-1}\mathbf{p} )= \frac{1}{t} H (VU^{-1} \mathbf{p}) \tag{6}\\ \end{align}$$

$(6)$ can be solved easily. Indeed, writing $\mathbf{q} = VU^{-1}\mathbf{p}$, each equation of $(6)$ has the following form $$\frac{d}{dt}q_i(t) = \frac{\beta_i}{t}q_i(t) \Longleftrightarrow q_i(t) = c_it^{\beta_i} \qquad \forall i=1,...,d $$
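As a quick symbolic check of this step (a sketch with SymPy; `beta` is a symbolic stand-in for a single exponent $\beta_i$), `dsolve` recovers the power-law solution:

```python
import sympy as sp

t = sp.symbols('t', positive=True)
beta = sp.symbols('beta', positive=True)
q = sp.Function('q')

# solve q'(t) = (beta / t) * q(t)
sol = sp.dsolve(sp.Eq(q(t).diff(t), beta / t * q(t)), q(t))
# sol.rhs should be a constant multiple of t**beta, i.e. q(t) = c * t^beta
```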

Then $$\begin{align} (6)&\Longleftrightarrow VU^{-1}\mathbf{p} = \begin{pmatrix} c_1t^{\beta_1} \\ c_2t^{\beta_2} \\ \vdots\\ c_dt^{\beta_d} \\ \end{pmatrix} \\ &\Longleftrightarrow VU^{-1}\Gamma^{-1}\mathbf{z} = \begin{pmatrix} c_1t^{\beta_1} \\ c_2t^{\beta_2} \\ \vdots\\ c_dt^{\beta_d} \\ \end{pmatrix} \\ &\Longleftrightarrow \Gamma^{-1}VU^{-1}\mathbf{z} = \begin{pmatrix} c_1t^{\beta_1} \\ c_2t^{\beta_2} \\ \vdots\\ c_dt^{\beta_d} \\ \end{pmatrix} \qquad \text{(formally commuting } \Gamma \text{ past } VU^{-1}\text{; like } (5)\text{, exact when all } \lambda_i \text{ coincide)}\\ &\Longleftrightarrow V\mathbf{y} = \Gamma\begin{pmatrix} c_1t^{\beta_1} \\ c_2t^{\beta_2} \\ \vdots\\ c_dt^{\beta_d} \\ \end{pmatrix} \\ &\Longleftrightarrow \color{red}{\mathbf{y} = V^{-1} \begin{pmatrix} c_1t^{\beta_1} e^{\lambda_1 t}\\ c_2t^{\beta_2}e^{\lambda_2 t} \\ \vdots\\ c_dt^{\beta_d}e^{\lambda_d t} \\ \end{pmatrix}} \tag{7} \\ \end{align}$$ with $V$ the matrix defined above by $A_1 = V^{-1}HV$.

$(7)$ is a more accurate description than the one in your Prop. 1. If we seek only the dominant term of each $y_i(t)$, we can proceed as follows.

$$\color{red}{y_i(t) = \sum_{j=1}^d v_{ij}c_jt^{\beta_j}e^{\lambda_j t}} \sim v_{i\color{blue}{g}(i)}c_{\color{blue}{g}(i)} t^{\beta_{\color{blue}{g}(i)}}e^{\lambda_{\color{blue}{g}(i)} t} \qquad \forall i=1,...,d $$ Here $v_{ij}$ denotes the $(i,j)$ entry of $V^{-1}$.

with $\color{blue}{g}(i) =\underset{j\,:\,v_{ij}c_j \ne 0}{\operatorname{argmax}} \ \lambda_j$.
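The index $g(i)$ is just an argmax over the columns $j$ with nonzero coefficient. A sketch with NumPy, on made-up values of $\lambda_j$ and $v_{ij}c_j$ (both arrays are hypothetical illustrations, not computed from the ODE):

```python
import numpy as np

lam = np.array([0.3, 1.7, 0.9])           # hypothetical eigenvalues lambda_j
coef = np.array([[0.0, 2.0, 1.0],          # hypothetical v_{ij} * c_j, row per i
                 [1.0, 0.0, 3.0]])

# replace excluded entries (where v_{ij} c_j == 0) by -inf so argmax skips them
masked = np.where(coef != 0, lam, -np.inf)
g = masked.argmax(axis=1)
# g[0] = 1 (lam[1] = 1.7 is allowed); g[1] = 2 (j = 1 excluded, max over {0, 2})
```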

Remark: if all the coefficients $v_{ij}, c_j$ for $1\le i,j \le d$ are nonzero, then for all $i=1,...,d$ we have $\color{blue}{g}(i) = \color{blue}{g} =\underset{j=1,...,d}{\operatorname{argmax}} \ \lambda_j$, i.e. the index $j$ for which $\lambda_j=\color{purple}{\lambda}$ (the Perron-Frobenius eigenvalue), and so we have $$y_i(t) \sim v_{i\color{blue}{g}}c_\color{blue}{g} t^{\beta_\color{blue}{g}}e^{\color{purple}{\lambda} t} \qquad \forall i=1,...,d $$
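A numerical sanity check of this final claim (a sketch: random entrywise-positive matrices stand in for $A_0$ and $A_1$, the system is truncated at $n=1$, and the observed exponential growth rate of every component is compared against the Perron-Frobenius eigenvalue $\lambda$ of $A_0$):

```python
import numpy as np
from scipy.integrate import solve_ivp

rng = np.random.default_rng(1)
d = 3
A0 = rng.random((d, d))   # entrywise positive => nonnegative and irreducible
A1 = rng.random((d, d))
lam = np.linalg.eigvals(A0).real.max()   # Perron-Frobenius eigenvalue of A0

def rhs(t, y):
    return (A0 + A1 / t) @ y

# integrate y' = (A0 + A1/t) y from t = 1 and measure growth between T1 and T2
T1, T2 = 60.0, 80.0
sol = solve_ivp(rhs, (1.0, T2), np.ones(d), t_eval=[T1, T2],
                rtol=1e-9, atol=1e-12)
rates = (np.log(sol.y[:, 1]) - np.log(sol.y[:, 0])) / (T2 - T1)
# each component's rate should be close to lam, up to the slowly varying
# t^{beta} correction (of order beta * log(T2/T1) / (T2 - T1))
```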

NN2
  • Thanks for your answer, but how would all of the diagonal elements of $D$ be equal to $\lambda$? Shouldn't only one of them be equal to $\lambda$ since the P.F. eigenvalue is unique? – Mr. G Nov 04 '22 at 22:40
  • @Mr.G I suppose that all the eigenvalues are equal to $\lambda$ so that we have the results in your Prop.1. However, if we relax this assumption and suppose that the eigenvalues may be different, we still have a closed-form solution (I can change the solution), and the result is not the one that you expected in Prop.1 – NN2 Nov 04 '22 at 22:59
  • @Mr.G It's done. The answer now covers the general form of the matrix $A_0$ with different eigenvalues. The Perron-Frobenius eigenvalue $\lambda$ ( $= \max_{i=1,...,d}\lambda_i$) is not needed as such (since we know the matrix $A_0$). – NN2 Nov 04 '22 at 23:40
  • If $g$ is anything other than the index corresponding to the P.F. eigenvalue then $\lim_{t \to \infty} \frac{\color{red}{y_i(t)}}{v_{ig} c_{g} t^{\beta_g} e^{\lambda_g t}}$ will either be $0$ or $\infty$ though won't it? – Mr. G Nov 05 '22 at 15:50
  • Well, it might be something else if $\lambda_g$ is complex, but the point is that the term with the factor $e^{\lambda t}$ where $\lambda$ is the P.F. should always dominate? – Mr. G Nov 05 '22 at 16:03
  • @Mr.G I just realized that there is a small typo at the last result. And after changing it, we have the expected result (if $v_{ij}c_j \ne 0$ (for $1\le i,j\le d$)): the dominating term $e^{\lambda t}$ appears in all $y_i(t)$ – NN2 Nov 05 '22 at 16:53
  • Perfect, thanks so much! – Mr. G Nov 05 '22 at 20:22
  • @Mr.G You're welcome! – NN2 Nov 05 '22 at 20:28