Consider a scalar ODE of the form $$\dot{x}(t) + f\left(x(t)\right) = 0 \label{1}\tag{1}$$ where $f \colon \mathbb R \to \mathbb R$ is a smooth function admitting a unique root $x^*$ such that $f(x^*) = 0$. Linearizing \eqref{1} is straightforward: inserting $$ x(t) = x^* + \varepsilon g(t) \label{2}\tag{2}$$ into \eqref{1}, where $0 < |\varepsilon| \ll 1$, and neglecting $\mathcal{O}(\varepsilon^2)$ terms yields the equation $$ \dot{g}(t) + f'(x^*) g(t) = 0 \label{3} \tag{3}.$$

Now suppose that we want to perturb the ODE \eqref{1} by some external signal modelled by another time-dependent function $\sigma(t)$ (whose precise expression is not available) such that $\sigma(t) \to 0$ as $ t \to \infty$ (we may also impose the condition that $|\sigma(t)|$ is bounded by some exponentially decaying function $\mathrm{e}^{-\lambda t}$), i.e., we consider the perturbed ODE $$\dot{x}(t) + f\left(x(t)\right) + \sigma(t) = 0 \label{4}\tag{4}$$ such that $x^*$ remains a long-time equilibrium state. How can we "linearize" equation \eqref{4}? Apparently, employing the ansatz \eqref{2} will not give us an equation for $g$ at order $\varepsilon$...
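For concreteness, here is a minimal numerical sketch of the unperturbed linearization \eqref{1}–\eqref{3}, assuming the hypothetical choice $f(x) = \mathrm{e}^x - 1$ (so $x^* = 0$ and $f'(x^*) = 1$); it checks that $x^* + \varepsilon g(t)$ tracks $x(t)$ up to $\mathcal{O}(\varepsilon^2)$:

```python
# A minimal sketch of the linearization (1) -> (3), assuming the hypothetical
# choice f(x) = exp(x) - 1, which has the unique root x* = 0 with f'(x*) = 1.
import numpy as np
from scipy.integrate import solve_ivp

eps = 1e-2                      # perturbation strength, 0 < eps << 1
x_star = 0.0                    # unique root of f
f = lambda x: np.exp(x) - 1.0
fp = 1.0                        # f'(x*)

T = 5.0
t = np.linspace(0.0, T, 200)

# Nonlinear ODE (1): x' = -f(x), started eps-close to x*
x = solve_ivp(lambda s, y: -f(y), (0.0, T), [x_star + eps],
              t_eval=t, rtol=1e-10, atol=1e-12).y[0]

# Linearized ODE (3): g' = -f'(x*) g, with g(0) = 1
g = solve_ivp(lambda s, y: -fp * y, (0.0, T), [1.0],
              t_eval=t, rtol=1e-10, atol=1e-12).y[0]

# The reconstruction x* + eps*g(t) should agree with x(t) up to O(eps^2)
err = np.max(np.abs(x - (x_star + eps * g)))
print(f"max |x(t) - (x* + eps*g(t))| = {err:.2e}   (eps^2 = {eps**2:.0e})")
```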
-
Since $\sigma$ doesn't depend on $x$, the variational ODE remains the same (almost by definition). But in this case you should be linearizing about an actual solution of the original ODE, which, for nontrivial $\sigma$, cannot be the constant $x^*$ (I guess you could linearize about the constant $x^*$, but then you’d have to interpret the solution of the variational equation as telling you information in an asymptotic sense). – peek-a-boo Aug 08 '23 at 20:24
-
@peek-a-boo can you elaborate your comment into a complete answer? I am not sure why you mention the term "variational ODE" – Fei Cao Aug 08 '23 at 22:32
-
The variational ODE is the name for your “linearized ODE” (3). More properly, (3) is called the (linear) variational ODE associated to (1) along the (constant) solution $x^*$. – peek-a-boo Aug 08 '23 at 23:10
-
I don’t know how much help it’s going to be but here is a PhySE answer of mine about linearizing. There I happened to discuss the autonomous case, but the discussion extends almost verbatim to the non-autonomous case (simply replace the full Frechet derivative $D$ by the derivative in the spatial variables only). – peek-a-boo Aug 08 '23 at 23:49
-
@peek-a-boo Thank you for your comment, although I am not sure whether that is super-related to what I am trying to ask. Regarding your very first comment, I am trying to linearize about a long-time equilibrium of the ODE (4), which contains at least the point/state $x^*$. – Fei Cao Aug 09 '23 at 02:26
-
Are you aware of the methods used in Perturbation Theory? – Joako Aug 10 '23 at 04:52
-
@Joako Yes. But I am not sure how it can be applied to the current settings... – Fei Cao Aug 10 '23 at 16:07
-
This looks like a boundary value problem. There is a boundary at $t^* = \ln(1/\epsilon)/\lambda$. For $t \gg t^*$, your equation (3) holds up to order $\epsilon$. But for $t < t^*$, you really need to know what $\sigma(t)$ looks like. – stochastic Aug 11 '23 at 12:52
-
@stochastic Can you elaborate your comment into a more formal answer? Thank you! – Fei Cao Aug 11 '23 at 14:50
-
@Derivative The blow up problem you mentioned is a separate question (related to stability of the long-time equilibrium). Here I am specifically interested in how to "linearize" the equation (for instance the example you wrote down)... – Fei Cao Aug 11 '23 at 17:26
-
@Derivative I still didn't get your point. For the case $f(x) = x^2$, linearizing around $x^* = 0$ gives $\dot{g}(t) = 0$ (see equation (3) in the post) which means that $g$ is a constant so it is not converging to zero. As a result, $x^* = 0$ is not an attracting point for the nonlinear ODE. – Fei Cao Aug 12 '23 at 15:23
-
Consider a variable change to $u(t) = x(t) + \int_0^t\sigma(\tau)d\tau$ – Cesareo Aug 13 '23 at 14:16
-
@Cesareo Then the ODE (4) reads as $\dot{u}(t) + f\left(u(t) - \int_0^t \sigma(\tau)\,\mathrm{d}\tau\right) = 0$ and still, I do not know how it helps... – Fei Cao Aug 13 '23 at 14:56
-
Conceptually. How different is your linearization around $x^*$ from $\dot x + f(x^*) + f'(x^*)(x - x^*) + \sigma(t) = 0$? – Cesareo Aug 14 '23 at 16:15
-
@Cesareo I am not sure whether I understood your question.... – Fei Cao Aug 14 '23 at 17:53
-
Why not consider in the variation $\dot x(t) + f(x(t))+\epsilon \sigma(t)=0$ instead? – Cesareo Aug 14 '23 at 20:28
-
@Cesareo Typically, the $\varepsilon$ is introduced only at the linearized level (it measures the strength of the perturbation of $x(t)$ around $x^*$) and does not appear in the original (nonlinear) equation before we carry out the linearization... – Fei Cao Aug 14 '23 at 22:50
1 Answer
This is not an answer, but an elaboration on my comment as requested by the OP.
The linearized version of equation (4), obtained by inserting the ansatz (2), is $$ \dot g(t) + f'(x^*)\,g(t) + \frac1\epsilon\sigma(t) = 0. $$ This equation holds as long as $g(t)$ is $\mathcal{O}(1)$, which is a problem because of the $1/\epsilon$ term in the equation. Let us write $\sigma(t) = e^{-\lambda t}\mu(t)$ for some $\mu(t)$ with $|\mu(t)|<1$. We can rewrite the above equation as $$ \dot g(t) + f'(x^*)\,g(t) + \frac{e^{-\lambda t}}{\epsilon}\mu(t) = 0. $$ There is a time $t^* = \ln(1/\epsilon)/\lambda$ after which the prefactor $e^{-\lambda t}/\epsilon$ becomes smaller than one. For $t>t^*$, this equation provides a perturbative solution to the original equation up to $\mathcal{O}(\epsilon)$.
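Here is a minimal numerical sketch of this crossover, under the hypothetical choices $f(x) = \mathrm{e}^x - 1$ (so $x^* = 0$, $f'(x^*) = 1$), $\sigma(t) = \mathrm{e}^{-\lambda t}\cos(t)$ with $\lambda = 1$, and $\epsilon = 10^{-2}$; the mismatch between the full solution of (4) and $x^* + \epsilon g(t)$ is much larger before $t^*$ than after:

```python
# Sketch of the crossover time t* = ln(1/eps)/lambda, assuming the hypothetical
# choices f(x) = exp(x) - 1 (x* = 0, f'(x*) = 1) and sigma(t) = exp(-lam*t)*cos(t).
import numpy as np
from scipy.integrate import solve_ivp

eps, lam, x_star, fp = 1e-2, 1.0, 0.0, 1.0     # fp = f'(x*)
f = lambda x: np.exp(x) - 1.0
sigma = lambda t: np.exp(-lam * t) * np.cos(t)
t_star = np.log(1.0 / eps) / lam               # crossover time from the answer
T = 3.0 * t_star
t = np.linspace(0.0, T, 1000)

# Perturbed nonlinear ODE (4): x' = -f(x) - sigma(t), started eps-close to x*
x = solve_ivp(lambda s, y: -f(y) - sigma(s), (0.0, T), [x_star + eps],
              t_eval=t, rtol=1e-9, atol=1e-12).y[0]

# Linearized equation above: g' = -f'(x*) g - sigma(t)/eps, with g(0) = 1
g = solve_ivp(lambda s, y: -fp * y - sigma(s) / eps, (0.0, T), [1.0],
              t_eval=t, rtol=1e-9, atol=1e-12).y[0]

err = np.abs(x - (x_star + eps * g))
print("max |x - (x* + eps*g)| for t < t*:", err[t < t_star].max())
print("max |x - (x* + eps*g)| for t > t*:", err[t > t_star].max())
```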
There is no guarantee that a perturbative solution exists at early times. Let me try to explain what could go wrong. Near $x^*$, the term $f(x(t))$ is small in the unperturbed equation. If the $\sigma(t)$ term is not equally small, the equation is approximately $\dot x(t) = -\sigma(t)$. Now imagine that $\sigma(t)$ has not yet decayed to something small near $t=0$. In this case, even if we start near $x^*$, $x(t)$ moves away from $x^*$ at a macroscopic rate, and therefore no perturbative solution around $x^*$ can exist there.
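A short sketch of this failure mode, with the same hypothetical $f$ and a $\sigma(t)$ of size one near $t = 0$: a solution started $\epsilon$-close to $x^*$ drifts an $\mathcal{O}(1)$ distance away well before $t^*$, so no expansion $x = x^* + \epsilon g$ with $g = \mathcal{O}(1)$ can hold at early times.

```python
# Early-time failure: with sigma(0) = O(1), the solution of (4) leaves an
# eps-neighbourhood of x*. Hypothetical choices: f(x) = exp(x) - 1, x* = 0.
import numpy as np
from scipy.integrate import solve_ivp

eps, lam, x_star = 1e-2, 1.0, 0.0
f = lambda x: np.exp(x) - 1.0
sigma = lambda t: np.exp(-lam * t) * np.cos(t)   # |sigma(0)| = 1, not O(eps)
t_star = np.log(1.0 / eps) / lam

t = np.linspace(0.0, t_star, 500)
x = solve_ivp(lambda s, y: -f(y) - sigma(s), (0.0, t_star),
              [x_star + eps], t_eval=t).y[0]

# The deviation is O(1), i.e. g(t) = (x(t) - x*)/eps would have to be ~1/eps.
dev = np.abs(x - x_star).max()
print(f"max |x(t) - x*| for t < t*: {dev:.3f}   (vs eps = {eps})")
```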