
In my ODE course at school, we were introduced to various techniques for solving differential equations, but without any proofs or explanations—just formulas and methods to memorize. I’m struggling to understand the reasoning behind one technique in particular: the "D operator" method.

Here’s what I know about this method:

  • Suppose we have a linear differential equation with constant coefficients of the form $y''+ay'+by=P(x)$ (it could be of any order, really), where $P(x)$ is a polynomial (the method also works for other functions). The D operator method involves rewriting the equation as follows:

$$y = \frac{P(x)}{D^2 + aD + b}$$

To find a particular solution we expand $\frac{1}{D^2+aD+b}$ as a power series in $D$ and apply it to $P(x)$ term by term, each $D^k$ meaning the $k$th derivative. For example, for $y-y''= x^6$ we have $$y =\frac{x^6 }{1-D^2}=(1+D^2 +D^4+D^6+D^8+\cdots)\,x^6 =x^6+30x^4+360x^2+720.$$

  • It also works for exponentials $e^{kx}$. Suppose we want to solve $ay''+by'+cy=e^{kx}$; we find the particular solution by substituting $D=k$, and this gives the particular solution unless the denominator is $0$, in which case we multiply the numerator by $x$ and differentiate the denominator. For example, $$y''-2y'-3y=4e^{5x}$$ $$y_p= \frac{4e^{5x}}{D^2-2D-3} = \frac{4e^{5x}}{5^2-2\cdot 5-3}=\frac{e^{5x}}{3}.$$ Here, however, I have no idea what is going on: why do we differentiate and multiply by $x$, and how do we even differentiate a differential operator? For example, $$y''-2y'-3y=4e^{3x}$$ $$y_p= \frac{4e^{3x}}{D^2-2D-3}=\frac{4e^{3x}}{3^2-2\cdot 3-3}\quad\text{(the denominator is $0$, so we differentiate it and multiply by $x$)}$$ $$y_p= \frac{4xe^{3x}}{2D-2}=\frac{4xe^{3x}}{2\cdot 3-2}=xe^{3x}.$$
  • It also works for $\sin(ax)$ and $\cos(ax)$; the only difference is that we substitute $D^2=-a^2$. It also works for $\sinh(ax)$ and $\cosh(ax)$, but with $D^2=a^2$. (A quick check of the worked examples follows this list.)
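
For what it's worth, the answers this recipe produces do check out when substituted back, so my question is about justification rather than correctness. Below is my own quick check of the worked examples above; the $\sin(2x)$ equation is one I made up to test the last rule, it is not from my notes.

$$(1-D^2)\left(x^6+30x^4+360x^2+720\right)=x^6+30x^4+360x^2+720-\left(30x^4+360x^2+720\right)=x^6$$

$$(D^2-2D-3)\,\tfrac{1}{3}e^{5x}=\tfrac{1}{3}\left(25-10-3\right)e^{5x}=4e^{5x}$$

$$(D^2-2D-3)\,xe^{3x}=(6+9x)e^{3x}-2(1+3x)e^{3x}-3xe^{3x}=4e^{3x}$$

$$y''+y=\sin(2x):\qquad y_p=\frac{\sin(2x)}{D^2+1}\Big|_{D^2=-4}=-\tfrac{1}{3}\sin(2x),\qquad \left(-\tfrac{1}{3}\sin(2x)\right)''+\left(-\tfrac{1}{3}\sin(2x)\right)=\tfrac{4}{3}\sin(2x)-\tfrac{1}{3}\sin(2x)=\sin(2x)$$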

My concern is how to rigorously prove, or at least justify, each of these steps.

  1. What exactly is an operator? How does it work in the context of differential equations, and how is it rigorously defined?
  2. How can we justify expanding $\frac{1}{1-D}$ as a geometric series? I know how to prove that $\sum z^n$ converges to $\frac{1}{1-z}$ for $|z|<1,\ z\in \mathbb{C}$, but why does the same expansion work with $D$? (See my partial attempt after this list.)
  3. How is the fraction $\frac{1}{1-D}$ defined for an operator like $D$? In particular, what does this fraction mean, and how do we interpret expressions like $D^2$? (I know it is the second derivative, but how does the $n$th power in the geometric series turn into the $n$th derivative?)
  4. How does "multiplying" the series by the function lead to taking derivatives?
  5. In the case of $f(x)= e^{ax}$, $\sin(ax)$, etc., how can all of the steps be proved rigorously?
  6. What does it mean to differentiate an expression in $D$? Why do we do it, and why do we multiply by $x$?
  7. Is there a field or course that studies these concepts rigorously? I’ve studied real analysis mainly from Rudin's books and haven’t found any mention of this operator approach or how to justify or prove any of these steps. Would functional analysis cover this, or is there another area of mathematics that treats operators in this way?
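
For what it's worth, the only step I can partly justify on my own is the polynomial case, assuming the series there is really a finite sum: multiplying out gives
$$(1-D^2)\left(1+D^2+D^4+\cdots+D^{2m}\right)=1-D^{2m+2},$$
and $D^{2m+2}P=0$ for every polynomial $P$ of degree at most $2m+1$, so on polynomials the "series" terminates and $1+D^2+\cdots+D^{2m}$ genuinely inverts $1-D^2$ there. But I don't see how this extends to $e^{ax}$, $\sin(ax)$, or to the multiply-by-$x$-and-differentiate rule, which is why I am asking the questions above.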
  • It seems to me like this can be analysed using (finite dimensional) linear algebra, choosing as a vector space $V$ the space of polynomials of degree $\leq$ the degree of $P$ and then computing the inverse of $D^2 + a D +b$ (as a linear map without the bogus fraction) which will lead to a similar "geometric series" expansion. Here the derivative is simply a linear map $D: V\to V$. – jd27 Nov 11 '24 at 19:15
  • (mostly links to things of possible help, not really an answer to your questions) Possibly useful are Geometric series of an operator and maybe this MSE answer. However, if you're learning basic methods for solving ordinary differential equations, then I recommend for now just focusing on learning the "algebraic manipulative aspects" for applying the method, and deal with the theoretical issues later (continued) – Dave L. Renfro Nov 11 '24 at 19:51
  • (later introductory applied treatment -- Operational Mathematics by Ruel V. Churchill; later introductory pure treatment -- Introductory Functional Analysis with Applications by Erwin Kreyszig). For now the discussions of operators in Ordinary Differential Equations by Gabriel Nagy seem especially nice -- search this .pdf document for "operator" to find these discussions of operators. – Dave L. Renfro Nov 11 '24 at 19:51
  • Earlier I intended to mention this MSE answer, but forgot after looking up the other stuff I mentioned. – Dave L. Renfro Nov 11 '24 at 20:29
  • $(1-D)\sum_{k=0}^n D^k=1-D^{n+1}$. If $D^{n+1}\rightarrow 0$, you find $(1-D)\sum_{k=0}^\infty D^k=1$, i.e. $\frac{1}{1-D}=\sum_{k=0}^\infty D^k$. – user408858 Nov 12 '24 at 01:16
  • @user408858: I suspect the OP knows how to obtain, and prove convergence of, geometric series. I believe the actual concern here (or one of them, to be more accurate) is what does $D^{n+1}\rightarrow 0$ mean when $D$ is the operation of taking the derivative of a function. (moments later) However, in looking more carefully at the questions, I guess the first part of #2 appears to say that obtaining and proving geometric series expansions is one of the concerns. If that's one of the concerns, then I think it's way too early for the OP to be worrying about ODEs. – Dave L. Renfro Nov 12 '24 at 11:12
  • @DaveL.Renfro To rephrase 2 more clearly: How can we justify expanding $\frac{1}{1-D}$ as a geometric series? I know how to prove that it converges for $|z|<1 , z\in \mathbb{C}$, but why does it work with $D$? – pie Nov 12 '24 at 11:53
  • @DaveL.Renfro My guess is, it takes it to zero. Depends on the function space you are looking at, but if you differentiate often enough, you find the derivative to be equal to zero? This is how I would have interpreted $\sum_{k=0}^\infty D^k$ or $D^{n+1}$ for very large $n$'s intuitively. – user408858 Nov 12 '24 at 23:39