
MathWorld states:

"In general, an $n^\text{th}$-order ODE has $n$ linearly independent solutions".

Are they referring to linear ODEs? I only know why it should be true for linear ODEs with constant coefficients, based on the following observations:

The solutions to the differential equation $a_0f+\dots +a_nf^{(n)}=0$, where $a_n\ne 0$, form a vector space $V$ (check).
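
As a quick check of the vector-space claim: if $f$ and $g$ are solutions and $\alpha,\beta\in\mathbb{R}$, then by linearity of differentiation
$$\sum_{k=0}^{n}a_k(\alpha f+\beta g)^{(k)}=\alpha\sum_{k=0}^{n}a_kf^{(k)}+\beta\sum_{k=0}^{n}a_kg^{(k)}=0,$$
so $\alpha f+\beta g\in V$.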

Let $f\in C^n(\mathbb{R})$ s.t. $a_0f+\dots +a_nf^{(n)}=0$, where $a_n\ne 0$.

Let $\vec{a}=(a_0,a_1,\dots,a_{n-1})$ and $\vec{f}=(f,f^{(1)},\dots,f^{(n-1)})$.

$f^{(n)}=-a_n^{-1}(a_0f+\dots +a_{n-1}f^{(n-1)})=-a_n^{-1}\vec{a}\cdot\vec{f}$ is differentiable, and for any fixed $\vec{b}\in\mathbb{R}^n$, the $m^\text{th}$ derivative of $\vec{b}\cdot\vec{f}$ is:

$$\vec{b}\begin{pmatrix}\vec{e}_2\\\vdots\\\vec{e}_n\\-a_n^{-1}\vec{a}\end{pmatrix}^m\cdot\vec{f}$$

Hence $f$ is infinitely differentiable. Moreover, the entries of the matrix power above are bounded by an exponential in $m$, say by $KC^m$ for constants $K,C>0$. On any closed interval $[-d,d]$, $\vec{f}$ is continuous and therefore bounded, so by Taylor's theorem with the Lagrange form of the remainder (for the expansion about $x=0$, bounding the remainder on the whole interval), the Taylor series for $f$ converges to $f$ on $[-d,d]$. Since $d$ is arbitrary, the Taylor series for $f$ about $x=0$ converges to $f$ on all of $\mathbb{R}$, i.e. $f$ is analytic.
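
Explicitly, with $M=\sup_{[-d,d]}\lVert\vec{f}\rVert$ and the bound $|f^{(m+1)}(\xi)|\le KC^{m+1}M$ on $[-d,d]$ (constants as above, after absorbing a dimensional factor into $K$), the Lagrange remainder satisfies
$$|R_m(x)|=\left|\frac{f^{(m+1)}(\xi)}{(m+1)!}x^{m+1}\right|\le\frac{KM(Cd)^{m+1}}{(m+1)!}\xrightarrow[m\to\infty]{}0,$$
just as in the standard proof that the Taylor series of $e^x$ converges to $e^x$.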

Now consider the linear transformation $L:V\to\mathbb{R}^n,\ f\mapsto (f(0),f^{(1)}(0),\dots,f^{(n-1)}(0))$. To prove surjectivity, use the differential equation to produce a Taylor series from the prescribed initial values and show that it converges to a solution. Injectivity is proven as follows:

If $L(f)=L(g)$ for some solutions $f,g$, then $f^{(k)}(0)=g^{(k)}(0)$ for $k=0,1,\dots,n-1$, and by the differential equation this also holds for all $k\in\mathbb{N}$. Since $f$ and $g$ are analytic and the Taylor series of a function is unique, $f=g$.
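
To spell out "this also holds for all $k\in\mathbb{N}$": differentiating the differential equation $k$ times and solving for the top derivative gives the recursion
$$f^{(n+k)}(0)=-a_n^{-1}\sum_{i=0}^{n-1}a_i\,f^{(i+k)}(0),\qquad k=0,1,2,\dots,$$
so the first $n$ derivatives at $0$ determine all the rest. Read in the other direction, the same recursion is what produces the Taylor coefficients of a solution with prescribed initial data in the surjectivity argument.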

Hence, $V$ has dimension $n$.

Is my proof correct?

Is the theorem true for general linear ODEs, and how do I prove it?

user1537366
  • The function $e^{-\frac{1}{x^2}}$ is bounded in a neighborhood of zero, continuous and even $C^\infty$ there and the Taylor coefficients at zero are bounded by an exponential (they are, in fact, zero). The Taylor series of that function therefore converges perfectly well to the zero function, and of course does not equal the function. I do not think you can avoid using the existence and uniqueness theorem for linear $n$-th order equations even in this simple case; it is that theorem which gives the precise dimension of the solution space. – guest Jan 03 '15 at 05:46
  • @guest I proved that the remainder term in Taylor's theorem approaches 0 as the number of terms goes to infinity... because it's some exponential over a factorial. It's similar to proving that the series for $e^x$ works. – user1537366 Jan 03 '15 at 05:49
  • Oh, sorry! I did not see that you bounded the remainder for an entire interval! You are absolutely right, but you need to make clear that you are bounding the remainder in an entire interval around your centerpoint. – guest Jan 03 '15 at 06:02
  • Alright, I've edited it to make it clearer. Thanks. – user1537366 Jan 03 '15 at 06:11
  • Great. Now concerning the surjectivity and injectivity statements. Note that, a priori, just because two Taylor series agree on their first $n$ terms does not mean they agree for all; you need to remind the reader that once you have the first $n$ terms, the rest are determined by the recursive formula you found for $f^{(n)}$, and that settles injectivity. But for surjectivity: how do you know that the $n$-tuples you are producing for various functions in $V$ are linearly independent? – guest Jan 03 '15 at 06:13
  • I don't need to show that for surjectivity. I just need to show that for every vector in $\mathbb{R}^n$, there is a solution to the differential equation. After we get the isomorphism, which shows that the vector spaces are "equivalent", we know the dimension immediately. Of course we need to check that the transformation is indeed a linear transformation, which I thought was clear. – user1537366 Jan 03 '15 at 06:18
  • Right, but where is that solution? If I give you the vector $(1,2,3)$ for an order $3$ system, how does your argument produce the solution? Remember that your proof starts with "let $f\in C^n$ that satisfies $a_0f+\cdots+a_nf^{(n)}=0$...". Under this existence assumption you correctly prove real analyticity, but the existence of a $C^n$ solution in the first place still needs to be proven. – guest Jan 03 '15 at 06:22
  • Yes, for surjectivity, my intention was to construct the power series from the coefficients one by one using the recursive relation, show that the resulting power series converges, and show that it is a solution to the differential equation, but that turns out to be similar in nature to the previous arguments. That's why I left it as a brief description there. – user1537366 Jan 03 '15 at 06:34
  • Great, that's it. For any given $n$-tuple you prescribe Taylor coefficients and then fill in the rest by recursion. Then proceed as before to get convergence, and you get enough functions in the first place. But you definitely need to fill that in. – guest Jan 03 '15 at 06:36
  • Yeah, that's if it was a real proof I had to write out. I'm doing this for fun =); I'm not taking any ODE course anyway. So perhaps I should remove my request for proof verification...... but I did want to check if my idea was right. – user1537366 Jan 03 '15 at 06:39

1 Answer


Your proof is now good. For the homogeneous non-constant-coefficient equation $$a_n(t)x^{(n)}+\cdots +a_0(t)x=0$$ the solutions again form a vector space; nothing is different in this respect. How does one find its dimension (and, in particular, prove that the dimension is not zero)?

One cannonball way to proceed is to rewrite it as a first-order vector-valued equation by introducing the variables $x_1=x,\ x_2=x',\dots,x_n=x^{(n-1)}$. This furnishes the first-order equation $$X'(t)=A(t)X(t).$$ Here the function $A$ is assumed nicely behaved from some interval $(a,b)$ to the Banach space $\mathbb{R}^{n\times n}$ of matrices: take $a_n(t)\neq 0$ on $(a,b)$ to avoid the degenerate locus, and assume the ratios $a_i(t)/a_n(t)$ are Lipschitz on the interval (these ratios are exactly the nontrivial entries of $A$). Then we can form any IVP we want, $$X'=A(t)X(t),\quad X(0)=X_0,$$ and the Picard-Lindelöf theorem furnishes a unique solution with the initial condition $X_0$ (if one tries to simplify the proof of Picard-Lindelöf for linear systems, one still ends up having to perform the crucial approximation process and prove the uniform convergence, which is the big deal in Picard-Lindelöf itself).
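
For concreteness, the substitution produces the companion matrix (the $t$-dependent analogue of the matrix in the question):
$$A(t)=\begin{pmatrix}0&1&0&\cdots&0\\0&0&1&\cdots&0\\\vdots&&&\ddots&\vdots\\0&0&0&\cdots&1\\-\frac{a_0(t)}{a_n(t)}&-\frac{a_1(t)}{a_n(t)}&-\frac{a_2(t)}{a_n(t)}&\cdots&-\frac{a_{n-1}(t)}{a_n(t)}\end{pmatrix}.$$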

But now the fact that you can find solutions corresponding to $n$ linearly independent choices of $X_0$ implies that the vector space of solutions is exactly $n$-dimensional by the theorem on the Wronskian (the Wronskian of $n$ solutions is either identically zero or never zero; if the initial vectors $X_0$ are linearly independent, the Wronskian is nonzero somewhere, so it is never zero on $(a,b)$ and thus the solutions are independent).
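
The "identically zero or never zero" dichotomy comes from Liouville's formula (also called Abel's identity): if $W(t)$ is the Wronskian of $n$ solutions of $X'=A(t)X$, then
$$W(t)=W(t_0)\exp\left(\int_{t_0}^{t}\operatorname{tr}A(s)\,ds\right),$$
and since the exponential factor never vanishes, $W$ is either identically zero or never zero on $(a,b)$.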

guest
  • Just important to note that we can’t make a similar statement for a nonlinear ODE. – Atom Nov 30 '19 at 17:22
  • @Atom yes, so MathWorld should be corrected (even though it is talking about linear ODEs in the preceding paragraphs, it's still incorrect because it doesn't state that the context is being fixed to linear ODEs). – Joe Stephen Jan 17 '22 at 18:41