10

Let $k$ and $a$ be positive constants, and $y_a$ a non-negative constant. Consider the following ordinary differential equation $$(1+[y'(x)]^2)\cdot y(x) = k, \qquad x \in (0, a),$$ over such functions $y\in Y$ where $$Y = \left\{\,y\in \mathrm{C}[0, a] \cap \mathrm{C}^1(0, a) \mid y \geq 0, y(0) = 0, y(a) = y_a \,\right\}.$$

Does this equation have at most one solution?

It is the differential equation from the brachistochrone problem, positive $y$-axis pointing down. A cycloid (with certain parameter range and radius) is a solution, but the question is about uniqueness.
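For a quick numerical sanity check of the claim that a cycloid solves the equation: along $x(t) = \frac{k}{2}(t-\sin t)$, $y(t) = \frac{k}{2}(1-\cos t)$ one has $y' = \frac{\mathrm{d}y/\mathrm{d}t}{\mathrm{d}x/\mathrm{d}t} = \frac{\sin t}{1-\cos t}$, and the identity $(1+y'^2)\,y = k$ can be checked pointwise. A minimal Python sketch (the function name is mine):

```python
import math

def cycloid_residual(k, n=200):
    """Largest value of |(1 + y'^2) * y - k| sampled along the cycloid
    x(t) = (k/2)(t - sin t), y(t) = (k/2)(1 - cos t), where
    y' = dy/dx = sin t / (1 - cos t)."""
    worst = 0.0
    for i in range(1, n):
        t = 2 * math.pi * i / n      # skip the cusps t = 0 and t = 2*pi
        y = (k / 2) * (1 - math.cos(t))
        yp = math.sin(t) / (1 - math.cos(t))
        worst = max(worst, abs((1 + yp**2) * y - k))
    return worst
```

The residual is zero up to floating-point rounding, for any $k > 0$, reflecting the exact identity $(1+\cot^2\frac{t}{2})\,k\sin^2\frac{t}{2} = k$.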

The usual Picard–Lindelöf (Cauchy–Lipschitz) uniqueness theorem does not apply, as the ODE is not in the standard form $y'(x) = f(x, y(x))$. If we brought it into such a form using the square root, uniqueness would not be apparent to me, since we could have positive and negative square roots and the sign could be switched (and is, in fact, switched once for some "correct" cycloid solutions as well).

Non-negativity $y \geq 0$ does not seem to be of sufficient help. It rules out a negative derivative right at the start, for example, but I cannot see uniqueness. Even over the subset $$Y_1 := \left\{\, y \in Y \mid \int\limits_{0}^{a}{\sqrt{\frac{1+{[y'(x)]}^2}{y(x)}}\,\mathrm{d}x} < +\infty\,\right\}$$ it seems multiple solutions may exist. Here the (improper Riemann) integral finiteness is also a condition from the brachistochrone problem.

Are there really multiple solutions (over $Y$ or $Y_1$)?

  • If the solution is actually unique and I missed something, what would be an argument for uniqueness?
  • If there are multiple solutions, what additional constraints (subset of $Y_1$), "natural" to the brachistochrone problem, could be looked at to guarantee uniqueness?

As a solution $y \in Y$ is bounded above by $k$, there are no solutions if $y_a > k$, so trivially at most one. Therefore, let us assume throughout that $y_a \leq k$.

  • I just deleted my answer; I thought we could construct counter-examples by inserting a constant plateau in a brachistochrone, but this does not work out (we get "unintended" solutions, but they don't have the right value of $k$ or don't end up at the right coordinates). I believe the solution is in fact unique, but I don't see how to prove it. One take-away is that if you fix $y_a$ and let $a \to \infty$, you seem to get cases where the unique solution has a long constant plateau. – Martin Dec 17 '24 at 19:56
  • By applying the change of variable $z(x)=1/y(x)$ I get a Lipschitz ODE, so I believe uniqueness holds (I added the comment to maybe give some ideas on how to analyze it) – Joako Dec 17 '24 at 23:11
  • @Martin I have now posted a second attempt at the answer. – Linear Christmas Dec 18 '24 at 13:57

4 Answers

3

The solution below actually assumes a particular $k$ (see comments) which is determined by $a$ and $y_a$ for a cycloid to exist. There may be other problems with uniqueness as well, as a modification that I had in mind fails (see comments).

I posted a new answer but keeping this answer temporarily as it has been referenced in other answers.

The solution of $(1+[y'(x)]^2)\cdot y(x) = k$, $x \in (0, a)$, is unique over $Y$

Let $y \in Y$ be a solution. Dividing by $1 + y'^2$, we see that $y$ is bounded above by $k$ in $(0, a)$ and, via continuity, also on $[0, a]$. So assume $y_a \leq k$. Furthermore, any stationary point $s \in (0, a)$ with $y'(s) = 0$ is a maximum, since then $y(s) = k$. Notice also that because $y(x) \to 0$ as $x \to 0^+$, it must be that $y'(x) \to +\infty$ in the same limit. By continuity of $y'$, the derivative starts out strictly positive. Finally, $y(x) > 0$ when $x \in (0, a)$, as $k > 0$ and $y \geq 0$.

There are two possible cases for $y$, depending on parameters $a$, $y_a$, $k$. Either there are no stationary points or there is at least one stationary point.

  1. No stationary points in $(0, a)$ exist

    Since $y'$ starts out positive, plus there are no stationary points and $y'$ is continuous over $(0, a)$, we have that $$y'(x) = \sqrt{\frac{k}{y(x)} - 1} > 0,\qquad x \in (0, a).$$ The standard uniqueness theorem applies (with the modification that the solution continuously extends to the boundary). Lipschitzness comes from the fact that for $f(x, y) := \sqrt{\frac{k}{y} - 1}$ we have $$\frac{\partial}{\partial y}f(x, y) = -\frac{k}{2y^2} \frac{1}{\sqrt{k/y-1}}, \qquad y \in (0, k),$$ which is defined and continuous.

  2. At least one stationary point exists in $(0, a)$

    By continuity of $y$ and $y(0) = 0 \neq k$, there exists a smallest argmax of $y$ over $(0, a)$; denote it by $s_0$. Then there are no stationary points over $(0, s_0)$. Thus, by the argument in case 1, the solution must be unique over $[0, s_0]$. One solution is the cycloid $c$ for the original IVP, and so $y|_{[0, s_0]}$ is a restriction of this cycloid, $c|_{[0, s_0]}$.

    If $y_a = k$, then $s_0 = a$ – otherwise $c(a) = y_a$ is impossible – and so we are done. So let $y_a < k$. By continuity, there is now also a largest argmax of $y$ over $(0, a)$, denote it by $s_\infty$. Again there are no stationary points over $(s_\infty, a)$, so by the familiar argument $y|_{[s_\infty, a]} = c|_{[s_\infty, a]}$. Since $c(s_\infty) = y(s_\infty) = k$, we must have $s_0 = s_\infty$. So $y|_{[0, s_0]} = c|_{[0, s_0]}$ and $y|_{[s_0, a]} = c|_{[s_0, a]}$, therefore $y = c$.

This completes the proof. $\blacksquare$
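The Lipschitz computation in case 1 lends itself to a quick numerical sanity check. The sketch below (illustrative, with names of my choosing) compares the claimed $\frac{\partial f}{\partial y}$ against a centered finite difference on a grid strictly inside $(0, k)$:

```python
import math

def f(y, k):
    """Right-hand side f(y) = sqrt(k/y - 1), defined for 0 < y < k."""
    return math.sqrt(k / y - 1)

def dfdy(y, k):
    """Claimed partial derivative: -(k / (2 y^2)) / sqrt(k/y - 1)."""
    return -(k / (2 * y * y)) / math.sqrt(k / y - 1)

def max_mismatch(k=1.0, h=1e-6):
    """Largest gap between the formula and a centered finite difference
    on a grid strictly inside (0, k), where f is smooth."""
    worst = 0.0
    for i in range(1, 20):
        y = k * i / 20
        fd = (f(y + h, k) - f(y - h, k)) / (2 * h)
        worst = max(worst, abs(fd - dfdy(y, k)))
    return worst
```

The mismatch is at the level of finite-difference error, consistent with the formula; note that both $f$ and its derivative blow up as $y \to 0^+$ or $y \to k^-$, which is exactly why the Lipschitz argument only works on compact subsets of $(0, k)$.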

  • Unfortunately I don't think this is correct, as it seems that you conclude the only possible solutions are portions of an actual cycloids, whereas it looks to me that I can choose parameters such that a cycloid+constant plateau is a (unique, I think) solution. I think part 1 of the proof is perfectly clear. The first paragraph of part 2 is also convincing. You lose me when you argue that $s_0 = a$ (I think we can have $s_0 < a$ even when $y_a = k$, precisely by inserting a constant plateau). Also later, I similarly disagree with the argument showing $s_0 = s_{\infty}$. – Martin Dec 19 '24 at 09:25
  • Actually now I am not sure about part 1. Cauchy-Lipschitz does not apply, and I don't understand exactly what "modifications" make it apply here. – Martin Dec 19 '24 at 10:55
  • @Martin You are right again, it seems I actually assumed things about $k$. In my proof $k = \frac{2a}{\tilde{\theta} - \sin{\tilde{\theta}}}$ where $\tilde{\theta} = \beta^{-1}(\frac{a}{y_a})$ and where $\beta(\theta) = \frac{\theta - \sin{\theta}}{1-\cos{\theta}}$, $\theta \in (0, 2\pi)$, $\beta(2\pi)=+\infty$. If $k$ is something other than this, there is no cycloid solution to begin with... A "save" would be if there is a solution $c$ which has at most one stationary point like the cycloid. But if you are right about constant plateau, then this is false. Thanks for your continued help! – Linear Christmas Dec 19 '24 at 11:07
  • No problem. I find this question fun to think about! – Martin Dec 19 '24 at 11:09
  • @Martin About your second comment, the argument I had in mind was as follows. Assume there is a solution $u \in Y$. Pick $x_0 \in (0, a)$. Then a modified IVP with the same differential equation but with $y(x_0) = u(x_0)$ will have a unique solution in some small neighbourhood $(x_0-h, x_0 + h)$ around $x_0$. But note that a restriction of $u$ is a solution, so here $y = u$. You can extend $y$ uniquely as close to $0$ and $a$ as you wish by looking at other IVPs (for $(x_0 - h, u(x_0-h))$, for example), and it will again agree with $u$. (1/2) – Linear Christmas Dec 19 '24 at 11:37
  • @Martin I thought that if we picked $x_0$ close enough to $0$, we could glue the solutions together for a unique maximal solution due to continuity at $0$, though I see now this might not immediately be the case... So you're thrice right. (2/2) – Linear Christmas Dec 19 '24 at 11:52
  • @Martin Fix for the uniqueness when no stationary points. Let $u, v \in Y$ be two distinct solutions without stationary points. Then they can't intersect inside $(0, a)$. Otherwise if they intersected at $x_0$, then IVP $(x_0, u(x_0)) = (x_0, v(x_0))$ is indeed the same for them, and so a unique solution around $x_0$ which should extend uniquely to boundary. So they cannot intersect if they are different. So by the ODE, the derivatives can't intersect either which means one derivative is strictly larger than the other. Such functions intersect at most once, hence one boundary condition fails. – Linear Christmas Dec 19 '24 at 14:08
  • I think this is a nice argument. However, correct me if I'm wrong, it is not fully conclusive because it needs to assume that solutions that we don't yet know about, must have no stationary points, is that right? If I'm missing something, and your argument does work, then it should replace my longer argument in part 1 of the proof I just posted. – Martin Dec 19 '24 at 18:52
  • 1
    @Martin Yes, this is an argument to rule out two solutions without any stationary points. Two other cases are possible: two solutions which both have stationary points, and one solution with and one without stationary points. These would need further arguments. – Linear Christmas Dec 19 '24 at 19:12
  • @Martin I posted a third answer extending the above fix. If the third answer works, I will delete this second one. – Linear Christmas Dec 20 '24 at 02:01
  • nice answer .... +1 – TShiong Dec 20 '24 at 02:28
0

I think the proof given by OP is almost correct, and with suitable modifications, would give uniqueness, but without concluding that all solutions are portions of cycloids. Rather, it shows that solutions are portions of cycloids, possibly spliced with a constant plateau. Basically, this is because (in OP's notation) we can, in fact, have $s_0 < a$ when $y_a = k$, and $s_\infty > s_0$ when $y_a < k$. Then $y$ must be constant on $[s_0,a]$, or $[s_0,s_\infty]$, respectively, as shown by the following lemma:

Lemma: If $s,t$ are such that $0 < s < t \leq a$ and $y'(s) = y'(t) = 0$, then $y|_{[s,t]} = k$.

Proof: Since $y'(s) = y'(t) = 0$, we have $y(s) = y(t) = k$, and so $M := \max_{x \in [s,t]} y(x) = k$, since $y(x) \leq k$ for all $x \in [0,a]$. On the other hand let $m = \min_{x \in [s,t]} y(x)$, attained at some point $\xi \in [s,t]$ since $y$ is continuous on $[s,t]$. If $\xi = s$ or $\xi = t$, then $m = k$ and we are done. Otherwise, $\xi \in (s,t)$ and so $y'(\xi) = 0$. We conclude that $m = y(\xi) = k$ and we are done.

Uniqueness follows from this and part 1 of OP's proof. Edit: I have clarified this point in my second answer; the argument below is still too imprecise.

  1. One determines uniquely the beginning portion via the Cauchy-Lipschitz theorem as in Part 1 of OP's answer. This gives a unique segment on $[0,s_0]$, where $s_0$ can be defined in terms of $a$, $y_a$ and $k$ (with possibly $s_0 = a$), and $y'(s_0) = 0$ if $s_0 < a$.
  2. If $s_0 < a$, by the same argument applied to $x \mapsto y(a - x)$, one can also determine uniquely the end segment on $[s_\infty,a]$.
  3. Finally, it follows from the above lemma that the solution is constant on $[s_0,s_\infty]$.

Here is a set of parameters where the unique solution has this feature. Let $a = \frac{3\pi + 2}{4} + 1$, let $y_a = \frac12$, and let $k = 1$. This is set up so that the constant plateau has length $1$. We want to find $y:[0,a] \to \mathbb{R}_+$, continuous on $[0,a]$ and continuously differentiable on $(0,a)$, such that $$\left\{\begin{array}{rclll}(1 + y'(x)^2)y(x) &=& 1 && \text{for all } x \in (0,a) \\ y(0) &=& 0\\ y(a) &=& \frac12 \end{array}\right.$$ The unique solution coincides with the cycloid arc $x \mapsto c(x)$ on $[0,\pi/2]$, and with a reflected, shifted copy of that arc on $[\pi/2 + 1,a]$, where $c$ is defined by the parametric equations $$ c(x(t)) = y(t) \quad \text{where}\quad \left\{\begin{array}{rcl} x(t) &=& \frac{1}{2}(t - \sin(t))\\ y(t) &=& \frac{1}{2}(1 - \cos(t)) \end{array}\right. $$
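These particular values can be checked numerically. The sketch below (illustrative Python; the constants mirror the parameters above, helper names are mine) assembles the claimed solution (cycloid arc, plateau at height $1$, reflected arc) by inverting $x(t) = \frac12(t - \sin t)$ with bisection, then verifies the boundary values and the ODE residual on each piece:

```python
import math

K = 1.0
A = (3 * math.pi + 2) / 4 + 1        # right endpoint from the example
YA = 0.5
S0 = math.pi * K / 2                 # apex abscissa of the cycloid arc
SSTAR = (K / 2) * (math.pi / 2 - 1)  # abscissa with c(s*) = 1/2 (t = pi/2)
SINF = A - (S0 - SSTAR)              # = pi/2 + 1: right end of the plateau

def cycloid(x):
    """Evaluate the cycloid at abscissa x in [0, S0] by inverting
    x(t) = (K/2)(t - sin t) with bisection on t in [0, pi]."""
    lo, hi = 0.0, math.pi
    for _ in range(200):
        mid = (lo + hi) / 2
        if (K / 2) * (mid - math.sin(mid)) < x:
            lo = mid
        else:
            hi = mid
    return (K / 2) * (1 - math.cos((lo + hi) / 2))

def solution(x):
    """Claimed unique solution: cycloid arc, plateau at height K,
    then the same arc reflected and shifted."""
    if x <= S0:
        return cycloid(x)
    if x <= SINF:
        return K
    return cycloid(SSTAR + A - x)

def residual(x, h=1e-5):
    """(1 + y'^2) y - K with y' approximated by a central difference."""
    yp = (solution(x + h) - solution(x - h)) / (2 * h)
    return (1 + yp**2) * solution(x) - K
```

Evaluating `residual` at points inside each of the three pieces returns values at the level of finite-difference error, and `solution(0.0)`, `solution(A)` match the boundary conditions $0$ and $\frac12$.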

Below is a plot of this solution *(figure omitted)*.

Martin
  • I think the part involving the Cauchy-Lipschitz theorem is still a bit dodgy. I'll try to fix this later. – Martin Dec 19 '24 at 10:56
0

The following should work : it gives uniqueness in all cases, and existence under a simple condition. The unique solution often has a constant plateau as highlighted in my previous answer.

Statement

Let $a,k > 0$ and $y_a \in [0,k]$, and denote by $(P)$ the boundary value problem $$ (1 + y'^2) y = k, \quad y(0) = 0,\quad y(a) = y_a. $$

Let $\mathcal{B}_k$ be the brachistochrone curve defined parametrically by $$ \mathcal{B}_k(x(t)) = y(t) \quad \text{where} \quad \left\{\begin{array}{rcl} x(t) &=& \frac{k}{2} \left(t - \sin(t)\right)\\ y(t)&=& \frac{k}{2} \left(1 - \cos(t)\right)\end{array} \right. $$ This curve satisfies the differential equation $$ (1 + \mathcal{B}_k'^2)\mathcal{B}_k = k $$ and $\mathcal{B}_k(0) = 0$. Let $s_0 := x(\pi) = \frac{k\pi}{2}$. Then

A) If $a \geq s_0$, there exists a unique solution to the problem $(P)$. This solution agrees with $\mathcal{B}_k$ on $[0,s_0]$. Moreover, there is $s_\infty \in (s_0,a]$ such that $y$ agrees with $x\mapsto \mathcal{B}_k(x + s_0 - s_\infty)$ on $[s_\infty,a]$ and $y$ is constant equal to $k$ on $[s_0,s_\infty]$.

B) If $a < s_0$, then a solution exists if and only if $y_a = \mathcal{B}_k(a)$, and in this case it is $\mathcal{B}_k$.

Remark The value of $s_\infty$ is given by $s_\infty = a - (s_0 - s^*)$, where $s^* \in [0,s_0]$ is the unique solution of
$$\mathcal{B}_k(s^*) = y_a.$$
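The Remark lends itself to a direct computation. A hedged sketch (helper names are mine): solve $\mathcal{B}_k(s^*) = y_a$ by bisecting in the cycloid parameter $t$, which works because $y(t) = \frac{k}{2}(1 - \cos t)$ is increasing on $[0,\pi]$, then read off the plateau endpoints:

```python
import math

def s_star(k, y_a):
    """Abscissa s* in [0, k*pi/2] with B_k(s*) = y_a, found by bisecting
    in the cycloid parameter t, since y(t) = (k/2)(1 - cos t) is
    increasing on [0, pi]."""
    lo, hi = 0.0, math.pi
    for _ in range(100):
        mid = (lo + hi) / 2
        if (k / 2) * (1 - math.cos(mid)) < y_a:
            lo = mid
        else:
            hi = mid
    t = (lo + hi) / 2
    return (k / 2) * (t - math.sin(t))

def plateau(k, a, y_a):
    """Plateau endpoints (s_0, s_inf) from the Remark; meaningful
    when a >= s_0."""
    s0 = k * math.pi / 2
    return s0, a - (s0 - s_star(k, y_a))
```

For instance, with $k = 2$, $a = 10$, $y_a = 1$ one gets $s^* = \frac{\pi}{2} - 1$, hence $s_0 = \pi$ and $s_\infty = 10 - \frac{\pi}{2} - 1 \approx 7.43$, a long plateau as predicted for large $a$.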

Proof: The proof involves the following steps:

  1. Prove that for every solution $y$, there is $\varepsilon > 0$ such that $y$ agrees with $\mathcal{B}_k$ on $[0,\varepsilon]$. This is the trickiest part (or at least, I did not find anything smarter).
  2. Deduce via the Cauchy-Lipschitz theorem that all solutions coincide with $\mathcal{B}_k$ on $[0,\min(s_0,a)]$. If $a < s_0$, this gives statement B, and otherwise, we deduce that $y'(s_0) = 0$.
  3. Show that there exists $s_\infty \in (s_0,a]$ such that all solutions to $(P)$ agree on $[s_\infty,a]$ and satisfy $y'(s_\infty) = 0$.
  4. Using the Lemma in my previous answer, conclude that all solutions are constant equal to $k$ on the interval $[s_0,s_\infty]$.

$\bullet$ Step 1 We can use the physicist's trick which is nicely described at the bottom of this Wikipedia page (in French, but I'll expand on it here): https://fr.wikipedia.org/wiki/Courbe_brachistochrone

First, observe that by continuity of $y$, there is $\varepsilon \in (0,a)$ such that $0 < y(x) \leq \frac{k}{2}$ for all $x \in (0,\varepsilon]$. This implies that $y'(x) \neq 0$ on $(0,\varepsilon]$, so it keeps a constant sign on this interval by continuity. So we have $$y'(x) = \sigma \sqrt{\frac{k - y}{y}}\quad \forall x \in (0,\varepsilon]$$ where $\sigma$ is either $1$ or $-1$. But $\sigma$ must of course be $1$, and this can be seen, e.g., by writing $$y'(x) y(x) = \sigma \sqrt{y(k-y)} \implies \frac{y^2(\varepsilon)}{2} = \sigma \int_{0}^{\varepsilon} \sqrt{y(x)(k-y(x))}\,dx\,.$$

Now that we have settled that $$y'(x) = \sqrt{\frac{k-y(x)}{y(x)}} \quad \forall x \in (0,\varepsilon],$$ we deduce that $y$ is strictly increasing on $[0,\varepsilon]$, and there exists a continuous, strictly increasing function $\theta:[0,\varepsilon] \to \mathbb{R}_{\geq 0}$ such that $y(x) = k\sin^2(\theta(x)/2)$. We deduce from the differential equation that $y' = \cot \frac{\theta}{2}$, and also, by differentiating directly, that $y' = k\sin(\theta/2) \cos(\theta/2)\, \theta'$. Equating these two expressions of $y'$, we find that $\theta$ satisfies $$\theta'(x) = \frac{1}{k \sin^2 \frac{\theta(x)}{2}}, \quad \forall x \in (0,\varepsilon]$$ with $\theta(0) = 0$.

The next part of the trick is to introduce the reciprocal function $\lambda : [0,\theta(\varepsilon)] \to [0,\varepsilon]$ defined by $\theta(\lambda(t)) = t$ (this is indeed possible by the properties shown above). It is found to satisfy the Cauchy problem $$\lambda'(t) = k\sin^2 \frac{t}{2}\,, \quad \lambda(0) = 0,$$ which gives $\lambda(t) = \frac{k}{2} \left(t - \sin t\right)$. It follows that $$y(\lambda(t)) = k \sin^2 \frac{t}{2}, \quad t \in [0,\theta(\varepsilon)].$$ This is exactly saying that $y$ coincides with $\mathcal{B}_k$ on $[0,\varepsilon]$.
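The closed form for $\lambda$ can be double-checked numerically: since the right-hand side $k\sin^2\frac{t}{2}$ depends on $t$ alone, integrating the Cauchy problem is just quadrature. A small sketch (illustrative, names are mine):

```python
import math

def lambda_mismatch(k=1.0, t_end=3.0, n=1000):
    """Integrate lambda'(t) = k sin^2(t/2), lambda(0) = 0 with classical
    RK4 and compare with the closed form lambda(t) = (k/2)(t - sin t).
    Since the RHS depends only on t, RK4 reduces to Simpson's rule."""
    rhs = lambda t: k * math.sin(t / 2) ** 2
    h = t_end / n
    lam, t = 0.0, 0.0
    for _ in range(n):
        k1, k2, k4 = rhs(t), rhs(t + h / 2), rhs(t + h)
        lam += h * (k1 + 4 * k2 + k4) / 6
        t += h
    return abs(lam - (k / 2) * (t_end - math.sin(t_end)))
```

The mismatch is at the level of the integrator's truncation error, confirming $\lambda(t) = \frac{k}{2}(t - \sin t)$ on the tested range.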

$\bullet$ Step 2 Let $y$ be a solution of $(P)$ and let $\varepsilon$ be as in step 1. Then the function $\mathcal{B}_k$ solves the Cauchy problem $$y' = F(y) = \sqrt{\frac{k-y}{y}}, \quad y(\varepsilon/2) = \mathcal{B}_k(\varepsilon/2),$$ where $F:(0,k) \to \mathbb{R}_+$ is locally Lipschitz. Since $\mathcal{B}_k$ is well-defined on $[0,s_0]$ and solves the above on the whole interval $(0,s_0)$, and since $y$ is a local solution of the same problem, $y$ and $\mathcal{B}_k$ coincide on $[0,\min(a,s_0)]$, using the fact that they are both restrictions of a unique maximal solution. If $a< s_0$, we immediately deduce statement B. If not, then $y|_{[0,s_0]} = (\mathcal{B}_k)|_{[0,s_0]}$. In particular, $y'(s_0) = 0$.

From now on, we assume that $a > s_0$.

[Edit: filled in the gaps in step 3]

$\bullet$ Step 3 There are three cases: $y_a = k$, $y_a = 0$ and $y_a \in (0,k)$.

If $y_a = k$, then $y'(a) = 0$ and we conclude step $3$ with $s_\infty = a$.

If $y_a = 0$, then we use the change of variables $z(x) = y(a - x)$. Then $z$ also solves $(P)$, and by steps $1$ and $2$, it agrees with $\mathcal{B}_k$ on $[0,s_0]$. We deduce that $y$ agrees with $x \mapsto \mathcal{B}_k(a-x)$ on $[a-s_0,a]$ and $y'(a-s_0) = 0$. We cannot have $a - s_0 < s_0$ because this would mean there exists $\xi \in (a-s_0,s_0)$ such that $\mathcal{B}_k$ and $x \mapsto \mathcal{B}_k(a-x)$ coincide near $\xi$, but the first one is strictly increasing and the second, strictly decreasing. Therefore, $a - s_0 \geq s_0$, showing step 3 in this case with $s_\infty = a - s_0$ (if $a = 2s_0$, the plateau is empty and $y$ is the full cycloid arch).

Finally, suppose $y_a > 0$. Then $y'(a)$ is given by $$y'(a) = \pm \sqrt{\frac{k-y_a}{y_a}} \neq 0.$$ Let us show that it is the negative sign. For this, assume by contradiction that $y'(a) > 0$. Then let $$U = \{x \in (0,a] : y'(t) > 0 \text{ for all } t\in [x,a]\}.$$ Then $a \in U$ and $\inf U \geq s_0$. Let $\xi = \inf U \in (0,a)$. Since $y' > 0$ on $(\xi,a]$, the function $y$ is increasing there, so $y(\xi) \leq y_a < k$. On the other hand, by continuity, $y'(\xi) = 0$, so $y(\xi) = k$: contradiction. So $y'(a) < 0$. By continuity, $y$ satisfies $$y'(x) = -\sqrt{\frac{k - y}{y}}, \quad y(a) = y_a$$ in an interval $(a-\varepsilon,a]$, where $\varepsilon$ can be chosen such that $y$ stays away from $0$ and $k$ in this interval. We can thus apply the Cauchy-Lipschitz theorem and deduce that $y$ coincides with the maximal solution to this problem, which we can exhibit. Namely, if $s^*$ is the unique solution in $(0,s_0)$ of $\mathcal{B}_k(s^*) = y_a$, then $$y(x) = \mathcal{B}_k(s^* + a - x) $$ for all $x \in [a - (s_0 - s^*),a]$. This solution was obtained intuitively by reversing the brachistochrone and shifting it the proper amount to satisfy the boundary condition. The fact that it is indeed a solution can be verified directly from the definition of $s^*$ and the fact that $(1 + \mathcal{B}_k'^2) \mathcal{B}_k = k$. Letting $s_\infty := a - (s_0 - s^*)$, we deduce that $y'(s_\infty) = -\mathcal{B}_k'(s_0) = 0$. Furthermore, using again the sign of the derivative, we must have $a - (s_0 - s^*) > s_0$. This gives step 3 in this case.

$\bullet$ Step 4. For completeness, the lemma mentioned is the following:

Lemma Suppose that $0 < s_0< s_\infty \leq a$ are such that $y'(s_0) = y'(s_\infty) = 0$. Then $y$ is constant equal to $k$ on $[s_0,s_\infty]$.

Proof: Since $y(x) \leq k$ for all $x$, and $y'(s_0) = 0 \implies y(s_0) = k$, we have $\max_{x \in [s_0,s_\infty]} y(x) = k$. On the other hand, suppose that $m = \min_{x \in [s_0,s_\infty]} y(x) < k$. Then there is $\xi \in (s_0,s_\infty)$ such that $y(\xi) = m$, and $y'(\xi) = 0$ because $\xi$ is a global minimum on $[s_0,s_\infty]$ and $y$ is $C^1$ near $\xi$. Thus $y(\xi) = k$, contradiction.

Martin
0

Third time's the charm?

The solution of $(1+[y'(x)]^2)\cdot y(x) = k$, $x \in (0, a)$, is unique over $Y$

Let $y \in Y$ be any solution. Dividing by $1 + y'^2$, we see that $y$ is bounded above by $k$ in $(0, a)$ and, via continuity, also on $[0, a]$. So assume $y_a \leq k$. Furthermore, any stationary point $s \in (0, a)$ with $y'(s) = 0$ is a maximum, since then $y(s) = k$. Notice also that because $y(x) \to 0^+$ as $x \to 0^+$, it must be that $y'(x) \to +\infty$ in the same limit. By continuity of $y'$ and $y \geq 0$, the derivative starts out strictly positive. Also, $y(x) > 0$ when $x \in (0, a)$, as $k > 0$ and $y \geq 0$.

As a final preliminary, we saw that $y'$ is at first positive. It may become strictly negative at some point, but it can then never again be non-negative: if it could, by continuity of $y'$ a stationary point would occur with value less than $k$. Similarly, once $y' = 0$ has occurred, $y'$ turning back strictly positive is not an option: it could not immediately be positive, for otherwise $y$ would exceed $k$, and it could not turn negative beforehand by the case we just considered. This implies as well that if $k$ is achieved and $y$ afterwards dips below $k$, then $k$ is never achieved again. More generally, $y$ is either increasing on all of $[0, a]$, or increasing up to a point with value $k$ (possibly staying at $k$ on an interval) and then strictly decreasing.

Proof. Let $u, v \in Y$ be solutions to the ODE. There are two possibilities: either $u, v$ do not intersect in $(0, a)$ or they intersect at least once in $(0, a)$.

  1. No intersections between $u$ and $v$ on $(0, a)$

    Then by the ODE, the derivatives $u'$ and $v'$ also cannot intersect on $(0, a)$. Define $w := u - v$. It follows that $w(0) = w(a) = 0$, so by Lagrange's mean value theorem there exists a $c \in (0, a)$ such that $w'(c) = 0$, i.e., $u'(c) = v'(c)$: contradiction.

    Less formally: If one function has a strictly bigger derivative than another throughout the interval $(0, a)$, and they start at the same point $(0, 0)$, there is no way to meet at $(a, y_a)$.

  2. At least one intersection between $u$ and $v$ on $(0, a)$

    Let $r \in (0, a)$ be any intersection between $u$ and $v$.

    • This intersection $r$ cannot fall inside a region where the derivatives $u'$ and $v'$ are both simultaneously strictly positive or both simultaneously strictly negative. For concreteness, say both are positive. So we are in the zone $$ y'(x) = \sqrt{\frac{k}{y(x)} - 1} > 0, \qquad x \in (0, \varepsilon), $$ with $r \in (0, \varepsilon)$. Of course, by intersecting, $u(r) = v(r)$. So the IVP $$ y'(x) = f(x, y) := \sqrt{\frac{k}{y} - 1} > 0, \qquad x \in (0, \varepsilon), \qquad (r, u(r)) = (r, v(r)),$$ guarantees a unique solution in a small neighbourhood of $r$, say $[r - \delta, r + \delta]$, by the usual Cauchy-Peano-Picard uniqueness. To check, the partial derivative of $f$ with respect to $y$ is $$\frac{\partial}{\partial y}f(x, y) = -\frac{k}{2y^2} \frac{1}{\sqrt{k/y-1}}, \qquad y \in (0, k),$$ which is continuous on a compact neighbourhood of $(r, u(r) = v(r))$, guaranteeing the Lipschitz property.

      Now we can look at new IVPs for the same ODE, at $r - \delta$ and $r + \delta$, get a unique solution on a neighbourhood of those and continue similarly. Hence $u|_{[0, M]} = v|_{[0, M]}$, with $M$ being the first place where either $u$ or $v$ (and so both, by continuity) achieve the value $k$. If there is no such place, $M = a$ and we are done. If $y_a = k$, then we are done again (we cannot dip below $k$, after having achieved it, and come back again), so assume $y_a < k$.

      Henceforth, we are in the situation $$y'(x) = f(x, y) := -\sqrt{\frac{k}{y} - 1} \leq 0, \qquad x \in [M, a). $$ Let $r_\infty \in [M, a)$ be the last point of intersection between $u, v$ in $[M, a)$. If no such $r_\infty$ exists, there will be an intersection $r'$ in any neighbourhood of $a$, so also in a neighbourhood where $$y'(x) = f(x, y) = -\sqrt{\frac{k}{y} - 1} < 0, \qquad x \in [a - \gamma, a)$$ for both $u$ and $v$. Take $(r', u(r') = v(r'))$ for a new IVP and repeat the above argument with Cauchy-Peano-Picard. Hence $u|_{[N, a]} = v|_{[N, a]}$, where $N$ is the point with $u(N) = v(N) = k$. And on $[M, N]$, should there be any room, $f(x, y) \equiv 0$, so in conclusion $u = v$ over $[0, a]$. Thus we may assume that the last point of intersection, $r_\infty$, exists. However, the absence of intersections in $(r_\infty, a)$ implies that one derivative is strictly smaller than the other over $(r_\infty, a)$ via the ODE. By the argument in point 1, $u$ and $v$ cannot intersect at both $r_\infty$ and $a$, but this is required of them. Contradiction.

    • This intersection $r$ cannot fall into a region where one derivative ($u'$ or $v'$) is zero and the other one ($v'$ or $u'$) is not zero. For then one function would have the value $k$ and the other function strictly less than $k$, hence no intersection at all.

    • This intersection $r$ cannot fall into a region where one derivative (say $u' > 0$) is strictly positive and the other one is strictly negative ($v' < 0$). Here $u$ and $v$ would meet in a region where $u$ is strictly increasing and $v$ is strictly decreasing. So there is exactly one such $r$, and it is in fact the only intersection of $u$ and $v$ anywhere in $(0, a)$, since we ruled out the other two options above. But this means that in $(0, r)$ we have $u' > v'$, yet they meet at both $0$ and $r$, which is impossible by the argument in point 1.

    Since there is no place for $r$ to be, there is no intersection between $u$ and $v$. Contradiction.

This concludes the third attempt at a proof. $\blacksquare$

  • I'll have a close look at this some time later, right now I'm a bit confused by all the possible cases. Meanwhile, please have a look at my answer and let me know if something's wrong. – Martin Dec 20 '24 at 09:38