My professor has a particular way of doing proofs. For example, when proving that the Darboux integral and the Riemann integral have the same value $r$, she did something like this (having assumed Darboux integrability):
$r - \epsilon < L(f) \le U(f) < r + \epsilon$
and then said "let $\epsilon \to 0$" to conclude $r = U(f) = L(f)$.
Why is this mathematically and logically sound?
Here's my rough understanding:
- we have this inequality for all $\epsilon > 0$;
- letting $\epsilon$ go to $0$ gets us arbitrarily close, but never actually equal, in the sense of a limit.
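For context, my best guess is that the argument rests on the following standard real-analysis lemma (this is my own reconstruction, not something she stated explicitly):

```latex
% Lemma: if $0 \le |a - b| < \epsilon$ for every $\epsilon > 0$, then $a = b$.
%
% Proof sketch: suppose $a \ne b$ and set $\epsilon = |a - b| > 0$;
% the hypothesis then gives $|a - b| < |a - b|$, a contradiction.
%
% Applied here with $a = r$ and $b = U(f)$ (and likewise $b = L(f)$):
% the inequality
%   $r - \epsilon < L(f) \le U(f) < r + \epsilon$
% holding for all $\epsilon > 0$ gives $|r - U(f)| < \epsilon$ and
% $|r - L(f)| < \epssilon$ for all $\epsilon > 0$, hence $r = L(f) = U(f)$.
```

If that is indeed what "let $\epsilon \to 0$" abbreviates, then no actual limit is being taken, just this lemma.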
She does not justify this step from a definition, and I had never seen this strategy before, so I am wondering about it.