This is what I would consider a non-rigorous way of introducing the derivative. The complete, rigorous approach requires a formal definition of limits.
Without going too deep into the topic, I would offer the following idea (which is based on the formal definition but still presented casually):
- If we have something that we can approximate, and if we have a way of making the error of that approximation as small as we like by measuring things on an increasingly small scale, then by taking that approximation error down to zero we get the actual value of the thing we're measuring.
So in other words, we have this expression $\frac{\delta y}{\delta x} = 2x + \delta x$, which is true for any nonzero value of $\delta x$ (at $\delta x = 0$ the left-hand side is the undefined quotient $\frac{0}{0}$). We are saying that $\frac{dy}{dx} \approx \frac{\delta y}{\delta x}$, meaning that the derivative is approximately equal to this other expression, and when $\delta x$ is small we expect the error in the approximation to also be small. So by making $\delta x$ arbitrarily small, we can get $\frac{\delta y}{\delta x}$ arbitrarily close to $2x$, and so we define the limit, as $\delta x$ approaches zero, to be exactly $2x$.
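To see where that expression comes from (a quick sketch, assuming from the $2x + \delta x$ that the function in question is $y = x^2$): the approximation is just the difference quotient expanded,
$$\frac{\delta y}{\delta x} = \frac{(x+\delta x)^2 - x^2}{\delta x} = \frac{2x\,\delta x + (\delta x)^2}{\delta x} = 2x + \delta x \qquad (\delta x \neq 0),$$
so the error in $\frac{dy}{dx} \approx \frac{\delta y}{\delta x}$ is exactly the leftover $\delta x$ term, which we can make as small as we like.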
Note that here I made a distinction between the "approximation" using $\delta x$ and $\delta y$, and the "limit" using $dx$ and $dy$. Not distinguishing between the two is one of the things that makes the approach you've shown less rigorous, although there are systems (non-standard analysis, for example) where that kind of manipulation is perfectly fine. It's also important that the "error" term in the approximation actually shrinks as $\delta x$ shrinks: terms like $\delta x$, $(\delta x)^2$, $(\delta x)^{100}$ all vanish when $\delta x$ vanishes, but $\frac{1}{\delta x}$ gets arbitrarily large as $\delta x$ gets arbitrarily small, so if a term like that is still left after cancelling everything out, you can't expect the limit to exist.
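As a quick illustration of those error terms vanishing (using $y = x^3$ purely as an example), the same kind of cancellation gives
$$\frac{\delta y}{\delta x} = \frac{(x+\delta x)^3 - x^3}{\delta x} = 3x^2 + 3x\,\delta x + (\delta x)^2,$$
and both leftover terms go to zero as $\delta x$ does, so the limit is $3x^2$. If instead a $\frac{1}{\delta x}$ had survived the cancellation, the quotient would grow without bound as $\delta x$ shrank, and no limit would exist.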