Let $\epsilon_n=|x_n-L|$ be the error on the $n$th step. Supposing the claimed limit does indeed hold, this means
$$\frac{\epsilon_{n+1}}{\epsilon_n^2}\sim\mu\implies\epsilon_{n+1}\simeq\mu\epsilon_n^2$$
i.e. the error on the next step is roughly a quadratic function of the error on the current step (hence the name). Similarly, for linear convergence,
$$\epsilon_{n+1}\simeq\mu\epsilon_n,$$
the next error is roughly a linear function of the current error.
Visually, however, one cannot easily "see" the value of $\epsilon_n$ itself, but rather $\log(\epsilon_n)$, whose magnitude counts the number of accurate digits. Taking the logarithm of both sides, one may see that quadratic convergence implies
$$\log(\epsilon_{n+1})\simeq2\log(\epsilon_n)+\log(\mu)\simeq2\log(\epsilon_n)$$
meaning that the number of accurate digits roughly doubles with each iteration. One may obtain the order of convergence by observing the limit
$$\frac{\log(\epsilon_{n+1})}{\log(\epsilon_n)}\to2\quad(\text{or 1 for linear convergence})$$
with the additional caveat that the neglected term, $\log(\mu)$, must be bounded (and negative for linear convergence, so that the error actually shrinks), though in practice one rarely needs to worry about this.
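As a sketch of this limit in action, one can iterate Newton's method for $f(x)=x^2-2$ (an illustrative choice) at high precision using Python's standard `decimal` module and watch $\log(\epsilon_{n+1})/\log(\epsilon_n)$ approach $2$:

```python
from decimal import Decimal, getcontext

getcontext().prec = 250          # plenty of digits to watch many doublings

L = Decimal(2).sqrt()            # high-precision reference value of sqrt(2)
x = Decimal(1)                   # x_0

errs = []
for n in range(9):
    errs.append(abs(x - L))              # eps_n = |x_n - L|
    x = (x + Decimal(2) / x) / 2         # Newton step for f(x) = x^2 - 2

# log(eps_{n+1}) / log(eps_n) should tend to the order of convergence, 2
ratios = [float(b.ln() / a.ln()) for a, b in zip(errs, errs[1:])]
print(ratios)
```

The ratios approach $2$ from above, since the neglected $\log(\mu)$ term here is negative and is divided by an ever-larger $|\log(\epsilon_n)|$.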
For example, let us estimate $\sqrt2$ by applying Newton's method to $f(x)=x^2-2$, i.e. iterating $x_{n+1}=\frac12\left(x_n+\frac2{x_n}\right)$, starting from $x_0=1$.
$$\begin{array}{c|c}
n&x_n\\\hline
0&1.000000000\\
1&1.500000000\\
2&1.\underline{41}6666666\\
3&1.\underline{41421}5686\\
4&1.\underline{414213562}\\
5&1.414213562
\end{array}$$
From the underlined digits, it is easy to see that the number of accurate digits roughly doubles with each iteration.
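The table above can be reproduced in a few lines (a sketch in Python; recall the iteration for $f(x)=x^2-2$ is $x_{n+1}=\frac12(x_n+2/x_n)$):

```python
# Newton's method for f(x) = x^2 - 2, i.e. x_{n+1} = (x_n + 2/x_n) / 2
x = 1.0                       # x_0
xs = [x]
for n in range(5):
    x = (x + 2.0 / x) / 2.0   # Newton step
    xs.append(x)

for n, v in enumerate(xs):
    print(f"{n}  {v:.9f}")
```

(The last printed digit may differ from the table by rounding versus truncation.)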
If your question concerns why Newton's method (usually) converges quadratically, the proof follows from a quick Taylor expansion about the current iterate $x=x_n$:
$$f(L)=f(x)+f'(x)(L-x)+\underbrace{\frac12f''(\xi)(L-x)^2}_{\mathcal O(\epsilon_n^2)}$$
where $\xi$ lies between $x$ and $L$. Letting $f(L)=0$ and solving for $L$ from the linear term leads to
$$L=\underbrace{x-\frac{f(x)}{f'(x)}}_{x_\mathrm{newton}}-\underbrace{\frac12\frac{f''(\xi)}{f'(x)}(L-x)^2}_{\mathcal O(\epsilon_n^2)}$$
which shows that
$$\epsilon_{n+1}=|L-x_\mathrm{newton}|=\left|\frac12\frac{f''(\xi)}{f'(x)}(L-x)^2\right|\simeq\mu\epsilon_n^2$$
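One can check this ratio numerically (a sketch, again for the illustrative choice $f(x)=x^2-2$, where $\mu=\left|\frac{f''(L)}{2f'(L)}\right|=\frac1{2\sqrt2}\approx0.3536$; the standard `decimal` module supplies the extra precision needed to resolve the tiny errors):

```python
from decimal import Decimal, getcontext

getcontext().prec = 60

L = Decimal(2).sqrt()
mu = 1 / (2 * L)          # |f''(L) / (2 f'(L))| = 2 / (2 * 2L) for f(x) = x^2 - 2
x = Decimal(1)
eps = abs(x - L)

ratios = []
for n in range(5):
    x = (x + Decimal(2) / x) / 2                # Newton step
    eps_next = abs(x - L)
    ratios.append(float(eps_next / eps ** 2))   # eps_{n+1} / eps_n^2 -> mu
    eps = eps_next

print(ratios, float(mu))
```

For this particular $f$ the ratio is exactly $\frac1{2x_n}$, so it converges to $\frac1{2\sqrt2}$ as $x_n\to\sqrt2$.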