I'm trying to learn Taylor expansions and was watching a tutorial here. In the tutorial, the Taylor approximation is introduced by first showing the Maclaurin series, which is basically the Taylor series at $x=0$. The introduction seems intuitive to me:
First, suppose we want to approximate a function $f(x)$ by polynomials at $x=0$, given that the derivatives of $f(x)$ exist at $x=0$ to every order. Then we can start with a very simple approximation at $x=0$: \begin{equation} f(x) \approx f(0) \end{equation} and then add higher and higher order terms: \begin{equation} f(x) \approx f(0) + f'(0)x + f''(0)\frac{x^2}{2!} + \cdots \end{equation} This makes sense, since the RHS exactly matches the LHS in its derivatives of every order at $x=0$. Here is a picture of the Taylor approximation of $\sin(x)$ at $x=0$ up to order 3: image or use wolfram.
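To convince myself the partial sums really do improve, I wrote a small numerical check (a sketch I put together myself, not from the tutorial): it evaluates the Maclaurin partial sums of $\sin(x)$ at a point fairly far from $0$ and watches the error shrink.

```python
import math

def maclaurin_sin(x, n_terms):
    """Partial sum of the Maclaurin series of sin(x):
    sum_{k=0}^{n_terms-1} (-1)^k x^(2k+1) / (2k+1)!"""
    return sum((-1)**k * x**(2*k + 1) / math.factorial(2*k + 1)
               for k in range(n_terms))

# Error at x = 3 (far from the expansion point 0) shrinks as terms are added.
for n in (1, 3, 5, 8):
    approx = maclaurin_sin(3.0, n)
    print(n, approx, abs(approx - math.sin(3.0)))
```

With only one term the error at $x=3$ is large, but by eight terms it is already tiny, which is exactly the behavior in the plot.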
My question is: we are approximating $f(x)=\sin(x)$ locally at/around $x=0$, but (as you can see from the plot) why, as more and more higher-order terms are added, does the approximation also become more and more like $f(x)$ even far away from $x=0$? Intuitively, why is this the case? What I understand is that the approximation is only derived locally around $x=0$. Does this also imply that, with enough higher-order terms, the Taylor expansions at $x=0$ and at $x=a$ where $a\ne0$ are just the same? Furthermore, with enough terms, does the Taylor series approximate $f(x)$ everywhere, and not just near $x=0$ anymore?
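Regarding the two-centers part of my question, I also tried a quick numerical experiment (my own sketch, assuming comparing values at a single test point is a fair check): I compared the Taylor polynomials of $\sin$ about $x=0$ and about $x=1$, using the fact that the derivatives of $\sin$ cycle through $\sin, \cos, -\sin, -\cos$.

```python
import math

def taylor_sin(x, a, n_terms):
    """Taylor polynomial of sin about x = a, using the fact that the
    k-th derivative of sin cycles through sin, cos, -sin, -cos at a."""
    derivs = [math.sin(a), math.cos(a), -math.sin(a), -math.cos(a)]
    return sum(derivs[k % 4] * (x - a)**k / math.factorial(k)
               for k in range(n_terms))

# With 20 terms, the expansions centered at 0 and at 1 both
# land essentially on top of sin(2).
x = 2.0
print(taylor_sin(x, 0.0, 20), taylor_sin(x, 1.0, 20), math.sin(x))
```

Both polynomials agree with $\sin(2)$ to many digits, which is what prompted me to ask whether, in the limit, the choice of center stops mattering.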