Square roots are typically computed by root finding; if you want to find $\sqrt{x}$, you would use a root finding algorithm on
$$g(t) = t^2 - x$$
One such algorithm is, for example, Newton's method. Wikipedia has an entire page describing root-finding algorithms, and another on methods of computing square roots.
The unifying theme behind these algorithms is that you define a sequence $\{x_n\}_{n \in \mathbb{N}}$ such that $x_n \to x^*$, where $g(x^*) = 0$. Depending on the algorithm and the function, you can prove that the sequence converges to the true root, and that it does so at a certain speed. Convergence usually depends on having a good initial approximation to the root, so not everything is rosy. Still, this means that a priori you can get arbitrarily close to the root, and for well-behaved inputs (e.g. perfect squares) you will get the exact root.
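As a concrete sketch (not a production implementation), applying Newton's method to $g(t) = t^2 - x$ gives the update $t_{n+1} = t_n - g(t_n)/g'(t_n) = \tfrac{1}{2}(t_n + x/t_n)$, which is the classical Babylonian iteration:

```python
def newton_sqrt(x, tol=1e-12, max_iter=100):
    """Approximate sqrt(x) with Newton's method on g(t) = t^2 - x."""
    if x < 0:
        raise ValueError("x must be non-negative")
    if x == 0:
        return 0.0
    t = x if x >= 1 else 1.0  # crude but safe initial guess
    for _ in range(max_iter):
        # Newton update: t - (t^2 - x) / (2t) simplifies to (t + x/t) / 2
        t_next = 0.5 * (t + x / t)
        if abs(t_next - t) < tol:
            return t_next
        t = t_next
    return t
```

For well-behaved inputs the iterates converge quadratically, roughly doubling the number of correct digits each step.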
As many other answers have described, computers work with a floating-point representation, which is limited to a subset of the rational numbers. In practice this means that most numbers can't actually be represented exactly by your computer, and that you have to take several precautions when working with them; "What Every Computer Scientist Should Know About Floating-Point Arithmetic" is a classic introduction, and one that is also friendly to math students.
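A quick way to see these limits in practice (Python here, but the same behavior appears in any language using IEEE 754 doubles):

```python
import math

# 0.1 and 0.2 have no exact binary floating-point representation,
# so the rounding error is visible immediately:
print(0.1 + 0.2 == 0.3)        # False
print(0.1 + 0.2)               # 0.30000000000000004

# Squaring a computed square root rarely recovers the exact input:
print(math.sqrt(2) ** 2 == 2)  # False
print(math.sqrt(2) ** 2)       # 2.0000000000000004
```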
Computer Algebra Systems usually choose instead to work with a symbolic representation of math: rather than using actual numbers, they implement the usual rules of algebra on symbols. In this approach, you would have a representation for $\sqrt{2}$, and a rule that tells you $\sqrt{x}^2 = x$; thus $\sqrt{2}^2 = 2$. In the end they do have to approximate, but they keep the most accurate representation possible until the very last moment, which avoids many errors.
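To sketch the idea (a toy example only; real CAS software such as SymPy or Mathematica is far more general, and the class and method names here are invented for illustration):

```python
from fractions import Fraction

class Sqrt:
    """Toy symbolic square root: stores the radicand, never approximates."""
    def __init__(self, radicand):
        self.radicand = Fraction(radicand)

    def squared(self):
        # Symbolic rule: sqrt(x)^2 = x, exactly -- no rounding involved.
        return self.radicand

    def evalf(self):
        # Only approximate at the very end, when a decimal is requested.
        return float(self.radicand) ** 0.5

r = Sqrt(2)
print(r.squared())  # exact: Fraction(2, 1)
print(r.evalf())    # approximation deferred to the last step
```

The key design choice is that `squared()` never touches floating point at all, so $\sqrt{2}^2 = 2$ holds exactly; only `evalf()` introduces rounding.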
If you are interested in reading more, "Numerical Analysis" by Richard L. Burden is a book I really liked when taking my Numerical Analysis class.