I remember (rightly or wrongly) learning at school in the 80s that a number had two square roots, positive and negative. For example, √25 = ±5. That is, the square root as I learnt it was quite literally the inverse of squaring: both 5² and (−5)² equal 25, so it followed that the square roots of 25 were 5 and −5. And that seemed entirely logical.
Some years later, however, I encountered the concept of the Principal Square Root, and was told that by definition the square root function, √, refers only to the non-negative root. That posed no problem mechanically; I just remembered the definition as a rule, and it was enough to get me through engineering and computing degrees. However, the justification always nagged at me: if we know logically that two roots exist, why was this single-valued definition of the square root introduced?
I have since heard from others of my era and before, including those with mathematics majors, that they too learnt that the square root function had two roots, positive and negative. I wonder, therefore, whether this is a relatively new (albeit decades-old) convention that has been introduced for some reason.
Whichever way I look at it, the definition seems to introduce an arbitrary limitation on our calculations: (−5)² = 25, but √25 is only +5. Is there a logical, mathematical justification for this that I am not seeing?
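For what it's worth, this is also how the convention plays out in the computing side of my background: the square-root functions I have seen (Python's `math.sqrt` is used here as one example) return only the principal root, and recovering both solutions of x² = 25 requires writing the ± step explicitly.

```python
import math

# math.sqrt implements the principal (non-negative) square root:
# it returns exactly one value, never the negative root.
print(math.sqrt(25))   # 5.0, never -5.0

# Both candidates still square back to 25, of course:
print(5**2, (-5)**2)   # 25 25

# To recover both solutions of x**2 == 25, the "plus or minus"
# step has to be written out explicitly:
roots = [math.sqrt(25), -math.sqrt(25)]
print(roots)           # [5.0, -5.0]
```

So the asymmetry in the question is baked into software as well: squaring maps both 5 and −5 to 25, but the function going the other way picks out just one of them.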