If I am given a matrix, for example,
$$A = \begin{bmatrix} 0.7 & 0.2 & 0.1 \\[0.3em] 0.2 & 0.5 & 0.3 \\[0.3em] 0 & 0 & 1 \end{bmatrix}$$
how do I calculate the fractional powers of $A$, say, $A^{\frac{1}{2}}$ or $A^{\frac{3}{2}}$?
If a matrix is diagonalizable, diagonalize it, $A = PDP^{-1}$, and apply the power to the diagonal:
$$A^n = PD^n P^{-1}$$
The power acts on each diagonal entry individually.
Octave gives:
$$P=\begin{bmatrix} 0.85065 & -0.52573 & 0.57735\\ 0.52573 & 0.85065 & 0.57735\\ 0.00000 & 0.00000 & 0.57735\end{bmatrix}$$
$$D=\operatorname{diag}(0.82361,\ 0.37639,\ 1)$$ I realize this is numerically ugly, but I don't have symbolic manipulation software at hand on this computer. However, the eigenvalues are distinct, so this is a valid diagonalization.
The square root is $$\sqrt{A}= \begin{bmatrix}0.82626 & 0.13149 & 0.04225\\ 0.13149 & 0.69477 & 0.17374\\ 0.00000 & 0.00000 & 1.00000\end{bmatrix}$$
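This recipe can be sketched numerically with NumPy (`numpy.linalg.eig`); up to rounding, it reproduces the matrices above:

```python
import numpy as np

A = np.array([[0.7, 0.2, 0.1],
              [0.2, 0.5, 0.3],
              [0.0, 0.0, 1.0]])

# Diagonalize: w holds the eigenvalues, the columns of P the eigenvectors.
w, P = np.linalg.eig(A)

def fractional_power(w, P, t):
    """Apply the scalar power t to each eigenvalue, then undo the diagonalization."""
    return P @ np.diag(w**t) @ np.linalg.inv(P)

root = fractional_power(w, P, 0.5)   # A^(1/2)
A32  = fractional_power(w, P, 1.5)   # A^(3/2)

# Check the defining property of a square root.
assert np.allclose(root @ root, A)
```

Note this relies on all eigenvalues being positive real, so that `w**t` stays real.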
This definition satisfies the requirement for roots that $(A^{1/p})^p=A$, and for positive definite matrices it yields the unique positive definite root (just as $\sqrt{x}$ denotes the positive root for scalars).
In a similar way, you can define functions on matrices through their power series. For instance, $e^A=P \exp(D)P^{-1}$ is perfectly well defined for diagonalizable matrices.
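As a quick numerical sanity check of this idea (a sketch, not part of the original answer), one can compare $P\exp(D)P^{-1}$ against a truncated power series $\sum_k A^k/k!$:

```python
import numpy as np
from math import factorial

A = np.array([[0.7, 0.2, 0.1],
              [0.2, 0.5, 0.3],
              [0.0, 0.0, 1.0]])

w, P = np.linalg.eig(A)

# Matrix exponential via the diagonalization: exp acts on each eigenvalue.
expA = P @ np.diag(np.exp(w)) @ np.linalg.inv(P)

# Matrix exponential via a truncated power series sum_k A^k / k!.
series = sum(np.linalg.matrix_power(A, k) / factorial(k) for k in range(20))

assert np.allclose(expA, series)
```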
The convergence criteria and domains of these functions generalize as well, and usually involve conditions on the eigenvalues, positive-definiteness, symmetry, orthogonality, and so on.
Note that the term *square root* of a matrix is sometimes used for the Cholesky decomposition, which instead works as $A=LL^T$, where $L$ is a lower triangular matrix. This is not a square root in the strictest sense, but it serves as one in some numerical procedures.
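To illustrate the difference (a sketch using the symmetric $2\times 2$ block of $A$ above): the Cholesky factor $L$ satisfies $LL^T = S$ and is lower triangular, while the principal square root $R$ satisfies $RR = S$ and is symmetric; they are different matrices:

```python
import numpy as np

S = np.array([[0.7, 0.2],
              [0.2, 0.5]])  # symmetric positive definite

# Cholesky factor: lower triangular, S = L @ L.T
L = np.linalg.cholesky(S)
assert np.allclose(L @ L.T, S)

# Principal square root via eigendecomposition: symmetric, S = R @ R
w, P = np.linalg.eigh(S)
R = P @ np.diag(np.sqrt(w)) @ P.T
assert np.allclose(R @ R, S)

# The two "square roots" are distinct matrices.
assert not np.allclose(L, R)
```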
I notice SciPy has a function that supposedly does this (`scipy.linalg.fractional_matrix_power`):
https://docs.scipy.org/doc/scipy/reference/generated/scipy.linalg.fractional_matrix_power.html
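A minimal call, assuming SciPy is installed (the function takes the matrix and a real exponent $t$):

```python
import numpy as np
from scipy.linalg import fractional_matrix_power

A = np.array([[0.7, 0.2, 0.1],
              [0.2, 0.5, 0.3],
              [0.0, 0.0, 1.0]])

root = fractional_matrix_power(A, 0.5)   # A^(1/2)
A32  = fractional_matrix_power(A, 1.5)   # A^(3/2)

assert np.allclose(root @ root, A)
```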
The citation there is:
Nicholas J. Higham and Lijing Lin (2011), "A Schur–Padé Algorithm for Fractional Powers of a Matrix," SIAM Journal on Matrix Analysis and Applications, 32(3), pp. 1056–1078. ISSN 0895-4798.
Someone on GitHub commented that those authors have improved their algorithm since then:
Nicholas J. Higham and Lijing Lin (2013), "An Improved Schur–Padé Algorithm for Fractional Powers of a Matrix and Their Fréchet Derivatives," SIAM Journal on Matrix Analysis and Applications, 34(3).
I found a version of the latter on something called "Amanote Research".
I should note that, if you define fractional powers of a matrix $X$ like so:
$$X^{p/q} = \{R^p \mid R^q = X\}$$
then sets like $X^{1/2}$ can definitely have multiple elements. E.g., if $Z$ is a zero matrix in a dimension greater than one, then $Z^{1/2}$ has uncountably infinitely many distinct elements.
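For instance (a quick numerical illustration), every nilpotent matrix $\begin{bmatrix}0 & t\\ 0 & 0\end{bmatrix}$ squares to the $2\times 2$ zero matrix, so each value of $t$ gives a distinct element of $Z^{1/2}$:

```python
import numpy as np

Z = np.zeros((2, 2))

# A one-parameter family of square roots of the zero matrix.
for t in [1.0, 2.5, -7.0]:
    R = np.array([[0.0, t],
                  [0.0, 0.0]])  # nilpotent: R @ R = 0
    assert np.allclose(R @ R, Z)
```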
However, just as square roots of real numbers are often defined to exist only for positive numbers and to be only the positive root (even though there is also a negative one), I have seen, in descriptions of the Singular Value Decomposition, a convention where square roots are defined only for matrices with an eigenbasis and all positive real eigenvalues. They are then defined exactly the way orion describes: take the positive square root of each entry in the diagonalization, then undo the diagonalization, so you get a matrix with the same eigenvectors but all the eigenvalues square-rooted. For such matrices, this is one element of the set $X^{1/2}$ I described above. I haven't checked what that scipy function does, but maybe it only handles cases where the definition orion gave works.