One possible way of finding the eigenvectors and eigenvalues (eigen-decomposition) of a matrix is to write down the characteristic equation.
$$\det(A-\lambda I)=0$$
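For instance, taking a small (arbitrarily chosen) example such as $A=\begin{pmatrix} 2 & 1 \\ 1 & 2 \end{pmatrix}$, the mechanics give
$$\det\begin{pmatrix} 2-\lambda & 1 \\ 1 & 2-\lambda \end{pmatrix}=(2-\lambda)^2-1=0,$$
so $\lambda=1$ or $\lambda=3$.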
I am interested in understanding in more depth why this is the case and why it works.
According to Wolfram Alpha, this follows from the equation
$$(A-\lambda I)X=0$$
where $X$ is an eigenvector.
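As I understand it, this is just the defining equation of an eigenvector, $AX=\lambda X$, rearranged:
$$AX=\lambda X \;\Longrightarrow\; AX-\lambda IX=0 \;\Longrightarrow\; (A-\lambda I)X=0.$$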
To understand the link, what we are essentially saying here is that for scalars, if we have an equation like
$$a\times x=0$$
where $x\neq 0$, then this implies that $a=0$, or equivalently, taking the modulus, $\left|a\right|=0$.
For higher-dimensional spaces, we have something analogous:
$$A X=0$$
implies that
$$\left|A\right|=0$$
if
$$X\neq 0$$
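For example (a case I can check by hand),
$$\begin{pmatrix} 1 & 2 \\ 2 & 4 \end{pmatrix}\begin{pmatrix} 2 \\ -1 \end{pmatrix}=\begin{pmatrix} 0 \\ 0 \end{pmatrix} \quad\text{and}\quad \det\begin{pmatrix} 1 & 2 \\ 2 & 4 \end{pmatrix}=1\cdot 4-2\cdot 2=0.$$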
I know how to calculate the determinant of a matrix, but I do not know why the determinant is calculated the way it is. I suspect that there is a more in-depth explanation than I currently understand. In other words, I know how to mechanically perform the operations of linear algebra, but I do not know in detail why the operations are defined the way they are.
For example, for a $2\times 2$ matrix,
$$ \det \begin{pmatrix} a & b \\ c & d \end{pmatrix} =ad-bc $$
I suspect this has something to do with extending to higher-dimensional spaces the concept of an object which behaves like a scalar zero.
