Working in finite-dimensional linear spaces, where dimension is independent of the choice of basis, leads to many interesting properties that do not hold in infinite-dimensional spaces.
For example, every square matrix $A$ has a minimal polynomial $m$ for which $m(A)=0$: the linear space of $n\times n$ matrices has dimension $n^2$, so the $n^2+1$ matrices $\{ I,A,A^2,\cdots, A^{n^2} \}$ must be a linearly-dependent set, and therefore there is a unique monic polynomial $m$ of lowest degree for which $m(A)=0$. If $m(\lambda)=\lambda^k+a_{k-1}\lambda^{k-1}+\cdots +a_{1}\lambda+a_0$, then it can be seen that $A$ is invertible iff $m(0)=a_0\ne 0$. Indeed, if $a_0\ne 0$,
$$
\begin{aligned}
I &= -\left[\frac{1}{a_0}\left(A^{k-1}+a_{k-1}A^{k-2}+\cdots+a_1 I\right)\right]A \\
  &= -A \left[\frac{1}{a_0}\left(A^{k-1}+a_{k-1}A^{k-2}+\cdots+a_1 I\right)\right].
\end{aligned}
$$
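To make this concrete, here is a minimal numerical sketch, assuming NumPy is available; `minimal_polynomial` is an illustrative helper name, not a library routine. It finds the first linear dependence among the vectorized powers $I, A, A^2,\dots$ (exactly the argument above) and then rebuilds $A^{-1}$ as a polynomial in $A$.

```python
import numpy as np

def minimal_polynomial(A, tol=1e-9):
    """Coefficients [a_0, a_1, ..., a_{k-1}, 1] of the monic minimal polynomial of A."""
    n = A.shape[0]
    powers = [np.eye(n)]                                   # I, A, A^2, ...
    for _ in range(n * n + 1):
        powers.append(powers[-1] @ A)
        M = np.column_stack([P.ravel() for P in powers])   # vec(I), vec(A), ..., vec(A^k)
        if np.linalg.matrix_rank(M, tol) < M.shape[1]:     # first linear dependence
            # Solve A^k = -(a_0 I + a_1 A + ... + a_{k-1} A^{k-1}) for the a_j.
            coeffs, *_ = np.linalg.lstsq(M[:, :-1], -powers[-1].ravel(), rcond=None)
            return np.append(coeffs, 1.0)
    raise RuntimeError("no dependence found; cannot happen in finite dimensions")

A = np.array([[2.0, 1.0],
              [0.0, 3.0]])
m = minimal_polynomial(A)          # here m(x) = (x - 2)(x - 3) = x^2 - 5x + 6
a0, k = m[0], len(m) - 1

# I = -[(1/a_0)(A^{k-1} + a_{k-1} A^{k-2} + ... + a_1 I)] A, so the bracket is A^{-1}.
A_inv = -sum(m[j] * np.linalg.matrix_power(A, j - 1) for j in range(1, k + 1)) / a0
assert np.allclose(A_inv @ A, np.eye(2)) and np.allclose(A @ A_inv, np.eye(2))
```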
So $A$ has a left inverse iff it has a right inverse, and, in that case, both are the same polynomial in $A$. That's most definitely not true in infinite-dimensional spaces. More generally, $m(\lambda) \ne 0$ iff $A-\lambda I$ is invertible: write $m(x)=(x-\lambda)q(x)+m(\lambda)$, so that $(A-\lambda I)q(A)=-m(\lambda)I$; if $m(\lambda)\ne 0$ this exhibits an inverse, and if $m(\lambda)=0$ then $q(A)\ne 0$ by minimality of degree and $(A-\lambda I)q(A)=0$ shows that $A-\lambda I$ has a non-trivial kernel. So $A-\lambda I$ is non-invertible iff it has a non-trivial kernel, which consists of the eigenvectors of $A$ with eigenvalue $\lambda$ (together with $0$). The rank and the nullity of any $n\times n$ matrix are nicely related ($\operatorname{rank}+\operatorname{nullity}=n$), which also fails in infinite-dimensional spaces, even when the kernel and the complement of the range are finite-dimensional.
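As a quick numerical illustration (again assuming NumPy; `rank_and_nullity` is just an illustrative helper), one can check on a concrete matrix that rank plus nullity is the full dimension and that $A-\lambda I$ is singular exactly at the roots of the minimal polynomial.

```python
import numpy as np

def rank_and_nullity(M, tol=1e-9):
    s = np.linalg.svd(M, compute_uv=False)   # singular values of M
    return int(np.sum(s > tol)), int(np.sum(s <= tol))

A = np.array([[2.0, 1.0],
              [0.0, 3.0]])                   # minimal polynomial m(x) = (x - 2)(x - 3)
n = A.shape[0]

for lam in (2.0, 3.0, 5.0):
    rank, nullity = rank_and_nullity(A - lam * np.eye(n))
    assert rank + nullity == n               # rank-nullity
    # A - lam*I has a non-trivial kernel exactly when m(lam) = 0, i.e. lam in {2, 3}
    assert (nullity >= 1) == (lam in (2.0, 3.0))
```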
Then, if you're working over $\mathbb{C}$, the minimal polynomial factors into powers of distinct linear factors $(\lambda-\lambda_1)^{r_1}\cdots(\lambda-\lambda_k)^{r_k}$, so that
$$
(A-\lambda_1 I)^{r_1}(A-\lambda_2 I)^{r_2}\cdots(A-\lambda_k I)^{r_k}=0.
$$
Such a factoring leads to the Jordan Canonical Form, which is also something you don't generally get in infinite-dimensional spaces. And $A$ can be diagonalized iff the minimal polynomial has no repeated factors, which basically comes down to a trick with Lagrange polynomials $p_j$, the unique polynomials of degree $k-1$ defined so that $p_j(\lambda_i)=\delta_{i,j}$ at the distinct eigenvalues $\lambda_1,\dots,\lambda_k$. Then
$$
1 \equiv \sum_{j=1}^{k}p_j(\lambda) \implies
I = \sum_{j=1}^{k}p_j(A),
$$
and $(A-\lambda_j I)p_j(A)=0$, since $(\lambda-\lambda_j)p_j(\lambda)$ vanishes at every $\lambda_i$ and is therefore a multiple of the minimal polynomial. (The identity $1\equiv\sum_j p_j(\lambda)$ holds because the difference of the two sides has degree at most $k-1$ yet vanishes at the $k$ distinct points $\lambda_i$.) That's how you get a full basis of eigenvectors for $A$ when the minimal polynomial has no repeated factors. The matrices $p_j(A)$ are projections onto the eigenspaces for the eigenvalues $\lambda_j$, and every vector can be written as a sum of vectors from the ranges of these projections, which are eigenvectors. Normal (and self-adjoint) matrices $N$ are special cases where the minimal polynomial has no repeated factors, because $\mathcal{N}((N-\lambda I)^2)=\mathcal{N}(N-\lambda I)$ for all $\lambda$. This algebraic formalism is not generally available in infinite-dimensional spaces.
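Here is a small numerical sketch of that projection construction, again assuming NumPy; `lagrange_projection` is an illustrative helper, and the matrix and eigenvalues are made up for the example. It builds each $p_j(A)$ directly from the product form of the Lagrange polynomial and checks that the projections sum to $I$ and that their ranges consist of eigenvectors.

```python
import numpy as np

def lagrange_projection(A, eigs, j):
    """p_j(A), where p_j is the Lagrange polynomial with p_j(lambda_i) = delta_ij."""
    n = A.shape[0]
    P = np.eye(n)
    for i, lam in enumerate(eigs):
        if i != j:
            P = P @ (A - lam * np.eye(n)) / (eigs[j] - lam)
    return P

A = np.array([[2.0, 1.0, 0.0],
              [0.0, 3.0, 0.0],
              [0.0, 0.0, 2.0]])    # minimal polynomial (x - 2)(x - 3): no repeated factors
eigs = [2.0, 3.0]                  # the distinct eigenvalues lambda_1, ..., lambda_k
I = np.eye(A.shape[0])

projections = [lagrange_projection(A, eigs, j) for j in range(len(eigs))]

assert np.allclose(sum(projections), I)             # I = sum_j p_j(A)
for lam, P in zip(eigs, projections):
    assert np.allclose(P @ P, P)                    # each p_j(A) is a projection
    assert np.allclose((A - lam * I) @ P, 0)        # its range lies in the eigenspace for lam
```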
Though a determinant is not essential for finite-dimensional analysis, it is nice to have, and there is no determinant on a general infinite-dimensional space.