The first isomorphism theorem implies the rank-nullity theorem, which in turn gives a super nice corollary about linear transformations between finite-dimensional spaces of equal dimension:
Theorem.
Let $V,W$ be vector spaces of the same finite dimension over a field $\Bbb{F}$. For any linear transformation $T:V\to W$, the following statements are equivalent:
- $T$ is injective
- $T$ is surjective
- $T$ is bijective (and hence a linear isomorphism).
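Here is a quick sketch of how this follows from rank-nullity (writing $n:=\dim V=\dim W$):
\begin{align}
\dim\ker T+\dim\operatorname{im}T&=\dim V=n,
\end{align}
so $\ker T=\{0\}$ (injectivity) holds exactly when $\dim\operatorname{im}T=n$, which holds exactly when $\operatorname{im}T=W$ (surjectivity).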
This is a fundamental fact about linear transformations which fails for more general functions (for instance, $x\mapsto e^x$ is an injective but not surjective map $\Bbb{R}\to\Bbb{R}$). It can be phrased in terms of matrices as well, and in a variety of other equivalent ways.
One of the nice consequences is that to check invertibility, you only need a one-sided inverse; the other follows immediately. So, for example, if $A,B$ are $n\times n$ matrices, then $AB=I$ automatically implies $BA=I$ and hence $A=B^{-1}$ and $B=A^{-1}$. Note very carefully that in general, for two functions, $f\circ g=\text{id}$ does not imply $g\circ f=\text{id}$.
This is of course a simple consequence, but it is used repeatedly.
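To see why the equal-dimension (square) hypothesis matters, here is one standard non-square counterexample (the particular matrices are just a convenient choice): taking
\begin{align}
A=\begin{pmatrix}1 & 0\end{pmatrix},\qquad B=\begin{pmatrix}1\\ 0\end{pmatrix},
\end{align}
we get $AB=\begin{pmatrix}1\end{pmatrix}=I_1$, but $BA=\begin{pmatrix}1 & 0\\ 0 & 0\end{pmatrix}\neq I_2$.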
Also, the rank-nullity theorem itself is super important in linear algebra: it precisely and concisely formalizes the intuitive notion that if you have $m$ linear equations in $n$ unknowns and only $r$ of those equations are ‘independent’, then you can solve for $r$ of the unknowns in terms of the remaining $n-r$ “free” variables. Or, more geometrically, the solution set of a homogeneous system of $m$ linear equations of which $r$ are independent is an $(n-r)$-dimensional subspace. As a super concrete example, a single scalar equation
\begin{align}
a_1x_1+\dots +a_nx_n&=0,
\end{align}
where not all the $a_i$’s are $0$, has a hyperplane through the origin, i.e. an $(n-1)$-dimensional subspace, as its solution set.
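One way to see this via rank-nullity: the equation defines the linear map
\begin{align}
T:\Bbb{F}^n\to\Bbb{F},\qquad T(x_1,\dots,x_n)&=a_1x_1+\dots+a_nx_n,
\end{align}
which has rank $1$ (since some $a_i\neq 0$, its image is all of $\Bbb{F}$), so $\dim\ker T=n-\operatorname{rank}T=n-1$.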
Of course, there are tons of other examples of the utility of the first isomorphism theorem, but this, I think, is the ‘most immediately important’ consequence.