This is a complement to Qiaochu's very nice answer. The idea is to put determinants, and algorithms run on generic objects, in a broader context, with pointers to unsolved research questions.
Rather than look at the problem of matrix inversion, which leads to longer computations, let us look at the homogeneous linear system $AX=0$, where $A$ is a generic $n\times n$ matrix and $X$ is a column vector. We will run Gaussian elimination on $A$, treating all the entries as formal variables. Of the three row operations (multiplying a row by a nonzero number, exchanging rows, adding a multiple of a row to another), we will only use the third one. We will also only aim to bring the matrix into upper triangular form and will not bother cleaning above the diagonal as in the matrix inversion problem.
Then for $n=2$, the matrix we get is
$$
\begin{pmatrix}
a_{11} & a_{12}\\
0 & \frac{a_{11}a_{22}-a_{12}a_{21}}{a_{11}}
\end{pmatrix}\ .
$$
For $n=3$, the final matrix looks like
$$
\begin{pmatrix}
a_{11} & \ast & \ast\\
0 & \frac{a_{11}a_{22}-a_{12}a_{21}}{a_{11}} & \ast \\
0 & 0 & \frac{D_3}{a_{11}a_{22}-a_{12}a_{21}}
\end{pmatrix}\ ,
$$
with
$$
D_3=a_{11}a_{22}a_{33}+a_{12}a_{23}a_{31}+a_{13}a_{21}a_{32}
-a_{12}a_{21}a_{33}-a_{13}a_{22}a_{31}-a_{11}a_{23}a_{32}\ .
$$
In general, I believe we should get a triangular matrix where the entry in the $(i,i)$ spot is $\frac{D_i}{D_{i-1}}$, where $D_i$ is the leading principal minor of size $i$, i.e., the determinant of the $i\times i$ submatrix sitting in the top left corner of the matrix.
As in the answer by Qiaochu, we see the determinant polynomial appearing ex nihilo, just from running the Gaussian elimination algorithm, provided we do this for a generic matrix where the entries have no numerical values but are treated as formal indeterminates.
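The claim about the pivots is easy to check numerically. Here is a minimal sketch (plain Python with exact rational arithmetic; the helper names `det` and `eliminate` are mine) that runs the elimination on a concrete matrix, using only the third row operation, and verifies that the $(i,i)$ entry equals $D_i/D_{i-1}$, with the convention $D_0=1$:

```python
from fractions import Fraction

def det(M):
    # determinant by Laplace expansion along the first row
    # (fine for the small matrices used here)
    n = len(M)
    if n == 1:
        return Fraction(M[0][0])
    total = Fraction(0)
    for j in range(n):
        minor = [row[:j] + row[j + 1:] for row in M[1:]]
        total += (-1) ** j * M[0][j] * det(minor)
    return total

def eliminate(M):
    # Gaussian elimination using only the third row operation
    # (add a multiple of a row to another), no pivoting --
    # legitimate for a generic matrix, where every pivot is nonzero
    A = [[Fraction(x) for x in row] for row in M]
    n = len(A)
    for k in range(n):
        for i in range(k + 1, n):
            m = A[i][k] / A[k][k]
            A[i] = [A[i][j] - m * A[k][j] for j in range(n)]
    return A

M = [[2, 7, 1], [3, 5, 8], [4, 1, 9]]
U = eliminate(M)
# leading principal minors D_1, D_2, D_3, with the convention D_0 = 1
D = [Fraction(1)] + [det([row[:k] for row in M[:k]]) for k in range(1, 4)]
for i in range(3):
    assert U[i][i] == D[i + 1] / D[i]  # pivot i is D_i / D_{i-1}
```

Of course this only tests one matrix; the point of running the algorithm on formal indeterminates is that the same identities then hold as identities of rational functions.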
The above is, if I remember correctly (I don't have the book in front of me), the way Steven J. Leon introduced determinants in his undergraduate textbook on linear algebra.
Note that for a system $AX=0$, the main question is whether there is a nontrivial solution, i.e., other than $X=0$. The determinant gives us an iff criterion for this question. This is because we are secretly doing projective geometry where there are no goofy mishaps like expected solutions not being found because they escaped to infinity.
This remark allows us to put the determinants and the motivation for them in a much wider context. Consider $n$ homogeneous polynomials $F_1(x_1,\ldots,x_n),\ldots,F_n(x_1,\ldots,x_n)$ of respective degrees $d_1,\ldots,d_n$. Let us write them as
$$
F_i(x)=\sum_{\alpha\in\mathbb{N}^n,|\alpha|=d_i}a_{i,\alpha}x^{\alpha}
$$
where $\alpha=(\alpha_1,\ldots,\alpha_n)$ is a multiindex, with length $|\alpha|:=\alpha_1+\cdots+\alpha_n$, and we used the shorthand notation for monomials $x^{\alpha}:=x_1^{\alpha_1}\cdots x_{n}^{\alpha_n}$.
One can now ask the same question as before, i.e., when does there exist a nontrivial solution $x:=(x_1,\ldots,x_n)\neq 0$ (in an algebraic closure of the field at hand) for the system
$$
\left\{
\begin{array}{ccc}
F_1(x) & = & 0\ , \\
& \vdots & \\
F_n(x) & = & 0\ .
\end{array}
\right.
$$
It turns out there is a unique (up to scale) irreducible polynomial in the coefficients of all the $F$'s, which vanishes iff a nontrivial solution $x$ exists.
This is the multidimensional resultant ${\rm Res}_{d_1,\ldots,d_n}(a)$, where $a$ denotes the collection of all the $a_{i,\alpha}$, seen as indeterminates or formal variables.
In the particular linear case $(d_1,\ldots,d_n)=(1,\ldots,1)$, this resultant is the determinant of the matrix $A$ made of the coefficients of the $n$ linear forms $F_1,\ldots,F_n$.
The up to scale ambiguity is usually lifted by requiring the resultant to be equal to $1$ when $F_i(x)=x_i^{d_i}$, for all $i$, $1\le i\le n$.
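In the classical case $n=2$, after dehomogenizing by setting $x_2=1$, this resultant can be computed as the determinant of the Sylvester matrix of the two univariate polynomials. A minimal sketch (plain Python, exact determinant by cofactor expansion; the function names are mine), checking that the resultant vanishes exactly when the two polynomials share a root:

```python
from fractions import Fraction

def det(M):
    # determinant by Laplace expansion along the first row
    n = len(M)
    if n == 1:
        return Fraction(M[0][0])
    total = Fraction(0)
    for j in range(n):
        minor = [row[:j] + row[j + 1:] for row in M[1:]]
        total += (-1) ** j * M[0][j] * det(minor)
    return total

def resultant(f, g):
    # Sylvester-matrix resultant of two univariate polynomials,
    # coefficients listed from highest degree down
    m, n = len(f) - 1, len(g) - 1
    N = m + n
    rows = []
    for i in range(n):  # n shifted copies of f
        rows.append([0] * i + list(f) + [0] * (N - m - 1 - i))
    for i in range(m):  # m shifted copies of g
        rows.append([0] * i + list(g) + [0] * (N - n - 1 - i))
    return det(rows)

# f = (z-1)(z-2) and g = (z-1)(z-3) share the root z = 1:
assert resultant([1, -3, 2], [1, -4, 3]) == 0
# f = (z-1)(z-2) and g = (z-4)(z-5) have no common root:
assert resultant([1, -3, 2], [1, -9, 20]) == 72
```

In the linear case each polynomial has degree one and the Sylvester matrix reduces (up to the conventions above) to the coefficient matrix $A$, recovering the determinant.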
We thus see Gaussian elimination and determinants as a special case of the much wider elimination theory, which, as mentioned in the MacTutor page linked to in Qiaochu's answer, started about two thousand years ago, I believe in the eighth of The Nine Chapters on the Mathematical Art.
Another characterization of resultants, and therefore determinants, in the context of this wider elimination theory is via the notion of Trägheitsformen or inertia forms.
Consider, in the ring $\mathbb{Q}[a]$, the ideal $I$ of polynomials $R(a)$ for which there exist polynomials $G_1(a,x),\ldots,G_n(a,x)$ in $\mathbb{Q}[a,x]$, and a multiindex $\gamma\in\mathbb{N}^n$, such that the Bézout relation
$$
x^{\gamma} R(a)=F_1(a,x) G_1(a,x)+\cdots+F_n(a,x)G_{n}(a,x)
$$
holds identically. Again $a$ denotes the collection of the $a_{i,\alpha}$, and $x$ denotes the collection of the $x_i$ variables. The polynomials $F_i$ are now seen as polynomials in both the $a$ and $x$ variables, with coefficients equal to $1$.
This ideal $I$ is nonzero, prime and principal. Its generator, unique up to scale, is the resultant ${\rm Res}_{d_1,\ldots,d_n}(a)$.
Specializing this to the linear case gives another characterization of determinants.
Now here is a research problem (perhaps too difficult). Take $r$ (instead of $n$) homogeneous polynomials
$F_i(x_1,\ldots,x_n)$ with indeterminate coefficients of the form
$$
F_i(x)=\sum_{\alpha\in\mathcal{A}_i}a_{i,\alpha}x^{\alpha}
$$
where the $\mathcal{A}_i$ are some subsets of $\mathbb{N}^n$. Then run the Buchberger algorithm to find a Gröbner basis for the ideal generated by the $F_i$.
The difficulty is to invent whatever combinatorial tool is needed to explicitly keep track of the polynomials or rational functions which arise in the intermediate and last steps of the process. This relates to the study of ideals of generic forms, with one of the famous conjectures in the area being the Fröberg Conjecture on the Hilbert series of such an ideal.
One should note that running an iterative algorithm on generic objects, which sounds like a very scary proposition, can sometimes be done. For instance, the Gram-Schmidt orthogonalization process results in explicit formulas involving Gram determinants. In the above elimination problem with $r=n=2$, dehomogenizing by setting $x_2=1$, i.e., considering the univariate nonhomogeneous polynomials $f_1(z):=F_1(z,1)$ and $f_2(z):=F_2(z,1)$, one can use the Euclidean algorithm to determine the gcd of the two polynomials. In the generic case, the intermediate steps involve subresultants given by explicit determinantal expressions. In the case $f_2=f_1'$, one obtains the subdiscriminants, which, for instance, can tell how many real roots a polynomial with real coefficients has.
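The Euclidean algorithm itself is easy to run concretely over $\mathbb{Q}$. A minimal sketch (plain Python; the helper names `polyrem` and `polygcd` are mine, and this only computes the gcd, without tracking the subresultant determinants), illustrating the case $f_2=f_1'$, where a nontrivial gcd detects a repeated root:

```python
from fractions import Fraction

def polyrem(f, g):
    # remainder of f by g; coefficients from highest degree down,
    # assuming g has a nonzero leading coefficient
    f = [Fraction(c) for c in f]
    g = [Fraction(c) for c in g]
    while len(f) >= len(g):
        c = f[0] / g[0]
        for i in range(len(g)):
            f[i] -= c * g[i]
        f = f[1:]  # the leading coefficient is now zero
    while f and f[0] == 0:  # strip remaining leading zeros
        f = f[1:]
    return f

def polygcd(f, g):
    # Euclidean algorithm; in the generic case every remainder has
    # the expected degree and the loop runs its full length
    f = [Fraction(c) for c in f]
    g = [Fraction(c) for c in g]
    while g:
        f, g = g, polyrem(f, g)
    return [c / f[0] for c in f]  # normalize to a monic gcd

# f1 = (z-1)^2 (z+2) has the repeated root z = 1, detected by
# gcd(f1, f1') = z - 1 (equivalently, the discriminant of f1 vanishes)
assert polygcd([1, 0, -3, 2], [3, 0, -3]) == [1, -1]
# coprime inputs give gcd 1
assert polygcd([1, -3, 2], [1, 5]) == [1]
```

The point of the generic theory is precisely that the coefficients appearing at each step of this loop, run on formal indeterminates, are ratios of explicit subresultant determinants.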
For more information on Trägheitsformen, see my recent article "A combinatorial formula for the coefficients of multidimensional resultants" and references therein.
For subdiscriminants, see the book by Basu, Pollack and Roy linked to in this MO answer:
https://mathoverflow.net/questions/118626/real-symmetric-matrix-has-real-eigenvalues-elementary-proof/123150#123150