We claim that
\begin{align*}
\int_{0}^{n} |x(x-1)(x-2)\cdots(x-n)| \, \mathrm{d}x
= \frac{2(n!)}{\log^2 n}(1 + o(1)) \tag{*}
\end{align*}
as $n\to\infty$. In particular, OP's limit diverges to $\infty$.
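(Aside, not needed for the argument: $(*)$ can be sanity-checked numerically. The sketch below uses mpmath and splits the integral at the integer roots so that the quadrature only ever sees a smooth piece; the helper name `integrand` and the sample values of $n$ are my own choices. The printed ratio should drift toward $1$, though only slowly in $n$.)

```python
# Numerical sanity check of (*): compare the integral with 2*n!/log(n)^2.
from mpmath import mp, mpf, quad, factorial, log

mp.dps = 30  # enough working precision for factorial-sized values

def integrand(x, n):
    # |x(x-1)(x-2)...(x-n)| evaluated as an mpmath number
    p = mpf(1)
    for k in range(n + 1):
        p *= x - k
    return abs(p)

for n in (10, 20, 40, 80):
    # integrate piecewise over [k, k+1] so each piece is smooth
    integral = sum(quad(lambda x: integrand(x, n), [k, k + 1]) for k in range(n))
    prediction = 2 * factorial(n) / log(n) ** 2
    print(n, integral / prediction)  # ratio should approach 1, slowly
```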
Step 1. Note that, if $k \in \{2, 3, \ldots, n-1\}$ and $x \in [k-1, k]$, then
$$ |x(x-1)(x-2)\cdots(x-n)| \leq k!(n+1-k)! = \frac{(n+1)!}{\binom{n+1}{k}}. $$
Now, by the unimodality of the binomial coefficients (the largest terms in the sum below are those with $k=2$ and $k=n-1$, and each of the remaining $n-4$ terms is at most $1/\binom{n+1}{3}$),
\begin{align*}
&\int_{1}^{n-1} |x(x-1)(x-2)\cdots(x-n)| \, \mathrm{d}x \\
&\qquad
\leq (n+1)! \sum_{k=2}^{n-1} \frac{1}{\binom{n+1}{k}}
\leq (n+1)! \left[ \frac{2}{\binom{n+1}{2}} + \frac{n-4}{\binom{n+1}{3}} \right],
\end{align*}
which is $\mathcal{O}(n!/n)$.
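Indeed, since $(n+1)!/\binom{n+1}{k} = k!\,(n+1-k)!$, the bracketed bound evaluates explicitly to
$$ (n+1)! \left[ \frac{2}{\binom{n+1}{2}} + \frac{n-4}{\binom{n+1}{3}} \right] = 4\,(n-1)! + 6\,(n-4)\,(n-2)! = \mathcal{O}\!\left(\frac{n!}{n}\right). $$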
Step 2. For $x \in [0, 1]$, Stirling's approximation shows that
\begin{align*}
|x(x-1)(x-2)\cdots(x-n)|
&= \frac{x \Gamma(n+1-x)}{\Gamma(1-x)}
= \frac{x n^{-x}}{\Gamma(1-x)} n! (1 + \mathcal{O}(n^{-1})),
\end{align*}
where the implied constant in the error term $\mathcal{O}(n^{-1})$ is uniform in $x \in [0, 1]$. So,
\begin{align*}
&\int_{0}^{1} |x(x-1)(x-2)\cdots(x-n)| \, \mathrm{d}x \\
&= n! (1 + \mathcal{O}(n^{-1})) \int_{0}^{1} \frac{x n^{-x}}{\Gamma(1-x)} \, \mathrm{d}x \\
&= \frac{n!}{\log^2 n} (1 + \mathcal{O}(n^{-1})) \int_{0}^{\log n} \frac{x e^{-x}}{\Gamma(1-x/\log n)} \, \mathrm{d}x.
\end{align*}
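The last equality is just the substitution $t = x \log n$ (so that $n^{-x} = e^{-t}$ and $\mathrm{d}x = \mathrm{d}t / \log n$), with the dummy variable then renamed back to $x$:
$$ \int_{0}^{1} \frac{x n^{-x}}{\Gamma(1-x)} \, \mathrm{d}x
= \int_{0}^{\log n} \frac{(t/\log n) e^{-t}}{\Gamma(1 - t/\log n)} \, \frac{\mathrm{d}t}{\log n}
= \frac{1}{\log^2 n} \int_{0}^{\log n} \frac{t e^{-t}}{\Gamma(1 - t/\log n)} \, \mathrm{d}t. $$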
Moreover, since $\Gamma(s) \geq 1$ for $s \in (0, 1]$, the integrand below is dominated by the integrable function $x e^{-x}$, and so the dominated convergence theorem gives
$$ \int_{0}^{\log n} \frac{x e^{-x}}{\Gamma(1-x/\log n)} \, \mathrm{d}x \to \int_{0}^{\infty} x e^{-x} \, \mathrm{d}x = 1 $$
as $n \to \infty$. Finally, the integrand $|x(x-1)(x-2)\cdots(x-n)|$ is invariant under the substitution $x \mapsto n - x$, so its integral over $[n-1, n]$ equals its integral over $[0, 1]$. Combining this with Steps 1 and 2, we conclude $\text{(*)}$.
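In summary, splitting $[0, n]$ into the three ranges treated above,
\begin{align*}
\int_{0}^{n} |x(x-1)(x-2)\cdots(x-n)| \, \mathrm{d}x
&= \frac{n!}{\log^2 n}(1 + o(1)) + \mathcal{O}\!\left(\frac{n!}{n}\right) + \frac{n!}{\log^2 n}(1 + o(1)) \\
&= \frac{2(n!)}{\log^2 n}(1 + o(1)),
\end{align*}
where the three terms come from $[0, 1]$, $[1, n-1]$, and $[n-1, n]$ respectively, and the middle term is negligible because $n!/n = o(n!/\log^2 n)$.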