The argument I found essentially copies the format of Theorem 5.22 from Axler (4th edition) about the "existence, uniqueness, and degree of minimal polynomial[s]".
We argue by induction on $\text{dim}\: V$. If $\text{dim}\: V = 0$, then every map is the $0$ map, and its minimal polynomial is the constant polynomial $1$. We can also directly verify the case $\text{dim}\: V = 1$: every map $T$ that can be represented by a rational coefficient matrix with respect to some basis $\{b\}$ of $V$ has the form $T(xb) = kxb$, where $k$ is a rational scalar. The minimal polynomial is then $z - k$, which is indeed rational. (It cannot be of lower degree, i.e. degree $0$, since the only constant polynomial whose associated operator maps every vector to $0$ is the polynomial $0$, and that is not monic.) Now comes the induction step, which will mimic 5.22:
Assume that the result holds whenever $\text{dim}\: V < n$. Now let $\text{dim}\: V = n$, and consider an appropriate $T$. Let $v \neq 0$ be a vector with rational coordinates in the chosen basis (e.g. a basis vector), and consider the smallest positive integer $k$ such that $\{v, T(v), \dots, T^k(v)\}$ becomes linearly dependent:
$$c_k T^k(v) = c_{k-1} T^{k-1}(v) + \cdots + c_1 T(v) + c_0 v$$
Here $c_k$ cannot be $0$: otherwise $\{v, T(v), \dots, T^{k-1}(v)\}$ would already be linearly dependent, contradicting the minimality of $k$. So divide through by $c_k$ and note that we have a matrix equation:
$$T^k(v) = [T^{k-1}(v), \dots, v]\, [c'_{k-1}, \dots, c'_0]^T$$
where $c'_i = \frac{c_i}{c_k}$, and where the LHS is an $n \times 1$ vector, and the RHS is an $n \times k$ matrix (call it $M$, that is, let $M = [T^{k-1}(v),...,v]$) multiplied by a $k \times 1$ vector. Now, we know that $k \leq n$ by the fact that $\text{dim}\: V = n$.
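As a concrete sketch of this step (the $3 \times 3$ matrix `T` and vector `v` below are my own illustrative choices, not part of the argument), the smallest such $k$ can be found with exact rational arithmetic:

```python
from fractions import Fraction as F

def rank(rows):
    """Rank of a rational matrix, given as a list of rows, by exact Gaussian elimination."""
    m = [row[:] for row in rows]
    r = 0
    for c in range(len(m[0]) if m else 0):
        piv = next((i for i in range(r, len(m)) if m[i][c] != 0), None)
        if piv is None:
            continue
        m[r], m[piv] = m[piv], m[r]
        for i in range(len(m)):
            if i != r and m[i][c] != 0:
                f = m[i][c] / m[r][c]
                m[i] = [a - f * b for a, b in zip(m[i], m[r])]
        r += 1
    return r

def matvec(A, x):
    return [sum(a * b for a, b in zip(row, x)) for row in A]

# Illustrative rational matrix and rational starting vector.
T = [[F(2), F(0), F(0)],
     [F(0), F(2), F(0)],
     [F(0), F(0), F(3)]]
v = [F(1), F(0), F(1)]

# Grow the list v, Tv, T^2 v, ... until it first becomes linearly dependent.
powers = [v]
while rank(powers) == len(powers):
    powers.append(matvec(T, powers[-1]))
k = len(powers) - 1
print(k)  # 2: here k < n = 3, since T^2 v = -6 v + 5 Tv
```

Since the $k+1$ vectors live in an $n$-dimensional space, the loop always terminates with $k \leq n$, matching the bound above.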
Also, since $T$ has a rational matrix and $v$ has rational coordinates, every $T^i(v)$ has rational coordinates, so we may interpret $M$ as a matrix over the field $\mathbb{Q}$; and because its columns are independent over $\mathbb{R}$ (by the minimality of $k$), they are in particular independent over $\mathbb{Q}$. Hence $k = \text{rank of the columns of } M = \text{rank of the rows of } M$ (as the column rank of a matrix always equals its row rank).
Now, since the rows of $M$ have rank $k$, we can find a matrix $L$ over $\mathbb{Q}$ (a left inverse of $M$) such that $LM$ is the $k \times k$ identity matrix. Applying $L$ to both sides of the equation above, the LHS $L T^k(v)$, whatever it may be, is a vector of rational numbers, and so $[c'_{k-1}, \dots, c'_0]^T = L T^k(v)$ is a vector of rational numbers.
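One concrete choice of such an $L$ (there are many): because $M$ has full column rank $k$, the $k \times k$ rational matrix $M^T M$ is invertible over $\mathbb{Q}$, and we may take

```latex
L = (M^T M)^{-1} M^T,
\qquad
LM = (M^T M)^{-1}(M^T M) = I_k .
```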
Finally, we now know that $$p(z) = z^k - c'_{k-1} z^{k-1} - \cdots - c'_1 z - c'_0$$ is a monic rational polynomial with $p(T)v = 0$. In fact $p(T)T^j(v) = T^j p(T)(v) = 0$ for every $j$, so $v, T(v), \dots, T^{k-1}(v)$ all lie in the kernel of $p(T)$, giving $\text{dim ker}\: p(T) \geq k$ and hence $\text{dim range}\: p(T) \leq n - k < n$. We now copy the idea of the proof of 5.22 to finish the induction step: $\text{range}\: p(T)$ is invariant under $T$, and since $p(T)$ is a rational matrix, its column space has a rational basis, so the restriction of $T$ to $\text{range}\: p(T)$ is again represented by a rational matrix. By the inductive hypothesis, the minimal polynomial $q$ of $T$ restricted to this smaller subspace is rational, and as in 5.22 the minimal polynomial of $T$ equals $q(z)p(z)$, which is rational.
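To make the whole construction concrete (again with my own illustrative $2 \times 2$ matrix, chosen so that $k = n = 2$ and $M$ is square), one can solve $M\,[c'_{k-1}, \dots, c'_0]^T = T^k(v)$ exactly over $\mathbb{Q}$ and check that the resulting monic polynomial annihilates $v$:

```python
from fractions import Fraction as F

def matvec(A, x):
    return [sum(a * b for a, b in zip(row, x)) for row in A]

def solve(A, b):
    """Solve the square rational system A x = b by exact Gauss-Jordan elimination."""
    n = len(A)
    m = [A[i][:] + [b[i]] for i in range(n)]
    for c in range(n):
        piv = next(i for i in range(c, n) if m[i][c] != 0)
        m[c], m[piv] = m[piv], m[c]
        m[c] = [a / m[c][c] for a in m[c]]
        for i in range(n):
            if i != c and m[i][c] != 0:
                f = m[i][c]
                m[i] = [a - f * b_ for a, b_ in zip(m[i], m[c])]
    return [row[n] for row in m]

# Illustrative rotation-by-90-degrees matrix; here the smallest k equals n = 2.
T = [[F(0), F(-1)],
     [F(1), F(0)]]
v = [F(1), F(0)]
Tv = matvec(T, v)
T2v = matvec(T, Tv)

# Columns of M are T^{k-1} v, ..., v; solve M c' = T^k v for the rational c'_i.
M = [[Tv[i], v[i]] for i in range(2)]
c1, c0 = solve(M, T2v)
print(c1, c0)  # 0 -1, i.e. p(z) = z^2 - 0*z - (-1) = z^2 + 1

# Verify that p(T) annihilates v: T^2 v - c'_1 T v - c'_0 v = 0.
pTv = [T2v[i] - c1 * Tv[i] - c0 * v[i] for i in range(2)]
print(pTv)  # [Fraction(0, 1), Fraction(0, 1)]
```

Dividing the dependence relation through by $c_k$, as in the argument, is what makes the polynomial monic; exact `Fraction` arithmetic keeps every coefficient in $\mathbb{Q}$.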
We are done.