Update: Even in the case where $n$ is even, the association argument doesn't quite work, because I got an inequality flipped and forgot about some signs - the amount by which the negative terms increase may be more or less than the amount by which the associated positive terms decrease. However, it's easy to write down an expression for the difference, which I expect to be small, and perhaps it will be possible to bound it by the decrease of the other terms. I'll get back to this when I have more time.
I can prove this in the case where $n$ is even, and I provide a modification which should prove the case where $n$ is odd, though I feel a bit uneasy about it (I don't see a mistake, but I haven't checked explicit examples to verify it).
This can be viewed as a random walk on a 3D lattice by associating to each point the triple $(x_1,x_2,x_3)$ (i.e. the coordinates tell you how many dollars each player who is not the dealer has), where the legal steps are $\{(3,-1,-1), (-3,1,1), (-1,3,-1), (1,-3,1), (-1,-1,3), (1,1,-3)\}$.
If, after this walk, you are at $(x,y,z)$, then the dealer has $-x-y-z$ dollars, and the condition that the dealer has the most dollars is equivalent to the equations
$$\begin{gather}
-x-y-z>x\\
-x-y-z>y\\
-x-y-z>z.
\end{gather}$$
This is an open region bounded by 3 hyperplanes that each contain the origin.
Taking a linear transformation, we can map the legal steps to the unit vectors, and this sends the 3 hyperplanes to 3 other hyperplanes. To be specific, I'll pick the linear transformation so that $(3,-1,-1)\mapsto(1,0,0), (-1,3,-1)\mapsto(0,1,0)$, etc.
We can also specify what the hyperplanes are: if either $x_1$ or $x_2$ loses 3 dollars, the dealer is tied with $x_3$ for the most dollars, so one of the hyperplanes is the $z=0$ hyperplane. Similarly, the other two hyperplanes are the $x=0$ and $y=0$ hyperplanes, so the region in which the dealer has the most dollars is one of the 8 regions bounded by these 3 hyperplanes.
By symmetry, the probability of being in this region is 1/8 the probability of not being on one of the 3 hyperplanes.
The problem is now equivalent to: Consider the usual random walk on a 3D cubical lattice, starting at the origin. Show that, after $n+2$ steps, the probability of at least one of the coordinates being 0 is strictly less than that for $n$ steps. (So far, this is basically equivalent to what lonza did, except without the calculations.)
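To make the restated claim concrete, here is a quick exact computation of these probabilities for the standard walk on the 3D cubic lattice - a minimal Python sketch using exact rational arithmetic (the function names are my own):

```python
from fractions import Fraction

# The six unit steps of the standard random walk on the 3D cubic lattice.
STEPS = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]

def step(dist):
    """Advance the exact probability distribution of the walk by one step."""
    new = {}
    for (x, y, z), mass in dist.items():
        for dx, dy, dz in STEPS:
            pt = (x + dx, y + dy, z + dz)
            new[pt] = new.get(pt, Fraction(0)) + mass / 6
    return new

def prob_some_zero(n):
    """Probability that at least one coordinate is 0 after n steps from the origin."""
    dist = {(0, 0, 0): Fraction(1)}
    for _ in range(n):
        dist = step(dist)
    return sum(mass for pt, mass in dist.items() if 0 in pt)
```

The first few values are $1, 1, \frac79, \dots$, and in the small cases I checked, the probability after $n+2$ steps comes out strictly less than the probability after $n$ steps.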
First I'll show that the probability of at least one of the coordinates being 0 decreases when going from $n$ steps to $n+2$ steps when $n$ is even, and then I'll modify the proof for $n$ odd. I don't see anything wrong with the proof, but I haven't verified it by a concrete example.
It suffices to show this claim on all 3D torus grid graphs (i.e. Cartesian products of 3 cycle graphs), because the first $n$ steps of a random walk on the cubical 3D lattice update the probability distribution in the same way as they do on a sufficiently large torus grid graph.
Consider a $p,q,r$ torus grid graph (i.e. the Cartesian product of $C_p,C_q,C_r$), and label the vertices with elements of $\mathbf{Z}/p\mathbf{Z}\times\mathbf{Z}/q\mathbf{Z}\times\mathbf{Z}/r\mathbf{Z}$. Consider the $pqr$-dimensional $\mathbf{C}$-vector space of formal linear combinations of the vertices. To write an element of this vector space as a column vector, order the vertices as follows: The first vertex is $(0,0,0)$, followed by all vertices labelled $(x,0,0)$ for some nonzero $x$ (in any order), followed by the vertices labelled $(0,y,0)$ for some nonzero $y$, then $(0,0,z)$ for some nonzero $z$, then $(x,y,0)$ for nonzero $x,y$, then $(x,0,z)$ for nonzero $x,z$, then $(0,y,z)$ for nonzero $y,z$, and finally all remaining points in any order.
Let $A$ be the update matrix for the Markov chain defined by taking one step of the random walk on this graph. Let $v$ denote the vector with 1s at all entries that correspond to vertices with at least one 0 in the label. Let $w$ denote the vector with 1 in the first entry and 0 elsewhere (which corresponds to the vertex with label $(0,0,0)$). Then the probability that, after $n$ steps of the random walk on this torus grid graph, we end up at a vertex with a 0 somewhere in its label is $v^TA^nw$.
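As a sanity check on this setup, $v^TA^nw$ can be computed without building $A$ explicitly, by pushing the distribution through one step at a time (a Python sketch; the side lengths and the names `torus_step` and `v_An_w` are my own choices):

```python
from fractions import Fraction

p, q, r = 9, 9, 9  # torus side lengths, chosen large enough that a short walk can't wrap

def torus_step(dist):
    """One application of the update matrix A: one step of the walk on the torus grid graph."""
    new = {}
    for (x, y, z), mass in dist.items():
        for pt in [((x + 1) % p, y, z), ((x - 1) % p, y, z),
                   (x, (y + 1) % q, z), (x, (y - 1) % q, z),
                   (x, y, (z + 1) % r), (x, y, (z - 1) % r)]:
            new[pt] = new.get(pt, Fraction(0)) + mass / 6
    return new

def v_An_w(n):
    """v^T A^n w: probability of a 0 somewhere in the label after n steps from (0,0,0)."""
    dist = {(0, 0, 0): Fraction(1)}
    for _ in range(n):
        dist = torus_step(dist)
    return sum(mass for (x, y, z), mass in dist.items() if 0 in (x, y, z))
```

For $n$ small relative to the side lengths this agrees with the walk on the infinite lattice, as claimed.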
We will now express $A=UDU^*$ where $D$ is diagonal, $U$ is unitary, and $U^*$ denotes conjugate transpose (we know by the spectral theorem and the fact that $A$ is symmetric that we can pick $U$ to be real and orthogonal, but we will use a different matrix. However, this gives us that the diagonal entries in $D$ are all real, which we will use later). This is equivalent to picking matrices $U$ and $D$ such that each column of $U$ is an eigenvector of $A$ of unit norm, the columns of $U$ are orthogonal (i.e. have inner product 0, where the inner product of vectors $v_1,v_2$ is $v^*_1v_2$), and $D$ is a diagonal matrix where the $k$th diagonal entry is the eigenvalue of the $k$th column of $U$.
To do this, associate each column of $U$ to a vertex of the graph using the same ordering as before. Now it suffices to associate to each vertex of the graph an eigenvector of unit norm. We will first associate each vertex to an eigenvector, and then we will specify the normalization.
Pick primitive roots of unity $\zeta_p,\zeta_q,\zeta_r$ (where $\zeta_p$ is a primitive $p$th root of unity, and similarly for $\zeta_q,\zeta_r$). We define the association of a vertex to an eigenvector as follows: consider an arbitrary vertex, and let its label be $(x_0,y_0,z_0)$. Associate to it the eigenvector in which the coefficient of the vertex $(x,y,z)$ is $\zeta_p^{x_0x}\zeta_q^{y_0y}\zeta_r^{z_0z}$. This is well-defined because $\zeta_p^t$ only depends on $t\pmod{p}$, and similarly for $q,r$. To see that this is an eigenvector, note that the local update at any given point is the same as at any other point, up to scaling by a constant complex number.
To define the normalization, note that all entries in this vector are roots of unity and therefore have unit norm, so the norm of the eigenvector is $\sqrt{pqr}$; divide the vector by $\sqrt{pqr}$.
This defines our matrix $U$, and $D$ comes from taking the corresponding eigenvalues.
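As a numerical check of this diagonalization (a Python sketch; the side lengths and the sample vertex are arbitrary choices of mine), one can verify $Au=\lambda u$ for the vector $u$ associated to $(x_0,y_0,z_0)$, where the eigenvalue works out to be $\frac16(\zeta_p^{x_0}+\zeta_p^{-x_0}+\zeta_q^{y_0}+\zeta_q^{-y_0}+\zeta_r^{z_0}+\zeta_r^{-z_0})$:

```python
import cmath, math

p, q, r = 3, 4, 5  # small torus side lengths (arbitrary)

def eigvec(x0, y0, z0):
    """Unnormalized eigenvector: coefficient of (x,y,z) is zeta_p^{x0 x} zeta_q^{y0 y} zeta_r^{z0 z}."""
    return {(x, y, z): cmath.exp(2j * math.pi * (x0 * x / p + y0 * y / q + z0 * z / r))
            for x in range(p) for y in range(q) for z in range(r)}

def apply_A(vec):
    """Apply the walk's update matrix A to a formal linear combination of vertices."""
    out = {}
    for (x, y, z), c in vec.items():
        for pt in [((x + 1) % p, y, z), ((x - 1) % p, y, z),
                   (x, (y + 1) % q, z), (x, (y - 1) % q, z),
                   (x, y, (z + 1) % r), (x, y, (z - 1) % r)]:
            out[pt] = out.get(pt, 0) + c / 6
    return out

def eigval(x0, y0, z0):
    """(zeta_p^{x0} + zeta_p^{-x0} + zeta_q^{y0} + zeta_q^{-y0} + zeta_r^{z0} + zeta_r^{-z0}) / 6."""
    return (2 * math.cos(2 * math.pi * x0 / p) + 2 * math.cos(2 * math.pi * y0 / q)
            + 2 * math.cos(2 * math.pi * z0 / r)) / 6

# Check A u = lambda u for a sample vertex.
u = eigvec(1, 2, 3)
Au = apply_A(u)
err = max(abs(Au[k] - eigval(1, 2, 3) * u[k]) for k in u)
```

Note that the eigenvalue attached to $(0,0,0)$ is exactly 1, as expected for a stochastic matrix.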
Now $v^TA^nw=v^TUD^nU^*w$, and we want to show that this decreases when we increase $n$ to $n+2$. To avoid dealing with the normalization constant, we will actually show that $pqr\,v^TA^nw=(\sqrt{pqr}\,v^TU)D^n(\sqrt{pqr}\,U^*w)$ decreases.
So far we haven't assumed $n$ is even, but if $n$ is even then we can write $n=2m$ for some $m$. Then $v^TA^nw=v^TU(D^2)^mU^*w$, and now the diagonal entries of $D^2$ are all nonnegative reals, which will be useful later. Moreover, all diagonal entries are at most 1, because the eigenvalues of any stochastic matrix have absolute value at most 1. Also, by the Perron-Frobenius theorem, if at least one of $p,q,r$ is odd then exactly one diagonal entry of $D^2$ equals 1, and otherwise exactly 2 do.
We now compute $\sqrt{pqr}v^TU$ and $\sqrt{pqr}U^*w$. Computing $\sqrt{pqr}U^*w$ is easy - it is the all 1s vector, because each entry corresponds to the coefficient of $(0,0,0)$ in one of the eigenvectors we defined, and these are all 1 (and the $\sqrt{pqr}$ cancels out the normalization factor from earlier).
To compute $\sqrt{pqr}v^TU$: it is the vector where the coefficient of a vertex $(x_0,y_0,z_0)$ is given by looking at the eigenvector associated to that vertex and taking the sum of the coefficients of all vertices in the graph whose label contains a 0. We can compute this using the principle of inclusion-exclusion: letting $l_{x_0,y_0,z_0}(x,y,z)$ denote $\sqrt{pqr}$ times the coefficient of $(x,y,z)$ in the eigenvector associated to $(x_0,y_0,z_0)$ (that is, $l_{x_0,y_0,z_0}(x,y,z)=\zeta_p^{x_0x}\zeta_q^{y_0y}\zeta_r^{z_0z}$), the sum equals
$$\sum_{y,z}l_{x_0,y_0,z_0}(0,y,z)+\sum_{x,z}l_{x_0,y_0,z_0}(x,0,z)+\sum_{x,y}l_{x_0,y_0,z_0}(x,y,0)-\sum_{z}l_{x_0,y_0,z_0}(0,0,z)-\sum_{y}l_{x_0,y_0,z_0}(0,y,0)-\sum_{x}l_{x_0,y_0,z_0}(x,0,0)+l_{x_0,y_0,z_0}(0,0,0).$$
If none of $x_0,y_0,z_0$ is 0, then this sum is 1 - indeed, the sum of $l_{x_0,y_0,z_0}(x,y,z)$ along any line parallel to an axis is 0, because the sum of all the powers of a root of unity other than 1 is 0, so the only nonzero term out of the above 7 is $l_{x_0,y_0,z_0}(0,0,0)=1$.
If $x_0=0$ but $y_0,z_0\ne0$, then the sum is $-(p-1)$: the sum over any line parallel to the $y$- or $z$-axis is still 0, so the first 5 terms vanish (the first three are sums over planes, which decompose into such lines), but the sum over the line parallel to the $x$-axis passing through $(0,y,z)$ is $p\zeta_q^{y_0y}\zeta_r^{z_0z}$, so the sixth term is $-p$, and the seventh term is $l_{x_0,y_0,z_0}(0,0,0)=1$. Similarly, if $y_0=0$ but $x_0,z_0\ne0$, the sum is $-(q-1)$, and if $z_0=0$ but $x_0,y_0\ne0$, the sum is $-(r-1)$.
If $x_0=y_0=0$ but $z_0\ne0$, the sum is $(p-1)(q-1)$ by a similar analysis, and symmetric statements hold when another pair of coordinates is 0.
If $x_0=y_0=z_0=0$ then the sum is just the number of points with a 0 somewhere in their label, since $l_{0,0,0}(x,y,z)=1$ for all $(x,y,z)$; so the sum is $pqr-(p-1)(q-1)(r-1)$.
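These four cases are easy to confirm numerically - a Python sketch (the side lengths are arbitrary, and `coeff_sum` is my name for the sum in question):

```python
import cmath, math

p, q, r = 3, 4, 5  # arbitrary small side lengths

def coeff_sum(x0, y0, z0):
    """Sum of zeta_p^{x0 x} zeta_q^{y0 y} zeta_r^{z0 z} over vertices (x,y,z) whose label contains a 0."""
    return sum(cmath.exp(2j * math.pi * (x0 * x / p + y0 * y / q + z0 * z / r))
               for x in range(p) for y in range(q) for z in range(r)
               if 0 in (x, y, z))

cases = [
    (coeff_sum(1, 1, 1), 1),                                       # none of x0, y0, z0 is 0
    (coeff_sum(0, 1, 1), -(p - 1)),                                # x0 = 0 only
    (coeff_sum(0, 0, 1), (p - 1) * (q - 1)),                       # x0 = y0 = 0 only
    (coeff_sum(0, 0, 0), p * q * r - (p - 1) * (q - 1) * (r - 1)), # x0 = y0 = z0 = 0
]
err = max(abs(s - t) for s, t in cases)
```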
To show that $v^TU(D^2)^mU^*w$ decreases when increasing $m$, we look at the sum componentwise. Each vertex of the graph corresponds to one addend (noting that $D^2$ is diagonal, and the fact that $U^*w$ has all equal entries implies that the sum is just the average of the entries in $v^TU(D^2)^m$), and the negative addends are the ones that correspond to vertices where the sum we just computed is negative, i.e. the vertices where exactly one coordinate of the label is 0. Because each entry in $D^2$ is between 0 and 1 (inclusive), all positive addends decrease in magnitude, decreasing the total sum. The negative addends also decrease in magnitude, but we can account for this as follows: associate to each vertex with label $(x,y,0)$ (with $x,y\ne0$) the vertex $(x,0,0)$. The amount by which multiplying the coefficient of $(x,y,0)$ by its corresponding eigenvalue increases the sum is less than the amount by which multiplying the coefficient of $(x,0,0)$ by its corresponding eigenvalue decreases the sum; the reason is that the eigenvalue corresponding to $(x,y,0)$ is less than the eigenvalue corresponding to $(x,0,0)$ (the former eigenvalue is proportional to $\zeta_p^x+\zeta_p^{-x}+\zeta_q^y+\zeta_q^{-y}+2$, and the latter replaces $\zeta_q^y+\zeta_q^{-y}$ with 2, which is greater), and each vertex of the form $(x,0,0)$ is mapped to by $q-1$ vertices, so the sum of the original values is the same. Similarly, associate to each vertex with label $(x,0,z)$ the vertex $(0,0,z)$, and to each vertex with label $(0,y,z)$ the vertex $(0,y,0)$. This completes the proof in the case that $n$ is even.
In the case that $n$ is odd, it suffices to show that if, instead of starting the random walk at the origin, we start at a point 1 step away from the origin, then the probability that at least one coordinate is 0 after $n$ steps is strictly smaller than the corresponding probability after $n-2$ steps, where we can assume $n$ is even. The reason this suffices is that, by symmetry, the probability that at least one coordinate is 0 after $n$ steps starting from the origin is the same as the probability that at least one coordinate is 0 after $n-1$ steps starting from one step away from the origin.
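The symmetry claim in this reduction can be checked directly with the same kind of exact computation as before (a Python sketch; `walk_prob` is my name): conditioning on the first step, and using the coordinate symmetries under which all six neighbors of the origin are equivalent, the probability after $n$ steps from the origin equals the probability after $n-1$ steps from $(0,0,1)$.

```python
from fractions import Fraction

STEPS = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]

def walk_prob(start, n):
    """Probability that at least one coordinate is 0 after n steps of the walk from `start`."""
    dist = {start: Fraction(1)}
    for _ in range(n):
        new = {}
        for (x, y, z), mass in dist.items():
            for dx, dy, dz in STEPS:
                pt = (x + dx, y + dy, z + dz)
                new[pt] = new.get(pt, Fraction(0)) + mass / 6
        dist = new
    return sum(mass for pt, mass in dist.items() if 0 in pt)
```

For example, `walk_prob((0, 0, 0), 5)` equals `walk_prob((0, 0, 1), 4)`.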
The probability that, after $n$ steps starting 1 step away from the origin, at least one of the coordinates is 0 is the same as the probability that, after $n$ steps starting at the origin, either the $x$- or $y$-coordinate is 0 or the $z$-coordinate is 1. This means we can adjust our computation above by changing $v$ to have 1s in the positions corresponding to vertices in our new set. The computation above stays the same, except that now the coefficient of $(x,y,z)$ is multiplied by $\zeta_r^z$.
Consider taking the sum, over the line parallel to the $z$-axis passing through $(x,y,0)$, of the new coefficient at each vertex times the corresponding eigenvalue raised to the power $2m$. It is
$$\sum_{i=0}^{r-1}\zeta_r^i(\zeta_r^i+\zeta_r^{-i})^{2m}$$
which is 0 because the constant term when expanding $\zeta_r^i(\zeta_r^i+\zeta_r^{-i})^{2m}$ is 0: every exponent of $\zeta_r$ that appears is odd and at most $2m+1$ in absolute value (here, we assume $r>2m+1$, which is fine - this just means that if we want to show the claim for up to $2m+1$ steps, we need to pick $r>2m+1$).
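The vanishing of this sum when $r$ is large relative to $m$ is easy to confirm numerically (a Python sketch; `twisted_sum` is my name):

```python
import cmath, math

def twisted_sum(r, m):
    """sum_{i=0}^{r-1} zeta_r^i (zeta_r^i + zeta_r^{-i})^{2m} for zeta_r = exp(2 pi i / r)."""
    zeta = lambda k: cmath.exp(2j * math.pi * k / r)
    return sum(zeta(i) * (zeta(i) + zeta(-i)) ** (2 * m) for i in range(r))

# The sum vanishes whenever r > 2m + 1, since every exponent of zeta_r that
# appears in the expansion is odd and at most 2m + 1 in absolute value.
vals = [abs(twisted_sum(r, m)) for m in range(1, 5) for r in range(2 * m + 2, 2 * m + 6)]
```

By contrast, at the boundary case $r=2m+1$ the sum does not vanish (e.g. $r=3$, $m=1$ gives 3).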
We can then break up the sum as follows: back in the origin case (where we start at the origin instead of 1 step away), start with the vector with 1 at every vertex. Add the vector with $-p$ at each vertex with $x$-coordinate 0, and add the similar vectors with $-q$ for $y$-coordinate 0 and $-r$ for $z$-coordinate 0 (so the origin now has coefficient $1-p-q-r$). Add the vector with $pq$ at each vertex with $x$- and $y$-coordinate 0, and similarly the vectors with $pr$ and $qr$ for the other pairs (so the origin ends up with coefficient $1-p-q-r+pq+pr+qr=pqr-(p-1)(q-1)(r-1)$, matching the earlier computation).
If we scale the coefficient of $(x,y,z)$ by $\zeta_r^z$, multiply the coefficient of each vertex by its corresponding eigenvalue raised to the power $2m$, and take the sum, this is the quantity we want to show decreases as we increase $m$.
Starting with the all 1s vector, we can break this up into a sum over lines parallel to the $z$-axis, so that component contributes 0 regardless of $m$ (as long as $r$ is large enough).
A similar argument shows that the vector with $-p$ where $x=0$ contributes 0, as does the vector with $-q$ where $y=0$ and the vector with $pq$ where $x=y=0$.
We are left with the vector with $-r$ where $z=0$, the vector with $pr$ where $x=z=0$, and the vector with $qr$ where $y=z=0$. But we can use the argument from before, where we associate to each vertex $(x,y,0)$ (where $y$ can be 0) the vertex $(x,0,0)$; the association works out the same way as before because we are only dealing with vertices where $z=0$.