Consider a somewhat primitive method of shuffling a stack of $n$ cards: in each step, take the top card and insert it at one of the $n$ possible positions (above, between or below the remaining $n-1$ cards), chosen uniformly at random.
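For concreteness, here is one step of this shuffle as a minimal Python sketch (the function name and the list representation are illustrative choices of mine):

```python
import random

def shuffle_step(stack):
    """One shuffling step: remove the top card and reinsert it at one
    of the n possible positions, chosen uniformly at random.
    Slot 0 puts it back on top, slot n-1 moves it to the bottom."""
    top, rest = stack[0], stack[1:]
    j = random.randrange(len(stack))  # n equally likely insertion slots
    return rest[:j] + [top] + rest[j:]

print(shuffle_step(list(range(8))))  # e.g. [1, 2, 0, 3, 4, 5, 6, 7]
```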
Start with a well-defined configuration, and then track the entropy of the distribution over the possible permutations of the stack as these shuffling steps are applied. It starts off at $0$. Initially most sequences of moves lead to distinct permutations, so after $k$ steps we should have roughly $n^k$ equiprobable states, and the entropy should initially grow as $k\log n$. For $k\to\infty$ it should converge to the entropy corresponding to perfect shuffling, $\log n!\approx n(\log n-1)$.
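To see these regimes numerically, the full distribution over all $n!$ permutations can be evolved exactly for small $n$; a sketch (permutations as dictionary keys, probabilities as values):

```python
from math import factorial, fsum, log

def step_distribution(dist, n):
    """One shuffle step applied to a distribution over permutations:
    each permutation sends probability 1/n to each of the n permutations
    obtained by reinserting its top card at one of the n positions."""
    new = {}
    for perm, p in dist.items():
        top, rest = perm[0], perm[1:]
        for j in range(n):
            succ = rest[:j] + (top,) + rest[j:]
            new[succ] = new.get(succ, 0.0) + p / n
    return new

def entropy(dist):
    """Shannon entropy in nats; fsum keeps rounding error small, which
    matters once the deviation from log(n!) becomes tiny."""
    return -fsum(p * log(p) for p in dist.values() if p > 0.0)

n = 5
dist = {tuple(range(n)): 1.0}  # point mass on the starting configuration
for k in range(1, 16):
    dist = step_distribution(dist, n)
    print(k, entropy(dist))  # grows like k*log(n), saturates at log(n!)
```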
What I'd like to know is how this convergence takes place. I have no idea how to approximate the distribution as it approaches perfect shuffling. I computed the entropy for $n=8$ and $k$ up to $50$; here's a plot of the natural logarithm of the deviation from the perfect-shuffling entropy $\log n!$:
The red crosses show the computed values; the green line is a linear fit to the last $30$ of them, with slope about $-0.57$. So the entropy approaches its maximal value with a deviation decaying roughly as $\exp(-0.57k)$. For $n=7$ the slope is about $-0.67$, and for $n=9$ it's about $-0.50$. How can we derive this behaviour?
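For reference, the fit can be reproduced by reusing `step_distribution` and `entropy` from the sketch above (a sketch using `numpy.polyfit`; at double precision the deviation eventually drowns in rounding error, hence the guard):

```python
import numpy as np
from math import factorial, log

n, K = 8, 50
H_max = log(factorial(n))
dist = {tuple(range(n)): 1.0}
log_dev = []
for k in range(1, K + 1):
    dist = step_distribution(dist, n)
    dev = H_max - entropy(dist)
    if dev <= 0.0:        # deviation swallowed by rounding; stop here
        break
    log_dev.append(log(dev))

# linear fit of log(H_max - H_k) against k over the last 30 points
tail = np.array(log_dev[-30:])
ks = np.arange(len(log_dev) - len(tail), len(log_dev)) + 1
slope, intercept = np.polyfit(ks, tail, 1)
print(slope)              # about -0.57 for n = 8, matching the plot
```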
