In his paper "A Probabilistic Algorithm for $k$-SAT and Constraint Satisfaction Problems", Schöning gives a randomized algorithm for $k$-SAT. The analysis conditions on the Hamming distance between a fixed satisfying assignment $a^{*}$ and the initial guessed assignment $a$; call it $j$. In every step we flip the value of a variable occurring in a violated clause. Since a violated clause must contain at least one variable on which $a$ and $a^{*}$ disagree, the distance decreases by 1 with probability at least $\frac{1}{k}$, and increases by 1 with probability at most $1-\frac{1}{k}$. It is then suggested to think of the process as a Markov chain with states $0,1,\ldots,n$ indicating the Hamming distance.
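To make the setup concrete, here is a minimal sketch of the algorithm as I understand it (the clause encoding as signed integers and the fixed step budget are my own choices for illustration, not taken from the paper):

```python
import random

def schoening_walk(clauses, n, max_steps, rng=random):
    """One run of the random walk: start from a uniformly random assignment,
    then repeatedly pick a violated clause and flip one of its variables
    uniformly at random. clauses is a list of clauses, each a list of signed
    ints: literal +i means variable i is True, -i means it is False."""
    a = [rng.random() < 0.5 for _ in range(n + 1)]  # a[1..n]; a[0] unused
    for _ in range(max_steps):
        violated = [c for c in clauses
                    if not any((lit > 0) == a[abs(lit)] for lit in c)]
        if not violated:
            return a  # satisfying assignment found
        lit = rng.choice(rng.choice(violated))  # random literal of a random violated clause
        a[abs(lit)] = not a[abs(lit)]           # flip that variable
    return None  # give up; the full algorithm would restart here

# toy 3-SAT instance: (x1 ∨ x2 ∨ x3) ∧ (¬x1 ∨ x2 ∨ ¬x3)
sat = schoening_walk([[1, 2, 3], [-1, 2, -3]], n=3, max_steps=100)
print(sat is not None)
```

(The actual algorithm runs the walk for a bounded number of steps and restarts with a fresh random assignment; the single run above is just to illustrate the process whose distance to $a^{*}$ the analysis tracks.)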
My first question is: why is it a Markov chain? It seems that the transition probabilities do not depend only on the current distance (they also depend on the current assignment), even though they are surely bounded by $\frac{1}{k}$ and $1-\frac{1}{k}$.
Next, the probability of success (reaching state $0$ from state $j$) is estimated by summing over walks with $i$ steps in the "wrong" direction (flips which increase the Hamming distance) and $i+j$ "right" steps (so the distance reaches $0$ after $2i+j$ steps). This gives $$ q_j \geq \sum_{i=0}^j \binom{j+2i}{i}\cdot\frac{j}{j+2i}\cdot\left(1-\frac{1}{k}\right)^i\cdot\left(\frac{1}{k}\right)^{i+j},$$ where the first two factors count all such walks in the Markov chain, and the last two are the transition probabilities.
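For reference, the right-hand side is easy to evaluate numerically (this helper is my own, valid for $j \ge 1$ so that the factor $\frac{j}{j+2i}$ is well defined at $i=0$):

```python
from math import comb

def q_lower_bound(j, k):
    """Evaluate the lower bound on q_j: sum over walks with i wrong steps
    and i + j right steps, for i = 0..j."""
    p = 1.0 / k  # lower bound on the probability of a "right" step
    return sum(comb(j + 2 * i, i) * (j / (j + 2 * i))
               * (1 - p) ** i * p ** (i + j)
               for i in range(j + 1))

print(q_lower_bound(3, 3))  # j = 3, k = 3 → roughly 0.0896
```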
My second question is about these probabilities: why can we simply substitute the bounds $\frac{1}{k}$ and $1-\frac{1}{k}$? For example, if the probability $p$ of a "right" move (a correct flip) happens to be very close to $1$, then $(1-p)^i\cdot p^{i+j}$ is actually smaller than the substituted product. I guess that in this case the success probability is high anyway, but I would be glad to see a formal proof of this bound.
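This is of course not a proof, but a quick Monte Carlo experiment on the abstracted distance chain supports the intuition that a larger right-step probability only helps. The boundary behaviour (reflecting at $n$, absorbing at $0$), the step budget, and the parameters are all modelling assumptions of mine, not from the paper:

```python
import random

def hit_zero_prob(j, p, n=20, max_steps=200, trials=20000, seed=0):
    """Estimate the probability that a walk on {0,...,n} starting at j,
    stepping down with probability p (up otherwise, reflecting at n),
    reaches 0 within max_steps."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        d = j
        for _ in range(max_steps):
            d = d - 1 if rng.random() < p else min(n, d + 1)
            if d == 0:
                hits += 1
                break
    return hits / trials

for p in (1 / 3, 1 / 2, 0.9):
    print(p, hit_zero_prob(j=3, p=p))
```

With $p=\frac{1}{3}$ (the $k=3$ bound) the estimate comes out near the classical hitting probability $\left(\frac{p}{1-p}\right)^j = \frac{1}{8}$, and it grows towards $1$ as $p$ increases, which is consistent with my guess, but the question is how to turn this into an argument that the substitution in the sum is valid.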