Background
Consider the set of integers $\{1,\dots,n+1\}$ and a set of probabilities $p_1,\dots,p_n \in (0,1)$. We now define a random walk/Markov chain on these states via the following transition matrix $$ M = \begin{pmatrix} 1-p_1 & p_1 & 0 & \cdots & & 0 \\ 1-p_2 & 0 & p_2 & 0 & & \vdots \\ 0 & 1-p_3 & 0 & p_3 & & \\ \vdots & & \ddots & \ddots & \ddots & \\ 0 & & & 1-p_n & 0 & p_n \\ 0 & \cdots & & & 0 & 1 \end{pmatrix}. $$ This represents a process where state $k$ is increased by $1$ with probability $p_k$ and decreased by $1$ with probability $1-p_k$. The boundary conditions are that state $1$ stays the same with probability $1-p_1$ and state $n+1$ is absorbing. I am interested in the expected stopping time given that the process starts at state $1$: $E(\tau \mid X_0 = 1)$, where $\tau = \min\{k : X_k = n+1\}$. I am not interested in explicitly computing this value, but it may be helpful.
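Although the explicit value is not the goal, it is worth recording that $E(\tau \mid X_0 = 1)$ can be computed in $O(n)$ by first-step analysis: if $t_k$ denotes the expected time to first move from state $k$ to $k+1$, then $t_1 = 1/p_1$, $t_k = \bigl(1 + (1-p_k)t_{k-1}\bigr)/p_k$ for $k \geq 2$, and $E(\tau \mid X_0 = 1) = \sum_{k=1}^n t_k$. A minimal Python sketch (the function name `expected_time` is just for illustration):

```python
def expected_time(ps):
    """E(tau | X_0 = 1) for forward probabilities ps = [p_1, ..., p_n].

    First-step analysis: with t_k the expected time to step from k to k+1,
    t_1 = 1/p_1, t_k = (1 + (1 - p_k) * t_{k-1}) / p_k, and
    E(tau | X_0 = 1) = t_1 + ... + t_n.
    """
    t, total = 0.0, 0.0
    for p in ps:
        t = (1.0 + (1.0 - p) * t) / p
        total += t
    return total
```

For example, `expected_time([0.5, 0.9, 0.3])` returns the expected absorption time for $n = 3$ with that particular ordering.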
Problem
Now consider the case where the $n$ forward transition probabilities are permuted. That is, given $\sigma \in S_n$, the transition matrix is now $$ M(\sigma) = \begin{pmatrix} 1-p_{\sigma(1)} & p_{\sigma(1)} & 0 & \cdots & & 0 \\ 1-p_{\sigma(2)} & 0 & p_{\sigma(2)} & 0 & & \vdots \\ 0 & 1-p_{\sigma(3)} & 0 & p_{\sigma(3)} & & \\ \vdots & & \ddots & \ddots & \ddots & \\ 0 & & & 1-p_{\sigma(n)} & 0 & p_{\sigma(n)} \\ 0 & \cdots & & & 0 & 1 \end{pmatrix}. $$ For a given permutation, denote the expected stopping time by $T(\sigma)$. My question is: is there a good way to find a permutation that minimizes $T$? That is, find $\sigma^\star = \operatorname{argmin}_{\sigma \in S_n} T(\sigma)$. Dynamic programming and combinatorial optimization routines are likely viable on this problem for small enough $n$, but I would prefer a proof if possible.
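For small $n$, exhaustive search over $S_n$ gives a baseline to test conjectures against. A sketch reusing the `expected_time` recursion above (feasible up to roughly $n = 10$; the example probabilities are chosen arbitrarily):

```python
from itertools import permutations

def expected_time(ps):
    # Expected absorption time from state 1 via the first-step recursion.
    t = total = 0.0
    for p in ps:
        t = (1.0 + (1.0 - p) * t) / p
        total += t
    return total

def best_arrangement(ps):
    # Exhaustive search over all n! orderings of the forward probabilities.
    return min(permutations(ps), key=expected_time)

ps = [0.2, 0.5, 0.8, 0.35, 0.65]
best = best_arrangement(ps)
print(best, expected_time(best))
```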
Some Thoughts
Most of my knowledge on computing expected stopping times of random walks deals with IID state transitions, so it is of little use here. It would not surprise me if some sort of greedy arrangement were optimal, but I am struggling to reason about this process without resorting to counting possible walks on the state space, which gets into messy combinatorics pretty quickly. I have been able to show the crude lower bound $$ n + \frac{1-p_{\sigma(1)}}{p_{\sigma(1)}^2}\prod_{k=1}^n p_k < T(\sigma), $$ which suggests that maximizing $p_{\sigma(1)}$ may be important, but this is far from sharp.
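For what it's worth, the bound is easy to check numerically against the recursion above; a sketch on random instances:

```python
import math
import random

def expected_time(ps):
    t = total = 0.0
    for p in ps:
        t = (1.0 + (1.0 - p) * t) / p
        total += t
    return total

def lower_bound(ps):
    # n + (1 - p_sigma(1)) / p_sigma(1)^2 * prod_k p_k, with ps[0] = p_sigma(1).
    return len(ps) + (1.0 - ps[0]) / ps[0] ** 2 * math.prod(ps)

random.seed(0)
for _ in range(10_000):
    ps = [random.uniform(0.05, 0.95) for _ in range(6)]
    assert lower_bound(ps) < expected_time(ps)
```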
I would hope that something from the theory of stochastic processes can simplify the analysis of this expectation. Thanks for reading, and I look forward to your responses.
Edit
When investigating possible paths and computing $P(\tau=n+m)$ for small $m$, I have come across a few helpful observations.
In terms of sheer number of possible paths, state $1$ is the most common state. This is not unexpected, since every path starts there and it is the only state that can be occupied on consecutive steps. Maximizing $p_{\sigma(1)}$ would therefore likely minimize the time spent stuck at this state on long paths, lowering the overall expectation.
Given a state $2 \leq k \leq n$, the probability of decreasing and then returning within some fixed number of steps is proportional to $p_{\sigma(k-1)}(1-p_{\sigma(k)})$. Considering just these two transition probabilities, this quantity is maximized when $p_{\sigma(k-1)} > p_{\sigma(k)}$: for $a > b$ one has $a(1-b) - b(1-a) = a - b > 0$. Extending this logic to every interior state may indicate that a nonincreasing arrangement is optimal.
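This conjecture is easy to stress-test against exhaustive search for small $n$; a sketch along the lines of the snippets above:

```python
import random
from itertools import permutations

def expected_time(ps):
    t = total = 0.0
    for p in ps:
        t = (1.0 + (1.0 - p) * t) / p
        total += t
    return total

random.seed(1)
for _ in range(200):
    ps = [random.uniform(0.05, 0.95) for _ in range(6)]
    greedy = tuple(sorted(ps, reverse=True))  # nonincreasing arrangement
    best = min(permutations(ps), key=expected_time)
    if expected_time(greedy) > expected_time(best) + 1e-9:
        print("counterexample:", ps)
        break
```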
Final Edit
As the answers indicate, it is unlikely that any general theoretical treatment of this problem will yield an optimal solution, but the reduction to a mixed-integer linear programming (MILP) problem is a considerable development that renders the problem far more tractable than the combinatorial framing initially indicated. I have awarded the bounty to Amir's answer, which contains the details of the MILP formulation and some example solutions exhibiting nonintuitive results. Thanks for all of the discussion and feedback on this problem.
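For anyone who wants to experiment with the MILP route, here is one possible formulation, sketched with PuLP; it is my own reconstruction and not necessarily the formulation from the answer. Binary assignment variables $x_{kj}$ place $p_j$ at position $k$, and the bilinear products $x_{kj}E_{k\pm 1}$ in the first-step equations are linearized with big-$M$ constraints; the big-$M$ comes from the chain with every probability replaced by $\min_j p_j$, which can only be slower.

```python
import pulp

def expected_time(ps):
    # Expected absorption time from state 1 via the first-step recursion.
    t = total = 0.0
    for p in ps:
        t = (1.0 + (1.0 - p) * t) / p
        total += t
    return total

def milp_arrangement(ps):
    n = len(ps)
    # Big-M: lowering any p_k only slows the chain, so the chain with all
    # probabilities equal to min(ps) upper-bounds every E_k.  This bound
    # blows up quickly, so the sketch is only a proof of concept.
    M = expected_time([min(ps)] * n)

    prob = pulp.LpProblem("min_expected_absorption_time", pulp.LpMinimize)
    x = pulp.LpVariable.dicts("x", (range(n), range(n)), cat="Binary")
    # E[k] = expected absorption time from state k+1 (0-indexed); E[n] = 0.
    E = pulp.LpVariable.dicts("E", range(n + 1), lowBound=0, upBound=M)
    # u[k][j] represents x[k][j]*E[k+1]; d[k][j] represents x[k][j]*E[k-1].
    u = pulp.LpVariable.dicts("u", (range(n), range(n)), lowBound=0, upBound=M)
    d = pulp.LpVariable.dicts("d", (range(n), range(n)), lowBound=0, upBound=M)

    prob += E[0]  # objective: expected time starting from state 1
    prob += E[n] == 0
    for k in range(n):
        prob += pulp.lpSum(x[k][j] for j in range(n)) == 1  # one p_j per slot
    for j in range(n):
        prob += pulp.lpSum(x[k][j] for k in range(n)) == 1  # each p_j used once

    for k in range(n):
        up, down = E[k + 1], (E[0] if k == 0 else E[k - 1])  # E_0 := E_1 (reflecting)
        for j in range(n):
            # Big-M constraints forcing u = x*up and d = x*down for binary x.
            prob += u[k][j] <= M * x[k][j]
            prob += u[k][j] <= up
            prob += u[k][j] >= up - M * (1 - x[k][j])
            prob += d[k][j] <= M * x[k][j]
            prob += d[k][j] <= down
            prob += d[k][j] >= down - M * (1 - x[k][j])
        # First-step equation: E_k = 1 + p_sigma(k) E_{k+1} + (1 - p_sigma(k)) E_{k-1}.
        prob += E[k] == 1 + pulp.lpSum(
            ps[j] * u[k][j] + (1 - ps[j]) * d[k][j] for j in range(n)
        )

    prob.solve(pulp.PULP_CBC_CMD(msg=False))
    order = [next(j for j in range(n) if pulp.value(x[k][j]) > 0.5) for k in range(n)]
    return [ps[j] for j in order], pulp.value(E[0])

print(milp_arrangement([0.2, 0.5, 0.8, 0.35, 0.65]))
```

On small instances the output can be checked against brute force; the weak big-$M$ relaxation is the main obstacle to scaling this sketch.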