The answer seems to be: the problem is not particularly hard to solve, but there is no single standard solution. Though there are many ways to do this, I feel that @Royi's answer is the most "correct", since it reduces the problem to a simple linear programming problem, as opposed to the more general IRLS solution, which can be used to solve much harder problems. However, @Royi's answer is short and can be a bit confusing, so I want to elaborate a bit.
Firstly, why is
$$
\arg \min_x \|Ax-b\|_1
$$
equivalent to
$$
\begin{align*}
\arg \min_{x, t} \quad & \mathbf{1}^{T} t \\
\text{subject to} \quad &-t \preceq A x - b \preceq t
\end{align*}
$$
?
Because $\|Ax-b\|_1$ is the sum of the absolute values of the vector $r=Ax-b$, and, at the optimal solution, $t_{opt}=|r|$ (where I use $|\cdot|$ to denote the element-wise absolute value). It requires a little thought, but you can convince yourself that $-t \preceq r \preceq t$ is equivalent to $|r| \preceq t$. So each element of $t$ must be greater than or equal to the absolute value of the corresponding element of $r$; but, since we are minimizing the sum of the elements of $t$, each element takes the smallest value allowed, so at the solution $t_{opt}$ is not greater than but equal to $|r|$.
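As a quick scalar illustration: if $r = -3$, the constraint $-t \preceq r \preceq t$ reads $-t \le -3 \le t$, which forces $t \ge 3$; minimizing $t$ then gives exactly $t_{opt} = 3 = |r|$.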
Secondly, how do we convert this into a standard linear programming problem, where we are optimizing for $x$?
In the formulation above, the cost function $\mathbf{1}^Tt$ is minimized over both $x$ and $t$, even though it only contains $t$. For a practical implementation, we stack $x$ and $t$ into a single vector $q=[t^T\quad x^T]^T$, which becomes the optimization variable. We can then rewrite the objective function as
$$
\arg\min_q \left[\begin{array}{c}\mathbf{1}\\ \mathbf{0}\end{array}\right]^Tq.
$$
We also need to rewrite the constraints in terms of $q$. Treating the lower and upper bounds separately, we have $-t\preceq Ax-b \implies -t -Ax\preceq -b\implies [-I\quad -A]q\preceq -b$. Similarly, we have $Ax-b\preceq t\implies -t+Ax\preceq b\implies [-I\quad A]q \preceq b$. These can be combined into a single inequality
$$
\left[\begin{array}{cc}-I & -A \\ -I & A\end{array}\right]q\preceq \left[\begin{array}{c}-b \\ b \end{array}\right]
$$
This form of the objective function and constraints in terms of $q$ can be plugged into a linear programming solver, and the optimal $x$ can be extracted as the last $m$ elements of the optimal $q$ (where $m$ is the number of columns of $A$).
The following Python code does this using scipy.optimize.linprog:
import numpy as np
import scipy.optimize as opt

def linear_minimum_mean_absolute_error(A, b):
    n = len(b)      # number of residuals (rows of A)
    m = A.shape[1]  # number of unknowns in x
    # Objective: minimize 1^T t, where q = [t^T  x^T]^T
    c = np.zeros(n + m)
    c[0:n] = 1.0
    # Stacked constraints [[-I, -A], [-I, A]] q <= [-b; b]
    B = np.concatenate((np.concatenate((-np.eye(n), -A), 1),
                        np.concatenate((-np.eye(n), A), 1)), 0)
    w = np.concatenate((-b, b))
    # linprog defaults to non-negative variables; t >= 0 is harmless,
    # but x must be unbounded, so give explicit bounds.
    bounds = [(0, None)] * n + [(None, None)] * m
    res = opt.linprog(c, A_ub=B, b_ub=w, bounds=bounds)
    return res.x[n:]  # the optimal x is the last m elements of q
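As a quick sanity check (a minimal usage sketch, with randomly generated data assumed purely for illustration), the $\ell_1$ residual of this solution should never exceed that of the ordinary least-squares solution:

# Hypothetical usage: compare against least squares on random data.
rng = np.random.default_rng(0)
A = rng.standard_normal((100, 5))
b = rng.standard_normal(100)
x_l1 = linear_minimum_mean_absolute_error(A, b)
x_l2 = np.linalg.lstsq(A, b, rcond=None)[0]
print(np.abs(A @ x_l1 - b).sum())  # should be <= the line below
print(np.abs(A @ x_l2 - b).sum())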