I'm trying to derive the ADMM updates for the $\ell_1$ penalized Huber loss:
$$ \arg\min_x \phi_h \left(y - Ax\right) + \gamma\lVert x \rVert_1 $$
where
$$ \phi_h \left( u \right) = \begin{cases} \frac{1}{2}u^2, & \text{if } \lvert u \rvert \leq 1 \\ \lvert u \rvert - \frac{1}{2}, & \text{otherwise} \end{cases} $$
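For concreteness, here is a direct NumPy implementation of this penalty, applied elementwise (the function name is my own):

```python
import numpy as np

def huber(u):
    """Elementwise Huber penalty phi_h with threshold 1:
    quadratic for |u| <= 1, linear with slope 1 beyond."""
    u = np.asarray(u, dtype=float)
    return np.where(np.abs(u) <= 1.0, 0.5 * u**2, np.abs(u) - 0.5)
```

Note the two branches agree at $\lvert u \rvert = 1$ (both give $1/2$), so the penalty is continuously differentiable.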
So far I know that I need the proximal operators of both $ \phi_h $ and $ \lVert \cdot \rVert_1 $, and that the ADMM steps are:
$$ x^{k+1} = \arg \min_x \left(\phi_h\left(y-Ax\right) + \frac{\rho}{2}\lVert x -z^{k} + u^{k} \rVert_2^2 \right) $$
$$ z^{k+1} = S_{\gamma/\rho}\left(x^{k+1} + u^{k} \right) $$
$$ u^{k+1} = u^{k} + x^{k+1} - z^{k+1}$$
where
$$ S_{\lambda}\left( y \right) = \operatorname{sign}\left(y\right)\max \left(\lvert y \rvert - \lambda, 0 \right) $$
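As a sanity check, here is the elementwise soft-thresholding operator in NumPy; note that it shrinks both positive and negative entries toward zero:

```python
import numpy as np

def soft_threshold(y, lam):
    """Elementwise soft-thresholding S_lam(y) = sign(y) * max(|y| - lam, 0)."""
    y = np.asarray(y, dtype=float)
    return np.sign(y) * np.maximum(np.abs(y) - lam, 0.0)
```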
This setup is from section 6.1 of Boyd et al., *Distributed Optimization and Statistical Learning via the Alternating Direction Method of Multipliers*.
I'm having difficulty finding the $x^{k+1}$ step. Boyd (section 6.1.1) suggests that it will be:
$$ \frac{\rho}{1+\rho}\left(Ax - y + u^k\right) + \frac{1}{1+\rho}S_{1+1/\rho}\left( Ax - y + u^k \right) $$
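As a sketch of what that combination computes (variable names are mine), here it is applied elementwise to a scalar $v$, checked by brute force against the minimizer of $\phi_h(z) + \frac{\rho}{2}(z - v)^2$, i.e. the prox of $\phi_h$ with parameter $1/\rho$:

```python
import numpy as np

def soft_threshold(y, lam):
    return np.sign(y) * np.maximum(np.abs(y) - lam, 0.0)

def huber(u):
    return np.where(np.abs(u) <= 1.0, 0.5 * u**2, np.abs(u) - 0.5)

def huber_update(v, rho):
    """Boyd sec. 6.1.1 combination:
    rho/(1+rho) * v + 1/(1+rho) * S_{1+1/rho}(v)."""
    return rho / (1 + rho) * v + 1 / (1 + rho) * soft_threshold(v, 1 + 1 / rho)

# Brute-force check: huber_update(v, rho) should minimize huber(z) + rho/2 (z - v)^2.
rho, v = 2.0, 3.0
z = np.linspace(-10.0, 10.0, 200001)
z_star = z[np.argmin(huber(z) + rho / 2 * (z - v)**2)]
```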
But the answers to *Proximal Operator of the Huber Function* suggest that the $j$-th component of the prox operator will be:
$$ v_j = \frac{y_j-a_j x_j}{\max\left(\lvert y_j-a_j x_j \rvert, 2 \right)} $$
Any help finding this would be hugely appreciated.