In general, a computational LWE problem (and hence the decisional one) for sample matrix $A$, sample vector $\mathbf b$ satisfying $\mathbf b^T\equiv\mathbf s^T A+\mathbf e^T\pmod q$, modulus $q$, small error vector $\mathbf e$ and small secret $\mathbf s$ can be solved via the following close vector problem: let $\mathbf L$ be the lattice generated by the rows of the matrix
$$\begin{bmatrix}A&I\\qI&0\end{bmatrix},$$
find a vector close to $[\mathbf b^T,0]$. Since $\mathbf b^T-\mathbf e^T\equiv\mathbf s^T A\pmod q$, the point $[\mathbf b^T,0]+[-\mathbf e^T,\mathbf s^T]=[\mathbf b^T-\mathbf e^T,\mathbf s^T]$ lies in $\mathbf L$; that is, the target is displaced from the lattice by $[-\mathbf e^T,\mathbf s^T]$. Because $\mathbf e$ and $\mathbf s$ are drawn from narrow distributions, this displacement is significantly shorter than we would expect between a random target and a random lattice of the same volume, which can make the close vector problem computationally feasible.
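A minimal numpy sketch of this construction (toy parameters chosen purely for illustration) builds the basis, forms the target, and checks that the claimed nearby point really is a lattice vector:

```python
import numpy as np

rng = np.random.default_rng(0)
q, n, m = 97, 2, 3               # toy modulus, secret dimension, sample count

A = rng.integers(0, q, size=(n, m))
s = rng.integers(-2, 3, size=n)  # small secret
e = rng.integers(-2, 3, size=m)  # small error
b = (s @ A + e) % q              # LWE relation: b^T = s^T A + e^T (mod q)

# Rows of the embedding lattice basis [[A, I], [qI, 0]]
B = np.block([
    [A,                        np.eye(n, dtype=int)],
    [q * np.eye(m, dtype=int), np.zeros((m, n), dtype=int)],
])

target = np.concatenate([b, np.zeros(n, dtype=int)])
v = target + np.concatenate([-e, s])   # = [b - e, s], the claimed lattice point

# Recover the integer coordinates of v in basis B: the top rows get
# coefficients s, and the qI rows absorb the modular reduction.
w = (b - e - s @ A) // q               # exact division: b - e ≡ s A (mod q)
coeffs = np.concatenate([s, w])
print(np.array_equal(coeffs @ B, v))   # True: v lies in the lattice
```

The check confirms that recovering $\mathbf s$ and $\mathbf e$ is exactly the problem of finding this lattice point near the target.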
If the distributions of $\mathbf e$ and $\mathbf s$ have different variances, we can weight the columns of our lattice basis so that the displacement has balanced entries. For example, if the variance of the entries of $\mathbf s$ is half the variance of those of $\mathbf e$, we expect the absolute values of the coefficients of $\mathbf e$ to be roughly $\sqrt 2$ times those of $\mathbf s$. Accordingly we can, say, consider instead the lattice generated by the rows of
$$\begin{bmatrix}A/\sqrt[4]{2}&\sqrt[4]{2}\,I\\ qI/\sqrt[4]{2}&0\end{bmatrix},$$
and attempt to find a vector close to $[\mathbf b^T/\sqrt[4]{2},0]$, looking for a vector whose displacement is $[-\mathbf e^T/\sqrt[4]{2},\sqrt[4]{2}\,\mathbf s^T]$. If $A$ is a square matrix, this rescaled lattice has the same determinant as before (the column scalings cancel), but the largest coordinates of the displacement have shrunk by a factor of $\sqrt[4]{2}$, so its expected length is smaller, making the problem more computationally feasible.
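The two claims in the square case, that the determinant is unchanged and that the rebalanced displacement is shorter, can be checked numerically. A small sketch with illustrative values (the particular $\mathbf e$ and $\mathbf s$ below are chosen so that the entries of $\mathbf e$ are $\sqrt2$ times those of $\mathbf s$, matching the variance assumption in the text):

```python
import numpy as np

q, n = 97, 3                     # square case: n samples, n secret coordinates
c = 2 ** 0.25                    # fourth root of 2

rng = np.random.default_rng(1)
A = rng.integers(0, q, size=(n, n)).astype(float)
B = np.block([[A,             np.eye(n)],
              [q * np.eye(n), np.zeros((n, n))]])

# Scale the first n coordinates down by c and the last n up by c.
D = np.diag([1 / c] * n + [c] * n)
B_scaled = B @ D

# Column scaling multiplies det by (1/c)^n * c^n = 1: determinant unchanged.
print(np.isclose(np.linalg.det(B), np.linalg.det(B_scaled)))   # True

# Entries of e are sqrt(2) larger than those of s; rescaling balances
# them, shortening the displacement overall.
e = np.array([3.0, -3.0, 3.0])   # illustrative, Var(e) = 2 Var(s)
s = np.array([2.0, -2.0, 2.0])
d = np.concatenate([-e, s])
d_scaled = d @ D                 # = [-e/c, c*s]
print(np.linalg.norm(d_scaled) < np.linalg.norm(d))            # True
```

For these values the squared length drops from $39$ to $27/\sqrt2+12\sqrt2\approx36.1$: the gain comes from the error block shrinking more than the secret block grows.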
There is a range of methods for solving close vector problems, involving basis reduction combined with enumeration or sieving (possibly after a Kannan embedding). The exact computational effect of parameter changes is complex and best investigated with automated tooling such as the lattice estimator of Albrecht et al.
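The cheapest baseline among these methods is Babai's nearest-plane algorithm, which rounds the target against the Gram-Schmidt vectors of a (reduced) basis. A self-contained sketch on a toy, already nearly orthogonal basis — in practice one would first run LLL/BKZ reduction, and this toy basis and target are assumptions for illustration only:

```python
import numpy as np

def gram_schmidt(B):
    """Orthogonalize the rows of B (no normalization)."""
    Bs = B.astype(float)
    for i in range(len(B)):
        for j in range(i):
            Bs[i] -= (B[i] @ Bs[j]) / (Bs[j] @ Bs[j]) * Bs[j]
    return Bs

def babai_nearest_plane(B, t):
    """Return a lattice point of the row lattice of B near target t.

    Succeeds when the displacement is small relative to the
    Gram-Schmidt norms, i.e. when B is sufficiently reduced."""
    Bs = gram_schmidt(B)
    b = np.asarray(t, dtype=float).copy()
    v = np.zeros_like(b)
    for i in range(len(B) - 1, -1, -1):
        c = round((b @ Bs[i]) / (Bs[i] @ Bs[i]))
        b -= c * B[i]
        v += c * B[i]
    return v

# Toy nearly orthogonal basis; target is a lattice point plus small noise.
B = np.array([[7.0, 1.0], [1.0, 8.0]])
t = 3 * B[0] - 2 * B[1] + np.array([0.3, -0.4])
print(babai_nearest_plane(B, t))   # [ 19. -13.] = 3*B[0] - 2*B[1]
```

On the unreduced LWE basis above, this rounding would generally fail; the point of basis reduction is precisely to shrink the Gram-Schmidt norms' imbalance until the LWE displacement falls inside Babai's success region.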