There are different ways to define what a convex optimization problem is, but most of the time it means one of the following two things:
- The problem has a convex objective function (as linear least-squares problems do) and only linear constraints (equality and/or inequality). This definition is numerically oriented.
- The problem has a convex objective function, linear equality constraints (if any), and inequality constraints that can be written in the form $c_i(x) \leq 0$, where each constraint function $c_i$ is convex. This definition is more general and ensures that the feasible set is a convex set, which in turn ensures that any local minimizer is also a global minimizer.
With either definition, determining whether a problem is convex comes down to checking the convexity of the objective and of each constraint function.
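If you want a mechanical check, modeling tools such as CVXPY apply disciplined convex programming (DCP) rules and will tell you whether a problem, as written, is verifiably convex. Here is a minimal sketch using the norm-constrained nonnegative least-squares problem discussed further down; the data `A`, `b`, `Delta` are made-up placeholders.

```python
import numpy as np
import cvxpy as cp

# Made-up placeholder data; any A, b and radius Delta work for the convexity check.
rng = np.random.default_rng(0)
m, n, Delta = 20, 5, 1.0
A, b = rng.standard_normal((m, n)), rng.standard_normal(m)

x = cp.Variable(n)
objective = cp.Minimize(0.5 * cp.sum_squares(A @ x - b))    # convex quadratic objective
constraints = [0.5 * cp.sum_squares(x) <= 0.5 * Delta**2,   # convex inequality c(x) <= 0
               x >= 0]                                      # linear inequalities

prob = cp.Problem(objective, constraints)
print(prob.is_dcp())  # True: every piece passes the DCP convexity rules
```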
See my answer to your previous post for references to numerical methods for constrained linear least-squares problems. You'll very rarely find analytical solutions to a nonconvex problem. For convex problems (under a mild constraint qualification such as Slater's condition), the KKT conditions characterize any global minimizer. For instance, the KKT conditions of
$$
\min_x \ \tfrac{1}{2} \|Ax-b\|^2_2 \quad \text{subject to} \quad
\tfrac{1}{2} \|x\|^2 \leq \tfrac{1}{2} \Delta^2, \quad x \geq 0
$$
(where I introduced the factors $\tfrac{1}{2}$ so that they cancel when I differentiate) are:
$$
\begin{aligned}
& A^T (Ax-b) + \lambda x - z = 0, \\
& \|x\| \leq \Delta, \quad \lambda \, (\Delta - \|x\|) = 0, \\
& (x, \lambda, z) \geq 0, \quad x_i z_i = 0 \quad (i = 1, \ldots, n),
\end{aligned}
$$
where $\lambda$ and $z$ are Lagrange multipliers.
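If it helps, here is a small numerical sanity check of these conditions (a sketch, not part of the derivation): solve the problem with CVXPY, read off $\lambda$ and $z$ from the constraints' dual values, and confirm that the stationarity and complementarity residuals are near zero. The data are again made-up placeholders, and I'm relying on CVXPY's convention that dual values of inequality constraints are nonnegative.

```python
import numpy as np
import cvxpy as cp

# Made-up placeholder data.
rng = np.random.default_rng(1)
m, n, Delta = 30, 8, 1.0
A, b = rng.standard_normal((m, n)), rng.standard_normal(m)

x = cp.Variable(n)
norm_con = 0.5 * cp.sum_squares(x) <= 0.5 * Delta**2
nonneg_con = x >= 0
prob = cp.Problem(cp.Minimize(0.5 * cp.sum_squares(A @ x - b)), [norm_con, nonneg_con])
prob.solve()

xv = x.value
lam = norm_con.dual_value        # multiplier lambda >= 0 for the norm constraint
z = nonneg_con.dual_value        # multipliers z >= 0 for x >= 0

# Stationarity: A^T (A x - b) + lambda x - z should be numerically zero.
print("stationarity:", np.linalg.norm(A.T @ (A @ xv - b) + lam * xv - z))
# Complementarity: lambda (Delta - ||x||) = 0 and x_i z_i = 0.
print("complementarity:", lam * (Delta - np.linalg.norm(xv)), np.max(np.abs(xv * z)))
```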