Let the quadratic form $f \in \mathbb{R}[x_1, \dots, x_n]$ be defined by
$$ f(x_1, \dots, x_n) := \sum_{i=1}^{n}{x_i^2} + a \sum_{i=1}^{n} \sum_{j=i+1}^{n} {x_i x_j}$$
where $a>0$. I want to know for which values of $a$ we have $f(x_1, \dots, x_n)\ge 0$ for all $(x_1, \dots, x_n) \in \mathbb{R}^n$, and I would like a formal proof of it.
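For example, for $n = 2$ I can settle the question by completing the square:
$$ f(x_1, x_2) = x_1^2 + x_2^2 + a\,x_1 x_2 = \left(x_1 + \tfrac{a}{2}x_2\right)^2 + \left(1 - \tfrac{a^2}{4}\right)x_2^2, $$
which is nonnegative for every $(x_1, x_2)$ exactly when $a \le 2$, but I do not see how to extend this calculation to general $n$.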
I am thinking of writing $ f({\bf x}) = {\bf x}^\top {\bf A} \, {\bf x} $, where
$${ \bf A} = \begin{bmatrix} 1 & \frac{a}{2} & \frac{a}{2} & \cdots & \frac{a}{2} \\ \frac{a}{2} & 1 & \frac{a}{2} & \cdots & \frac{a}{2} \\ \frac{a}{2} & \frac{a}{2} & 1 & \cdots & \frac{a}{2} \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ \frac{a}{2} & \frac{a}{2} & \frac{a}{2} & \cdots & 1 \end{bmatrix} $$
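One way to see the spectrum of this matrix (the eigenvalue formulas I state next rely on it) is to write
$$ {\bf A} = \left(1 - \tfrac{a}{2}\right){\bf I} + \tfrac{a}{2}\,{\bf J}, $$
where ${\bf J}$ is the $n \times n$ all-ones matrix, whose eigenvalues are $n$ (with eigenvector $(1, \dots, 1)^\top$) and $0$ (with multiplicity $n-1$).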
I get that the eigenvalues are $\lambda_1 = 1 + \frac{a}{2}(n-1)$ and $\lambda_2 = \dots = \lambda_n = 1 - \frac{a}{2}$. I see that a negative eigenvalue appears only when $a > 2$, but I am not sure how this relates to the sign of the quadratic form. From some graphical and numerical tests (a sketch of one such test is included below), I believe that $f \ge 0$ whenever $a \le 2$, while for $a > 2$ the form can take negative values. Could anyone help me with a formal proof?
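For completeness, here is a minimal sketch of the kind of numerical test I mentioned (in Python with NumPy; the dimension $n = 5$, the values of $a$, and the sample size are just illustrative choices): it builds ${\bf A}$, computes its smallest eigenvalue, and evaluates $f$ at random points.

```python
import numpy as np

def build_A(n, a):
    """Matrix of the quadratic form: 1 on the diagonal, a/2 everywhere else."""
    return (1 - a / 2) * np.eye(n) + (a / 2) * np.ones((n, n))

def f(x, a):
    """Evaluate f(x) = sum_i x_i^2 + a * sum_{i<j} x_i x_j,
    using sum_{i<j} x_i x_j = ((sum_i x_i)^2 - sum_i x_i^2) / 2."""
    s, sq = np.sum(x), np.sum(x ** 2)
    return sq + a * (s ** 2 - sq) / 2

rng = np.random.default_rng(0)
n = 5
for a in (1.0, 1.9, 2.0, 2.1, 3.0):
    lam_min = np.linalg.eigvalsh(build_A(n, a)).min()
    f_min = min(f(rng.standard_normal(n), a) for _ in range(10_000))
    print(f"a = {a:3.1f}   min eigenvalue = {lam_min:+.3f}   min sampled f = {f_min:+.4f}")
```

In runs of this kind the smallest eigenvalue changes sign at $a = 2$, and negative sampled values of $f$ appear only once $a > 2$, which is what led me to the conjecture above.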