If you are trying to optimize a function $f$ subject to constraints $g_1=0,\ldots,g_k =0$, then the key point of the theory of Lagrange multipliers is that, at a (constrained) local extremum $x_0$, the gradient $\nabla(f)(x_0)$ will be normal to the level set cut out by the constraints. When there are multiple constraints (whose gradients $\nabla(g_1)(x_0),\ldots,\nabla(g_k)(x_0)$ are linearly independent), this means that $\nabla(f)(x_0)$ will be a linear combination of those gradients, that is, there will be $\vec{\lambda} \in \mathbb R^k$ such that
$$
\nabla(f)(x_0) = \sum_{i=1}^k \lambda_i \nabla(g_i)(x_0)
$$
This system, together with the constraints themselves, is exactly the condition that $(\vec{x},\vec{\lambda})$ be a critical point of the "Lagrangian"
$$
F(\vec{x},\vec{\lambda}) = f(\vec{x}) - \langle \vec{\lambda},\vec{g}(\vec{x})\rangle,
$$
where $\vec{g} = (g_1,\ldots,g_k)$. Thus the Lagrange multipliers form a vector in $\mathbb R^k$, where $k$ is the number of constraint functions (so in particular, while it is a vector, it does not live in the same space as the vector $\vec{x}$).
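As a concrete illustration (my own toy example, not from the discussion above): take $f(x,y) = x + y$ with the single constraint $g(x,y) = x^2 + y^2 - 1 = 0$, so $k = 1$ and $\vec{\lambda}$ has one component. A short SymPy sketch finds the critical points of the Lagrangian $F = f - \lambda g$ by setting all partial derivatives to zero:

```python
import sympy as sp

# Toy example: extremize f(x, y) = x + y on the unit circle
# g(x, y) = x^2 + y^2 - 1 = 0, so k = 1 here.
x, y, lam = sp.symbols('x y lambda', real=True)

f = x + y
g = x**2 + y**2 - 1

# Lagrangian F(x, lambda) = f(x) - <lambda, g(x)>
F = f - lam * g

# Critical points of F: all partials vanish. Note that
# dF/dlambda = 0 recovers the constraint g = 0 itself.
sols = sp.solve([sp.diff(F, v) for v in (x, y, lam)], [x, y, lam], dict=True)
for s in sols:
    print(s)
```

The solver returns the two constrained extrema $(\pm\tfrac{\sqrt2}{2}, \pm\tfrac{\sqrt2}{2})$, each paired with the multiplier value that makes $\nabla(f)$ a multiple of $\nabla(g)$ there.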