So I have the following optimization problem $$ \begin{aligned} \max_{x:=(x_1,\dots,x_n)^T\in\mathbb{R}^n} \sum_{i=1}^n x_i \\ \text{s.t. }\quad \text{ det} G(x) = 0 \end{aligned} $$
where $G(x) = I+\sum_{i=1}^n G_ix_i$ and the $G_i\in\mathbb{R}^{m\times m}$ are positive definite matrices.
This looks similar to some problems from "Convex Optimization" and "Linear Matrix Inequalities in System and Control Theory" by Stephen Boyd et al., such as the maxdet problem. However, I have not been able to adapt the solutions of those problems to mine, even though I don't even have inequality constraints. I built the Lagrangian $$ \mathcal{L} = \sum_{i=1}^n x_i+ \lambda\det G(x) $$ but when I take the gradient of $\mathcal{L}$ using Jacobi's formula (https://en.wikipedia.org/wiki/Jacobi%27s_formula), it seems that I need $G(x)$ to be invertible, which the constraint $\det G(x)=0$ forbids. I know that even if $G(x)^{-1}$ does not exist, the adjugate of $G(x)$ does. However, I don't see how this simplifies the problem, given the expression for $G(x)$. I expected to end up with some LMI condition or something like that.
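As a side note, the adjugate form of Jacobi's formula, $\frac{\partial}{\partial x_i}\det G(x) = \text{tr}(G(x)^* G_i)$, does not actually require $G(x)$ to be invertible. Here is a quick numerical sanity check of that (just a sketch; the sizes $m=3$, $n=2$ and the random positive definite $G_i$ are arbitrary choices for illustration):

```python
import numpy as np

def adjugate(A):
    """Adjugate via cofactors; well defined even when A is singular."""
    m = A.shape[0]
    adj = np.empty_like(A, dtype=float)
    for i in range(m):
        for j in range(m):
            minor = np.delete(np.delete(A, i, axis=0), j, axis=1)
            adj[j, i] = (-1) ** (i + j) * np.linalg.det(minor)
    return adj

rng = np.random.default_rng(0)
m, n = 3, 2
Gs = []
for _ in range(n):
    B = rng.standard_normal((m, m))
    Gs.append(B @ B.T + np.eye(m))      # random positive definite G_i

def G(x):
    return np.eye(m) + sum(xi * Gi for xi, Gi in zip(x, Gs))

# Central finite differences vs. tr(adj(G) G_i); no inverse of G is needed.
x = rng.standard_normal(n)
h = 1e-6
for i in range(n):
    e = np.zeros(n); e[i] = h
    fd = (np.linalg.det(G(x + e)) - np.linalg.det(G(x - e))) / (2 * h)
    exact = np.trace(adjugate(G(x)) @ Gs[i])
    assert abs(fd - exact) <= 1e-4 * max(1.0, abs(exact))
```

So the gradient of the Lagrangian is well defined on the constraint set; the question is what to do with it.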
Can you suggest any other strategy to tackle this problem?
EDIT. Working on this a little more: computing $\frac{\partial}{\partial x_i}\mathcal{L} = 0$ gives $$ \text{tr}(G(x)^*G_i) = -1/\lambda $$ where $G(x)^*$ is the adjugate of $G(x)$, or equivalently $$ \text{tr}(G(x)^*G_i) = \text{tr}(G(x)^*G_j), \ \ \forall i,j \in \{1,\dots,n\} $$
So I end up with $n-1$ equations of the form $\text{tr}(G(x)^*(G_i-G_j))=0$, and the last equation is just $\det G(x)=0$. Is that it? $n$ equations in $n$ variables, and plug that into the computer? This is somewhat unsatisfying. I am trying to find some expression for the solutions that might give more insight into what they look like.
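To see what this system can produce, here is a toy instance I can check by hand (a sketch: diagonal $G_i$ with $m=3$, $n=2$, so $\det G(x)$ factors into linear terms and the constraint set is a union of planes). With $G_1 = \operatorname{diag}(2,1,1)$ and $G_2 = \operatorname{diag}(1,3,1)$, whose third diagonal entries coincide, the trace equation reduces to $g_3(x)\,(g_2(x) - 2g_1(x)) = 0$ where $g_k(x)$ are the diagonal entries of $G(x)$, so the whole plane $1+x_1+x_2=0$ solves the system, and the objective is constant ($=-1$) on it:

```python
import numpy as np
from scipy.optimize import fsolve

# Toy instance: diagonal G_i, so det G(x) factors into linear terms.
G1 = np.diag([2.0, 1.0, 1.0])
G2 = np.diag([1.0, 3.0, 1.0])
Gs = [G1, G2]
m, n = 3, 2

def G(x):
    return np.eye(m) + sum(xi * Gi for xi, Gi in zip(x, Gs))

def adjugate(A):
    adj = np.empty_like(A, dtype=float)
    for i in range(m):
        for j in range(m):
            minor = np.delete(np.delete(A, i, axis=0), j, axis=1)
            adj[j, i] = (-1) ** (i + j) * np.linalg.det(minor)
    return adj

def system(x):
    """The n-1 trace equations plus det G(x) = 0."""
    A = adjugate(G(x))
    eqs = [np.trace(A @ (Gs[i] - Gs[i + 1])) for i in range(n - 1)]
    eqs.append(np.linalg.det(G(x)))
    return eqs

# A root finder lands somewhere on the plane 1 + x1 + x2 = 0.
x_root = fsolve(system, x0=[-0.6, -0.6])
```

So in this instance the first-order system has a whole continuum of solutions, which illustrates why the $n$-equations-in-$n$-variables view alone gives little structural insight.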
Do you see any way to improve this formulation?
EDIT 2. Someone suggested replacing the original problem with $$ \begin{aligned} \max_{x:=(x_1,\dots,x_n)^T\in\mathbb{R}^n} \sum_{i=1}^n x_i \\ \text{s.t.}\ \ G(x) \succeq 0 \end{aligned} $$
Apparently this is not necessarily equivalent to the original problem, but it is a classical LMI and is more tractable for me. Numerically, I find that solutions to this new problem are also solutions to the original one, though I guess some solutions might be missing. Would this be true in general? What do we lose by studying this problem instead?
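For what it's worth, the case $n=1$ can be worked out by hand and shows the two feasible sets relate in a one-sided way: $\det(I + x G_1) = \prod_i (1 + x\lambda_i)$ with $\lambda_i > 0$ the eigenvalues of $G_1$, so the original constraint set is the finite set $\{-1/\lambda_i\}$ with maximum $x^\star = -1/\lambda_{\max}$, while the LMI-feasible set is the half-line $[-1/\lambda_{\max}, \infty)$, of which $x^\star$ is the left endpoint. A numerical sketch (random positive definite $G_1$, size arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
m = 4
B = rng.standard_normal((m, m))
G1 = B @ B.T + np.eye(m)          # random positive definite G_1
lam = np.linalg.eigvalsh(G1)      # eigenvalues, ascending order

# Original problem, n = 1: det(I + x G1) = prod_i (1 + x lam_i) = 0
# has roots x = -1/lam_i (all negative); the maximum is x* = -1/lam_max.
x_star = -1.0 / lam[-1]
assert abs(np.linalg.det(np.eye(m) + x_star * G1)) < 1e-8

# LMI feasibility I + x G1 >= 0 holds iff x >= -1/lam_max, so x* is the
# left endpoint of the LMI-feasible half-line [-1/lam_max, +inf).
eps = 1e-6
assert np.linalg.eigvalsh(np.eye(m) + (x_star + eps) * G1)[0] >= -1e-9
assert np.linalg.eigvalsh(np.eye(m) + (x_star - eps) * G1)[0] < 0
```

So at least for $n=1$ the original optimum lies on the boundary of the LMI set, but it is not recovered by maximizing $x$ over that set; this is part of what I would like to understand for general $n$.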