Consider the problem \begin{equation} \min_x f(x)~~~{\rm s.t.}~~~ g_i(x)\leq 0,~~i=1,\dots,I, \end{equation}
where $x$ is the optimization variable, $f(x)$ is the objective function, $g_i(x)$ is the $i$th constraint function, and $I$ denotes the number of constraints.
Consider now a point $x^\star$ that satisfies the Karush-Kuhn-Tucker (KKT) conditions. In general non-convex optimization, a KKT point can be anything: a local minimum, a global minimum, a saddle point, or even a maximum.
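For concreteness, with multipliers $\lambda_i \geq 0$ the KKT conditions for this problem read (standard form, stated here for reference):

\begin{align}
\nabla f(x^\star) + \sum_{i=1}^{I} \lambda_i \nabla g_i(x^\star) &= 0, \\
g_i(x^\star) &\leq 0, \quad i=1,\dots,I, \\
\lambda_i &\geq 0, \quad i=1,\dots,I, \\
\lambda_i\, g_i(x^\star) &= 0, \quad i=1,\dots,I.
\end{align}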
Can we claim that $x^\star$ is at least a local minimum if we assume that $f(x)$ is strictly convex while the $g_i(x)$ may be non-convex? Assume all functions are at least twice continuously differentiable.
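To make the question concrete, here is a minimal numeric check of the KKT conditions for a toy instance I picked myself (not from the paper): the strictly convex objective $f(x) = x^2$ with the single non-convex constraint $g(x) = 1 - x^2 \leq 0$, whose feasible set $|x| \geq 1$ is non-convex. The point $x^\star = 1$ with multiplier $\lambda = 1$ satisfies all four KKT conditions:

```python
# Toy instance (my own example, for illustration only):
# minimize f(x) = x^2  subject to  g(x) = 1 - x^2 <= 0.

def f_grad(x):
    return 2.0 * x        # gradient of f(x) = x^2

def g(x):
    return 1.0 - x**2     # non-convex constraint, feasible set |x| >= 1

def g_grad(x):
    return -2.0 * x       # gradient of g

x_star, lam = 1.0, 1.0

# Stationarity: grad f(x*) + lam * grad g(x*) = 2 + 1*(-2) = 0.
assert abs(f_grad(x_star) + lam * g_grad(x_star)) < 1e-12
# Primal feasibility: g(x*) <= 0.
assert g(x_star) <= 1e-12
# Dual feasibility: lam >= 0.
assert lam >= 0.0
# Complementary slackness: lam * g(x*) = 0.
assert abs(lam * g(x_star)) < 1e-12

print("x* = 1 satisfies the KKT conditions")
```

In this particular instance $x^\star = 1$ happens to be a minimizer on the feasible set, but of course one hand-picked example proves nothing about the general claim, which is exactly what I am asking about.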
I have read this claim in Appendix C of this paper: http://kang.nt.e-technik.tu-darmstadt.de/nt/fileadmin/nas/Publications/2012/Cheng_TSP_2012_Posted.pdf, but I am not 100% sure it is true, and I could not find any literature on it.