Let’s say I have a function $f(p,q):R^{n+m}→R$, with $p∈R^n$ and $q∈R^m$. I have a set of data $(q_i, y_i)$, $i=1,\dots,k$, and I want to find $p$, so I use the Levenberg–Marquardt algorithm to solve the minimization problem: $${\min_{p}\sum _{i=1}^{k}[y_{i}-f(q_{i}|{\boldsymbol {p}})]^{2}\,}.$$
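For concreteness, here is a minimal sketch of this setup with SciPy's `least_squares` (the model $f$, its parameters, and the synthetic data are made-up assumptions for illustration; `method='lm'` selects Levenberg–Marquardt, and `res.jac` is the Jacobian of the residuals at the fitted $p$):

```python
import numpy as np
from scipy.optimize import least_squares

# Toy model (an assumption for illustration): f(q | p) = p0 * exp(p1 * q)
def f(q, p):
    return p[0] * np.exp(p[1] * q)

# Synthetic data: k observations (q_i, y_i) generated from known parameters
rng = np.random.default_rng(0)
p_true = np.array([2.0, -0.5])
q = np.linspace(0.0, 4.0, 20)
y = f(q, p_true) + 0.01 * rng.standard_normal(q.size)

# Residuals r_i(p) = y_i - f(q_i | p); LM minimizes their sum of squares
def residuals(p):
    return y - f(q, p)

res = least_squares(residuals, x0=np.ones(2), method='lm')
p_hat = res.x            # fitted parameters p
J = res.jac              # k-by-n Jacobian of the residuals at p_hat
H_approx = J.T @ J       # Gauss-Newton approximation of the Hessian (n-by-n)
```

Note that `J` here is $k \times n$ (one row per data point, one column per parameter), which is exactly why it is not square and cannot be inverted directly.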
At the end I get:
- My parameters $p$.
- The Jacobian of $f$.
- The Hessian of $f$.
What I’m searching for are the Jacobian and Hessians of the inverse map, i.e. how can I find:
$${\displaystyle J(f^{-1})={\begin{pmatrix}{\dfrac {\partial p_{1}}{\partial y_{1}}}&\cdots &{\dfrac {\partial p_{1}}{\partial y_{k}}}\\\vdots &\ddots &\vdots \\{\dfrac {\partial p_{n}}{\partial y_{1}}}&\cdots &{\dfrac {\partial p_{n}}{\partial y_{k}}}\end{pmatrix}},}$$
and
$${\forall i=1,\dots,n,\quad \displaystyle H_{i}(f^{-1})={\begin{pmatrix}{\dfrac {\partial ^{2}p_{i}}{\partial y_{1}\,\partial y_{1}}}&\cdots &{\dfrac {\partial ^{2}p_{i}}{\partial y_{1}\,\partial y_{k}}}\\\vdots &\ddots &\vdots \\{\dfrac {\partial ^{2}p_{i}}{\partial y_{k}\,\partial y_{1}}}&\cdots &{\dfrac {\partial ^{2}p_{i}}{\partial y_{k}\,\partial y_{k}}}\end{pmatrix}},}$$
directly from what was computed in the first place?
I’m not sure about the Hessian, but at least for the Jacobian, I know the inverse function theorem states:
$$J(f^{-1})=\left[J(f)\right]^{-1}.$$
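This identity can be checked numerically on any square, invertible map (the map $g$ and the evaluation point below are made-up assumptions; $J(g^{-1})$ is estimated by finite differences, inverting $g$ with Newton's method):

```python
import numpy as np

# Toy square map (an assumption): g(x) = (exp(x0), x0 + x1^2)
def g(x):
    return np.array([np.exp(x[0]), x[0] + x[1] ** 2])

def jacobian_g(x):
    return np.array([[np.exp(x[0]), 0.0],
                     [1.0,          2.0 * x[1]]])

def g_inverse(y, x0, iters=50):
    # Invert g with Newton's method, starting near the solution
    x = x0.copy()
    for _ in range(iters):
        x = x - np.linalg.solve(jacobian_g(x), g(x) - y)
    return x

x0 = np.array([0.5, 1.0])
y0 = g(x0)
Jg = jacobian_g(x0)

# Central finite-difference Jacobian of g^{-1} at y0, column by column
eps = 1e-6
J_inv_fd = np.empty((2, 2))
for j in range(2):
    dy = np.zeros(2)
    dy[j] = eps
    J_inv_fd[:, j] = (g_inverse(y0 + dy, x0) - g_inverse(y0 - dy, x0)) / (2 * eps)

# J_inv_fd agrees with np.linalg.inv(Jg) to finite-difference accuracy
```

The catch, of course, is that this only works when the Jacobian is square and invertible, which is not the case for the $k \times n$ Jacobian of a least-squares fit.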
But how can I “trick” this problem to get a square, invertible Jacobian?