4

The derivative of a function's inverse is well understood; it is explained in full detail here.

Say I have a function $\varphi_\varepsilon:\mathbb{R}\to\mathbb{R}$, where $\varepsilon$ is a parameter. I also know that, for each $\varepsilon$, the function is smooth, invertible, and has a smooth inverse. This leads to the question: do we have a representation for $\frac{\partial}{\partial\varepsilon}\varphi_\varepsilon^{-1}$ similar to the single-variable formula?

Here is a post that already addresses this question, but it is quite old and has no substantial answers. I'm also more interested in the general theory than in a concrete example.

  • If $\epsilon$ is a real variable, then $\phi$ is really a function $\mathbb R\times\mathbb R\to\mathbb R$. So you want to assume for each $\epsilon$ $\phi_\epsilon$ is smooth and invertible in which case the one dimensional theorem applies. I'm not sure what you mean by "representation" here. – Gregory Grant Nov 22 '15 at 00:34
  • In saying "a representation," I'm referring to something like the formula $$ (f^{-1})^\prime(x)=\frac{1}{f^\prime(f^{-1}(x))}. $$ Is it really as simple here as replacing $f$ with $\varphi_\varepsilon$ and making the derivatives in terms of $\varepsilon$? – AegisCruiser Nov 22 '15 at 00:40
  • Yes I think in that generality it's just as simple as parameterizing $\phi$ by $\epsilon$ in the statement of the 1-dimensional formula. – Gregory Grant Nov 22 '15 at 00:41

2 Answers

3

After some further investigation, it appears the situation is not quite as simple as the comments suggest.

Note that for a function $f$ of two variables $x$ and $y$, if $y$ is also a function of $x$, we have that $$ \frac{df}{dx}(x,y(x))=\frac{\partial f}{\partial x}(x,y(x))+\frac{\partial f}{\partial y}(x,y(x))\frac{dy}{dx}(x). $$ It is worth pointing out here that there is a significant difference between $\frac{df}{dx}$ and $\frac{\partial f}{\partial x}$. The former is the total derivative of $f$ when we make $y$ dependent upon $x$, whereas the latter is the partial derivative of $f$ with respect to its "first" input variable.
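The total-derivative formula above can be sanity-checked numerically. The following sketch uses hypothetical choices $f(x,y)=xy^2$ and $y(x)=\sin x$ (neither appears in the original post) and compares the formula against a central finite difference:

```python
import math

# Toy check of the total-derivative formula, with hypothetical choices
# f(x, y) = x * y**2 and y(x) = sin(x).
def f(x, y):
    return x * y * y

def y_of_x(x):
    return math.sin(x)

def total_derivative(x):
    """df/dx via the chain rule: f_x + f_y * dy/dx, evaluated at (x, y(x))."""
    y = y_of_x(x)
    df_dx = y * y          # partial of f w.r.t. its first argument
    df_dy = 2.0 * x * y    # partial of f w.r.t. its second argument
    dy_dx = math.cos(x)
    return df_dx + df_dy * dy_dx

x, h = 0.8, 1e-6
# Central finite difference of x -> f(x, y(x))
fd = (f(x + h, y_of_x(x + h)) - f(x - h, y_of_x(x - h))) / (2 * h)
print(total_derivative(x), fd)  # the two values should agree closely
```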

Now say that $s$ and $t$ are the variables for the domain and codomain of $\varphi_\varepsilon$, respectively. That is, $\varphi_\varepsilon(s)=t$. As our function is invertible, we may write $$ \varphi_\varepsilon(\varphi_\varepsilon^{-1}(t))=t. $$ Differentiating both sides of this equation with respect to $\varepsilon$ then yields something similar to the above statement with $f$: $$ \frac{\partial\varphi_\varepsilon}{\partial\varepsilon}(\varphi_\varepsilon^{-1}(t))+\frac{\partial\varphi_\varepsilon}{\partial s}(\varphi_\varepsilon^{-1}(t))\frac{\partial\varphi_\varepsilon^{-1}}{\partial\varepsilon}(t)=0. $$ We can then solve this for $\frac{\partial\varphi_\varepsilon^{-1}}{\partial\varepsilon}(t)$, which yields $$ \frac{\partial\varphi_\varepsilon^{-1}}{\partial\varepsilon}(t)=\frac{-\frac{\partial\varphi_\varepsilon}{\partial\varepsilon}(\varphi_\varepsilon^{-1}(t))}{\frac{\partial\varphi_\varepsilon}{\partial s}(\varphi_\varepsilon^{-1}(t))}. $$
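As a sketch of how this formula can be verified, take the hypothetical example $\varphi_\varepsilon(s)=e^\varepsilon s$ (not from the original post), whose inverse $\varphi_\varepsilon^{-1}(t)=e^{-\varepsilon}t$ is available in closed form, so $\frac{\partial\varphi_\varepsilon^{-1}}{\partial\varepsilon}(t)=-e^{-\varepsilon}t$ can be compared directly against the right-hand side:

```python
import math

# Hypothetical example: phi_eps(s) = exp(eps) * s, with closed-form inverse
# phi_eps^{-1}(t) = exp(-eps) * t, so d/d(eps) of the inverse is -exp(-eps) * t.
def phi(eps, s):
    return math.exp(eps) * s

def phi_inv(eps, t):
    return math.exp(-eps) * t

def d_eps_inverse(eps, t):
    """Right-hand side of the formula: -(d_eps phi)/(d_s phi) at s = phi^{-1}(t)."""
    s = phi_inv(eps, t)
    d_eps_phi = math.exp(eps) * s   # partial of phi w.r.t. eps
    d_s_phi = math.exp(eps)         # partial of phi w.r.t. s
    return -d_eps_phi / d_s_phi

eps, t, h = 0.7, 2.0, 1e-6
# Finite-difference approximation of d/d(eps) phi^{-1}(t) for comparison
fd = (phi_inv(eps + h, t) - phi_inv(eps - h, t)) / (2 * h)
print(d_eps_inverse(eps, t))  # equals -t * exp(-eps) analytically
print(fd)                     # should agree closely
```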

0

I came across this question as well, and while the original answer gives a formula for the derivative, it does not state conditions under which the derivative exists. To address this, we use the implicit function theorem.

For all $t \in \mathbb{R}$, define $f_t: \mathbb{R} \times \mathbb{R} \to \mathbb{R}$ by $$f_t(\varepsilon, s) = \varphi_\varepsilon(s) - t$$

Note that $\varphi_\varepsilon^{-1}(t)$ satisfies $$f_t(\varepsilon, \varphi_\varepsilon^{-1}(t)) = 0$$

Given some $\varepsilon_0 \in \mathbb{R}$, suppose $f_t$ is continuously differentiable and $\partial_s f_t(\varepsilon_0, \varphi_{\varepsilon_0}^{-1}(t))$ is non-zero. (In particular, this requires that the partial derivative $\partial_\varepsilon \varphi_\varepsilon$ exist.) Then by the implicit function theorem, there exists an open set $U \subset \mathbb{R}$ containing $\varepsilon_0$ and a unique, continuously differentiable function $g_t: U \to \mathbb{R}$ such that $$g_t(\varepsilon_0) = \varphi_{\varepsilon_0}^{-1}(t)$$ $$f_t(\varepsilon, g_t(\varepsilon)) = 0$$ for all $\varepsilon \in U$, with derivative given by $$g'_t(\varepsilon) = -\frac{\partial_\varepsilon f_t(\varepsilon, g_t(\varepsilon))}{\partial_s f_t(\varepsilon, g_t(\varepsilon))} = -\frac{\partial_\varepsilon \varphi_\varepsilon(g_t(\varepsilon))}{\partial_s \varphi_\varepsilon(g_t(\varepsilon))}.$$

In particular, evaluating at $\varepsilon = \varepsilon_0$, we have $$g'_t(\varepsilon_0) = -\frac{\partial_\varepsilon \varphi_\varepsilon(\varphi_{\varepsilon_0}^{-1}(t))\big|_{\varepsilon=\varepsilon_0}}{\partial_s \varphi_{\varepsilon_0}(\varphi_{\varepsilon_0}^{-1}(t))},$$ which is the same formula as in the original answer, but with precise conditions on its existence.
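The point of the implicit-function-theorem version is that it applies even when $\varphi_\varepsilon^{-1}$ has no closed form. As a hedged sketch, take the hypothetical example $\varphi_\varepsilon(s) = s + \varepsilon\sin s$ (not from the original answers), which is strictly increasing for $|\varepsilon| < 1$ and hence invertible, though its inverse must be computed numerically; the formula for $g'_t$ can then be checked against a finite difference of the numerically inverted map:

```python
import math

# Hypothetical example: phi_eps(s) = s + eps*sin(s), strictly increasing
# for |eps| < 1, hence invertible, but with no closed-form inverse.
def phi(eps, s):
    return s + eps * math.sin(s)

def phi_inv(eps, t, lo=-10.0, hi=10.0):
    """Invert phi numerically by bisection; the root is unique since phi is increasing in s."""
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if phi(eps, mid) < t:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def dg_deps(eps, t):
    """Implicit-function-theorem formula: -(d_eps phi)/(d_s phi) at s = g_t(eps)."""
    s = phi_inv(eps, t)
    return -math.sin(s) / (1.0 + eps * math.cos(s))

eps0, t, h = 0.3, 1.5, 1e-6
# Finite difference of eps -> phi_eps^{-1}(t), computed via the numeric inverse
fd = (phi_inv(eps0 + h, t) - phi_inv(eps0 - h, t)) / (2 * h)
print(dg_deps(eps0, t), fd)  # the two values should agree closely
```

Note that the non-vanishing denominator condition $\partial_s\varphi_\varepsilon \neq 0$ holds here automatically, since $1 + \varepsilon\cos s \ge 1 - |\varepsilon| > 0$.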