I have a matrix $A$ whose entries are each a function of a variable $\epsilon$, with $\epsilon>0$. This matrix arises from Radial Basis Function (RBF) interpolation, and is symmetric positive-definite.
I will write this as $A(\epsilon)$. My goal is to find a value $\epsilon^*$ that sets the condition number of the matrix to a specific target. In other words, if $\kappa(A)$ is the condition number of $A$, I need to ensure that $\kappa(A) = \kappa_T$, where $\kappa_T$ is some target condition number. To do this, I numerically solve the equation \begin{align} f(\epsilon) = \log\left(\frac{\kappa\left(A(\epsilon)\right)}{\kappa_T} \right) = 0 \end{align} for $\epsilon$. In Matlab, I currently do this with the built-in fzero(), which uses the Brent-Dekker method. This converges rather slowly: for small dense matrices of size $50 \times 50$, for example, fzero sometimes takes 30+ iterations.
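To make the setup concrete, here is a minimal sketch of what I currently do (the names `r`, `kappaT`, and `eps0` are just placeholders for the matrix of pairwise distances $r_{ij}$, the target condition number, and an initial guess, not my actual code; the kernel is the one I define below):

```matlab
% f(ep) = log(cond(A(ep)) / kappaT), with A built from the inverse
% multiquadric kernel A_ij = 1/sqrt(1 + (ep*r_ij)^2).
f = @(ep) log(cond(1 ./ sqrt(1 + (ep * r).^2)) / kappaT);

% Solve f(ep) = 0 with Brent-Dekker, starting from the guess eps0.
epsStar = fzero(f, eps0);
```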
However, it occurred to me that I could possibly do this with Newton's method, if only I could compute the derivative of the above quantity. If $\epsilon$ is a scalar, and $A_{ij} = \frac{1}{\sqrt{1 + (\epsilon r_{ij})^2}}$, where $r_{ij}$ is some scalar independent of $\epsilon$, how do I compute the derivative $\frac{\partial f}{\partial \epsilon}$?
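For what it's worth, differentiating the individual entries seems straightforward; if I have applied the chain rule correctly, \begin{align} \frac{\partial A_{ij}}{\partial \epsilon} = -\frac{\epsilon\, r_{ij}^2}{\left(1 + (\epsilon r_{ij})^2\right)^{3/2}}. \end{align}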
However, I have no idea how to go from there to the derivative of the condition number of the matrix, since I am not really familiar with matrix calculus. I'd appreciate some help! Thanks!