
I want to minimize a function that has sharp gradients close to each local minimum. Due to process tolerances, I want to find solutions that meet some minimum criterion (e.g. lower than x) but have a shallow gradient near the found minimum.

My question is: is this simply a case of me assigning a penalty factor to minima with sharp nearby gradients, or is there a class of algorithm that can handle this sort of constraint as part of its optimization routine?
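To make the penalty idea concrete, here is a rough sketch of the kind of thing I have in mind, using SciPy; the toy objective and the finite-difference curvature proxy are just placeholders for illustration:

```python
import numpy as np
from scipy.optimize import minimize

def f(x):
    # Toy 1-D objective with minima of varying sharpness; stand-in for my real function.
    return np.sin(5 * x[0]) * np.exp(-0.1 * x[0] ** 2) + 0.05 * x[0] ** 2

def sharpness(x, h=1e-2):
    # Crude local-curvature proxy: central second difference along each coordinate.
    x = np.asarray(x, dtype=float)
    total = 0.0
    for i in range(len(x)):
        e = np.zeros_like(x)
        e[i] = h
        total += abs(f(x + e) - 2.0 * f(x) + f(x - e)) / h ** 2
    return total

def penalized(x, lam=0.01):
    # Penalize sharp (high-curvature) minima relative to shallow ones.
    return f(x) + lam * sharpness(x)

res = minimize(penalized, x0=np.array([2.0]), method="Nelder-Mead")
print(res.x, f(res.x), sharpness(res.x))
```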

Sean

2 Answers


Well, that’s a tricky question. Do the gradients become large because the function is discontinuous there? (Gradient descent will likely not work in that case.) Is it because the variables are on very different scales (Newton and related methods would help)? Is there some sort of barrier that limits the domain (e.g. log(x))? Does the function have a flat, uniform region, e.g. __/? Are these saddle points? Is your problem constrained, with these solutions lying on the boundaries? In some of these cases, switching to something different such as subgradient methods might help you, but this depends heavily on why your problem has these optima that you want to avoid.
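If nondifferentiability turns out to be the culprit, a subgradient method replaces the gradient with any valid subgradient and uses a diminishing step size. A minimal sketch on |x| (a toy kinked objective, not your function):

```python
import numpy as np

def f(x):
    # Toy nondifferentiable objective: |x| has a kink at its minimum.
    return abs(x)

def subgradient(x):
    # sign(x) is a valid subgradient of |x|; at x = 0 any value in [-1, 1] works.
    return np.sign(x)

x = 3.0
for k in range(1, 101):
    step = 1.0 / k          # diminishing step size, standard for subgradient methods
    x = x - step * subgradient(x)

print(x, f(x))
```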

Generally speaking, gradient-based techniques converge to whichever local optimum they find first, and if you are not happy with that, you’ll have to layer other heuristics on top (e.g. add random restarts, incorporate penalties at known optima, or use derivative-free methods). Momentum-based techniques such as Adam might also help you dodge these minima.
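As a rough illustration of the restart-plus-penalty idea (the toy objective, the acceptance threshold, and the curvature proxy below are all made up for illustration): run a local optimizer from several random starts, keep only the minima that meet your acceptance criterion, and among those prefer the flattest one.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

def f(x):
    # Toy 1-D objective with several minima of different depth and sharpness.
    return np.sin(5 * x[0]) * np.exp(-0.1 * x[0] ** 2) + 0.05 * x[0] ** 2

def curvature(x, h=1e-2):
    # Finite-difference estimate of |f''| as a sharpness proxy.
    e = np.array([h])
    return abs(f(x + e) - 2.0 * f(x) + f(x - e)) / h ** 2

threshold = -0.5                        # made-up "good enough" objective value
candidates = []
for _ in range(20):                     # random restarts
    x0 = rng.uniform(-5.0, 5.0, size=1)
    res = minimize(f, x0, method="Nelder-Mead")
    if res.fun < threshold:
        candidates.append((curvature(res.x), res.fun, res.x))

if candidates:
    # Among the acceptable minima, pick the one with the smallest curvature.
    flattest = min(candidates, key=lambda c: c[0])
    print("flattest acceptable minimum at", flattest[2], "value", flattest[1])
```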

anymous.asker

You can try backtracking gradient descent (as well as backtracking versions of Momentum and NAG). More details can be found in my answer at this link (and you can look at the cited paper and the linked GitHub repository for the source code and further detail):
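For orientation only, here is a minimal sketch of gradient descent with Armijo-style backtracking line search on a toy quadratic; the objective, gradient, and constants are illustrative and not taken from the cited paper or its GitHub code:

```python
import numpy as np

def f(x):
    # Toy smooth objective; replace with your own function and gradient.
    return 0.5 * float(x @ x)

def grad(x):
    return x

def backtracking_gd(x, alpha0=1.0, beta=0.5, c=1e-4, iters=100):
    for _ in range(iters):
        g = grad(x)
        alpha = alpha0
        # Shrink the step until the Armijo sufficient-decrease condition holds.
        while f(x - alpha * g) > f(x) - c * alpha * float(g @ g):
            alpha *= beta
        x = x - alpha * g
    return x

x_star = backtracking_gd(np.array([3.0, -2.0]))
print(x_star, f(x_star))
```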

Tuyen