6

There's tons of research on program transformations for optimization. Is there any research on transformations that improve numeric stability? Examples of such transformations might include:

  • Transform $\log(\exp(a)+\exp(b))$ into $\max(a,b)+\log(\exp(a-\max(a,b))+\exp(b-\max(a,b)))$
  • Convert multiplication by an inverse matrix, $A^{-1}b$, into a call to a linear system solver for $Ax = b$.
  • Automatically perform multiplications of small numbers in the log domain.
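The first and third tricks can be sketched in a few lines of plain Python (a minimal illustration of the transformations themselves, not taken from any tool mentioned below; the matrix trick would instead replace an explicit inverse with a factorization-based solve in a linear-algebra library):

```python
import math

def logsumexp(a, b):
    """Stable log(exp(a) + exp(b)).

    Factoring out max(a, b) keeps both exponentials <= 1, so neither
    can overflow; the naive form overflows as soon as exp(a) or exp(b)
    exceeds the floating-point range.
    """
    m = max(a, b)
    return m + math.log(math.exp(a - m) + math.exp(b - m))

# The naive expression math.log(math.exp(1000) + math.exp(1000))
# raises OverflowError, while the rewritten form works fine:
print(logsumexp(1000.0, 1000.0))  # 1000 + log(2)

# Log-domain multiplication: a product of tiny probabilities
# underflows to 0.0, but the sum of their logs is representable.
probs = [1e-200] * 3
naive_product = math.prod(probs)               # underflows to 0.0
log_product = sum(math.log(p) for p in probs)  # finite, ~ -1381.6
```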

All the tricks like this that I'm aware of for improving numeric stability are pretty standard, and something that every "good" numeric programmer applies routinely. Since the tricks are so standard and so mechanical, it seems plausible that a compiler could do them for us.

Mike Izbicki

2 Answers

8

There actually is some research on improving the numerical stability of floating-point expressions: the Herbie project. Herbie is a tool that automatically improves the accuracy of floating-point expressions. It's not comprehensive, but it will find a lot of accuracy-improving transformations automatically.

Cheers,

Alex Sanchez-Stern

3

A quite interesting piece of work is the stochastic approach of Schkufza, Sharma, and Aiken. Note that in general the transformed code is not guaranteed to be correct, but they give a really nice argument for probabilistic correctness: Stochastic Optimization of Floating-Point Programs with Tunable Precision, PLDI 2014.

Edit: The above work is designed to optimize for speed while keeping (stochastic) correctness. It may be possible to use it for the opposite purpose though (see my comment below).


I just thought of some work more related to Alex Stern's reference, by Thomas Wahl and Jaideep Ramachandran that can be found here.

cody