
Can you explain what finite precision is? Why is finite precision a problem in machine learning?

GeorgeOfTheRF

2 Answers


Finite precision means representing a number with a fixed number of digits, so the value must be rounded or truncated. In many cases this is necessary or appropriate. For example, 1/3 and the transcendental numbers $e$ and $\pi$ all have infinite decimal representations. In the programming language C, a double is 8 bytes (64 bits) and precise to approximately 15–16 significant decimal digits. See here.

http://www.learncpp.com/cpp-tutorial/25-floating-point-numbers/

To represent one of these numbers concretely on a (finite) computer, there must be some sort of compromise. For example, we could write 1/3 to 9 digits as .333333333, which is slightly less than 1/3.
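A small sketch of this in Python, which uses 64-bit doubles for its floats: neither 1/3 nor 0.1 can be stored exactly in binary, and the rounding is visible in ordinary arithmetic.

```python
# 1/3 is stored as the nearest representable binary double; printing
# extra digits exposes where the stored value departs from 1/3.
third = 1 / 3
print(f"{third:.20f}")

# Rounding also shows up directly in arithmetic: 0.1 and 0.2 are both
# rounded on input, so their sum is not exactly the double nearest 0.3.
print(0.1 + 0.2 == 0.3)  # False
```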

These compromises compound under arithmetic operations, and numerically unstable algorithms are prone to accumulating such errors. This is why the SVD is often used to compute PCA rather than eigendecomposing the covariance matrix: explicitly forming the covariance matrix squares the condition number of the problem.
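A minimal illustration of stable versus unstable algorithms, using variance (the diagonal of the covariance matrix) rather than a full PCA: the textbook formula $E[x^2] - E[x]^2$ subtracts two huge, nearly equal terms, so catastrophic cancellation destroys the answer, while the two-pass formula subtracts the mean first and stays accurate.

```python
def variance_unstable(xs):
    # One-pass formula E[x^2] - E[x]^2: both terms are ~1e16 here,
    # so their tiny difference is lost to rounding.
    n = len(xs)
    return sum(x * x for x in xs) / n - (sum(xs) / n) ** 2

def variance_stable(xs):
    # Two-pass formula: center the data first, then square the
    # (small) deviations. No cancellation of large terms.
    n = len(xs)
    mean = sum(xs) / n
    return sum((x - mean) ** 2 for x in xs) / n

data = [1e8 + x for x in (0.0, 0.1, 0.2)]  # tiny spread, huge mean
print(variance_unstable(data))  # badly wrong, possibly even negative
print(variance_stable(data))    # close to the true value, 0.02/3
```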

http://www.sandia.gov/~smartin/presentations/SMartin_Stability.pdf

https://en.wikipedia.org/wiki/Numerical_stability

In the naive Bayes classifier you will often see the product of per-feature probabilities transformed into a sum of logarithms, which is less prone to underflow and rounding errors.

https://en.wikipedia.org/wiki/Naive_Bayes_classifier#Multinomial_naive_Bayes
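A quick sketch of why the log transform matters, with made-up per-feature likelihoods: the direct product of many small probabilities underflows to exactly 0.0 in double precision, while the sum of their logarithms remains a finite score that can still be compared across classes.

```python
import math

# 80 features, each with a (hypothetical) likelihood of 1e-5.
probs = [1e-5] * 80

# Direct product: 1e-400 is far below the smallest positive double
# (~5e-324), so the running product underflows to exactly 0.0.
product = 1.0
for p in probs:
    product *= p
print(product)  # 0.0

# Sum of logs: the same quantity in log space is just a moderate
# negative number, with no underflow.
log_score = sum(math.log(p) for p in probs)
print(log_score)  # about -921.0
```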


One simple example: the vanishing gradient problem in deep learning. It is not primarily a finite precision problem, but finite precision is part of it.

Martin Thoma