
In GNU Octave, num2str(pi,"%.20f") prints 3.14159265358979311600. I understand that only the first 15 decimal places are significant. What I don't understand is why there are any additional decimal places after, say, the 16th. Where do these additional digits come from?

It seems like the machine would need more than 53 bits to obtain these extra digits. I understand that additional bits (guard bits, etc.) are stored for rounding purposes, but are there additional digits stored as well?

4 Answers


The best double-precision floating-point approximation to $\pi$ is $884279719003555 \cdot 2^{-48} = 3.141592653589793115997963468544185161590576171875$. Sixteen or seventeen decimal places are enough to represent any finite double-precision value in the sense of being closer to it than to any other double-precision value, but not enough to represent every double-precision value exactly.
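
A quick check of this in Octave (a sketch; both the integer and the power of two are exactly representable as doubles, so the product is computed exactly):

    x = 884279719003555 * 2^-48;   % candidate exact value of the double nearest pi
    disp(x == pi)                  % prints 1: it is the very same double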

benrg

Where do these additional digits come from?

They are essentially excess precision, an artifact of binary-to-decimal conversion.

It is a curious fact about binary-to-decimal conversion that if you have N bits of fractional precision, then even though those bits theoretically represent only about N/3 decimal digits of precision (more precisely, $N \log_{10} 2 \approx 0.301N$), it actually takes a full N decimal digits to represent them exactly. For example, the 4-bit binary fraction 0.1101, or 13/16, is 0.8125 in decimal, which has 4 digits. So if you have 53 bits of precision available, it can take 53 decimal digits to represent them exactly (or more, for negative exponents).
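
You can verify the 4-bit example directly in Octave; 13/16 is exactly representable, and four decimal places reproduce it digit for digit:

    printf("%.4f\n", 13/16)   % prints 0.8125: the 4-bit fraction 0.1101 needs 4 decimal digits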

It's easy to see why this happens. Start with $1/2 = 0.5_{10} = 0.1_2$. Divide it by 2: you get $0.01_2$, which obviously has one more binary bit, but you also get $0.25_{10}$, which has one more decimal digit. Divide it by 2 again: you get $0.001_2$ and $0.125_{10}$. Do it again: $0.0001_2$ and $0.0625_{10}$. There's always a 5 at the end of the decimal expansion, so it's never even, so dividing it by 2 always requires an additional digit (which is always 5 again), so you always get one more binary bit and one more decimal digit.
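
Here is that halving process as a short Octave loop; each pass adds one binary bit and one decimal digit (always a trailing 5):

    x = 0.5;
    for k = 1:5
      printf("%g\n", x)   % prints 0.5, 0.25, 0.125, 0.0625, 0.03125
      x = x / 2;
    endfor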

So if you take a 53-bit binary number like 11.001001000011111101101010100010001000010110100011000, a decent 16-digit decimal approximation of it is 3.141592653589793 (which looks a lot like pi), but an exact decimal representation of it is 3.141592653589793115997963468544185161590576171875, which not coincidentally starts to depart from pi after the 16th digit.
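
Assuming your platform's C library does exact binary-to-decimal conversion (glibc does), Octave can print that full expansion; the double nearest pi is an odd multiple of $2^{-48}$, so its decimal expansion terminates after exactly 48 places:

    printf("%.48f\n", pi)   % 3.141592653589793115997963468544185161590576171875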

The other way to convince yourself that those extra digits don't represent real precision is to try adding 1 to the last digit. If a 53-bit binary number really had 53 decimal digits of precision, you could add 1 in the last place and get 3.141592653589793115997963468544185161590576171876. But you can't. The next representable 53-bit number actually works out to 3.141592653589793560087173318606801331043243408203125, which (again not coincidentally) starts to diverge after the 16th decimal digit.
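
You can reach that next representable value in Octave with eps(pi), which gives the spacing from pi to the next larger double (again assuming exact decimal output from the C library):

    printf("%.51f\n", pi + eps(pi))   % 3.141592653589793560087173318606801331043243408203125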

Steve Summit

Instead of pi, use 1.0/3.0 or 3.0/7.0. Then you'll figure it out: the repeating decimal pattern makes it obvious where the stored binary value stops matching the true quotient.
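
For instance, printed with the same format as in the question (a sketch; on a typical IEEE 754 system you should see something like this, with the last digit or two depending on the platform's output rounding):

    num2str(1/3, "%.20f")   % ans = 0.33333333333333331483
    num2str(3/7, "%.20f")   % ans = 0.42857142857142854764

The repeating pattern visibly breaks down after roughly the 16th digit, which is exactly where the stored double diverges from the true quotient.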

gnasher729

Floating-point numbers are stored in binary form, but you print them in decimal. I guess your confusion comes from the behaviour of integers, where $n$ binary digits correspond to approximately $n/\log_2 10$ decimal ones.

This is not the case for fractions, e.g. $0.125_{(10)}$ is $0.001_{(2)}$ and $0.015625_{(10)}$ is $0.000001_{(2)}$: a short binary fraction can need far more decimal digits than the integer rule suggests.

Precision is another story: there, yes, counting digits works, and about $n/\log_2 10$ significant decimal digits suffice to get a representation close to the value stored in binary, even though its exact decimal expansion is much longer.
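
A small Octave illustration of the fraction case, using the same two values (both powers of two are stored exactly):

    printf("%.3f\n", 2^-3)   % 0.125    : 3 binary places, 3 decimal places
    printf("%.6f\n", 2^-6)   % 0.015625 : 6 binary places, 6 decimal places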