10

Current floating-point types (ANSI C float, double) can only represent an approximation of a real number.
Is there any way to represent real numbers without error?
Here's an idea I had, which is anything but perfect.

For example, 1/3 is 0.33333333... (base 10) or 0.01010101... (base 2), but also 0.1 (base 3).
Is it a good idea to implement this "structure":

base, mantissa, exponent

so 1/3 could be 3^-1

{[11] = base 3, [1.0] mantissa, [-1] exponent}
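In C, the idea might look something like this sketch (the field names and the use of an integer mantissa are just illustrative):

```c
/* Sketch of the proposed representation: the value is
 * mantissa * base^exponent, so 1/3 is { base 3, mantissa 1, exponent -1 }. */
struct exact_number {
    unsigned  base;      /* radix, e.g. 3              */
    long long mantissa;  /* significant digits, e.g. 1 */
    int       exponent;  /* power of the base, e.g. -1 */
};
```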

Any other ideas?

Joey Eremondi
incud

8 Answers

20

It all depends what you want to do.

For example, what you show is a great way of representing rational numbers. But it still can't represent something like $\pi$ or $e$ perfectly.

In fact, many languages such as Haskell and Scheme have built-in support for rational numbers, storing them in the form $\frac{a}{b}$ where $a,b$ are integers.

The main reason that these aren't widely used is performance. Floating-point numbers are a bit imprecise, but their operations are implemented directly in hardware. Your proposed system allows for greater precision, but each arithmetic operation requires several steps in software, as opposed to a single instruction performed in hardware.
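To make the performance point concrete, here is a minimal sketch of exact rational arithmetic in C (the type and function names are illustrative, not from any existing library); note how many integer operations one exact addition costs compared to a single hardware floating-point add:

```c
#include <stdio.h>

typedef struct { long long num, den; } rational;

static long long gcd(long long a, long long b) {
    while (b != 0) { long long t = a % b; a = b; b = t; }
    return a < 0 ? -a : a;
}

/* Reduce to lowest terms and keep the denominator positive. */
static rational normalize(rational r) {
    long long g = gcd(r.num, r.den);
    if (g != 0) { r.num /= g; r.den /= g; }
    if (r.den < 0) { r.num = -r.num; r.den = -r.den; }
    return r;
}

/* Exact addition: several multiplies, an add, and a gcd, versus one hardware fadd. */
static rational add(rational a, rational b) {
    rational r = { a.num * b.den + b.num * a.den, a.den * b.den };
    return normalize(r);
}

int main(void) {
    rational third = { 1, 3 }, sixth = { 1, 6 };
    rational sum = add(third, sixth);   /* 1/3 + 1/6 = 1/2, with no rounding error */
    printf("%lld/%lld\n", sum.num, sum.den);
    return 0;
}
```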

It's known that some real numbers are uncomputable, such as numbers whose digits encode the halting problem. For such a number there is no algorithm enumerating its digits, unlike $\pi$, where we can calculate the $n$th digit as long as we are willing to wait long enough.

If you want exact results for irrational or transcendental numbers, you'd likely need to use some sort of symbolic algebra system, then get a final answer in symbolic form, which you could approximate to any number of digits. However, because of the uncomputability problems outlined above, this approach is necessarily limited. It is still good for things like approximating integrals or infinite series.

Joey Eremondi
20

There is no way to represent all real numbers without errors if each number is to have a finite representation. There are uncountably many real numbers but only countably many finite strings of 1's and 0's that you could use to represent them with.

David Richerby
8

Your idea does not work because a number represented in base $b$ with mantissa $m$ and exponent $e$ is the rational number $m \cdot b^{e}$; thus your representation covers precisely the rational numbers and no others. You cannot represent $\sqrt{2}$, for instance.

There is a whole branch of computable mathematics which deals with exact real arithmetic. Many data structures for representing exact real numbers have been proposed: streams of digits, streams of affine contractions, Cauchy sequences of rationals, Cauchy sequences of dyadic rationals, Dedekind cuts, sequences of shrinking intervals, etc. There are implementations of exact real arithmetic based on these ideas, for instance iRRAM, Marshall, and others.

Of these, iRRAM is the most mature and efficient, and Marshall is an experimental project, while a third, a student project, is the most easily accessible one. It has a very nice introduction explaining the issues around real-number computation; I highly recommend that you look at it.

Let me make a remark. Someone will object that an infinite object cannot be represented by a computer. In some sense this is true, but in another it is not. We never ever have to represent an entire real number, we only need a finite approximation at each stage of the computation. Thus, we only need a representation which can represent a real up to any given precision. Of course, once we run out of computer memory we run out of computer memory -- but that is a limitation of the computer, not the representation itself. This situation is no different than many others in programming. For instance, people use integers in Python and they think of them as "arbitrarily large" even though of course they cannot exceed the size of available memory. Sometimes infinity is a useful approximation for a very large finite number.
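As a toy illustration of the "finite approximation at each stage" idea, here is a sketch in C (the names and the particular representation are mine, not taken from iRRAM or Marshall): a real $x$ is a function that, given a precision $k$, returns the integer $n$ with $n/2^k \le x < (n+1)/2^k$, and $\sqrt{2}$ is implemented exactly with integer arithmetic:

```c
#include <stdint.h>
#include <stdio.h>

/* A real number as a function: ask it for precision k, get back the integer n
 * with n/2^k <= x < (n+1)/2^k.  We never hold "all of x", only approximations. */
typedef int64_t (*real_approx)(unsigned k);

/* sqrt(2) to precision 2^-k: the largest n with n^2 <= 2*4^k, found by
 * binary search over integers, so no floating-point rounding is involved. */
static int64_t sqrt2_approx(unsigned k) {
    int64_t target = (int64_t)2 << (2 * k);   /* 2 * 4^k, fits in 64 bits for k <= 30 */
    int64_t lo = 0, hi = (int64_t)2 << k;     /* sqrt(2 * 4^k) < 2^(k+1) */
    while (lo < hi) {
        int64_t mid = lo + (hi - lo + 1) / 2;
        if (mid * mid <= target) lo = mid; else hi = mid - 1;
    }
    return lo;
}

int main(void) {
    real_approx x = sqrt2_approx;
    for (unsigned k = 4; k <= 28; k += 8)
        printf("k=%2u: sqrt(2) ~ %lld / 2^%u\n", k, (long long)x(k), k);
    return 0;
}
```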

Furthermore, I often hear the claim that computers can only deal with computable real numbers. This misses two important points. First, computers have access to data from the external world, so we would also have to make the (unverifiable) assumption that the external world is computable as well. Second, we need to distinguish between what reals a computer can compute and what reals it can represent. For instance, if we choose streams of digits as the representation of reals, then it is perfectly possible to represent a non-computable real: if someone gave it to us, we would know how to represent it. But if we choose to represent reals as pieces of source code that compute digits, then we could not represent non-computable reals, obviously.

In any case, this topic is best tackled with some further reading.

Andrej Bauer
7

There are many effective rational-number implementations, but one representation that has been proposed many times and can even handle some irrationals quite well is continued fractions.

Quote from Continued Fractions by Darren C. Collins:

Theorem 5-1. The continued fraction expression of a real number is finite if and only if the real number is rational.

Quote from MathWorld - Periodic Continued Fractions:

... a continued fraction is periodic iff it represents a root of a quadratic polynomial.

i.e. all irrational roots of quadratic polynomials can be expressed exactly as periodic continued fractions.
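For instance, $\sqrt{2} = [1; 2, 2, 2, \ldots]$, so the finite description "initial term 1, then the term 2 repeating" already pins the number down exactly. A small C sketch (illustrative only) that unfolds this into rational convergents via the standard recurrence $p_n = a_n p_{n-1} + p_{n-2}$, $q_n = a_n q_{n-1} + q_{n-2}$:

```c
#include <stdio.h>

int main(void) {
    /* Convergents of sqrt(2) = [1; 2, 2, 2, ...]: start from p_{-1}/q_{-1} = 1/0
     * and p_0/q_0 = 1/1 (the integer part), then apply the recurrence. */
    long long p_prev = 1, p = 1;
    long long q_prev = 0, q = 1;
    for (int n = 1; n <= 10; ++n) {
        long long a = 2;                   /* every later partial quotient is 2 */
        long long p_next = a * p + p_prev;
        long long q_next = a * q + q_prev;
        p_prev = p; p = p_next;
        q_prev = q; q = q_next;
        printf("convergent %2d: %lld/%lld = %.12f\n", n, p, q, (double)p / q);
    }
    return 0;
}
```

Each convergent is an exact rational, and successive convergents bracket $\sqrt{2}$ ever more tightly.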

There is also a supposedly exact continued fraction for $\pi$, which surprised me until @AndrejBauer pointed out that it actually isn't.

OldCurmudgeon
5

There are a number of "exact real" suggestions in the comments (e.g. continued fractions, linear fractional transformations, etc). The typical catch is that while you can compute answers to a formula, equality is often undecidable.

However, if you're just interested in algebraic numbers, then you're in luck: The theory of real closed fields is complete, o-minimal, and decidable. This was proven by Tarski in 1948.

But there's a catch. You don't want to use Tarski's algorithm, since it's in the complexity class NONELEMENTARY, which is as impractical as impractical algorithms can get. There are more recent methods which get the complexity down to DEXP, which is the best we currently know.

Note that the problem is NP-hard because it includes SAT. However, it's not known (or believed) to be in NP.

EDIT I'm going to try to explain this a little more.

The framework for understanding all of this is a decision problem known as Satisfiability Modulo Theories, or SMT for short. Basically, we want to solve SAT for a theory built on top of classical logic.

So we start with first order classical logic with an equality test. Which function symbols we want to include and what their axioms are determine whether or not the theory is decidable.

There are lots of interesting theories expressed in the SMT framework. For example, there are theories of data structures (e.g. lists, binary trees, etc) which are used to help prove programs correct, and the theory of Euclidean geometry. But for our purpose, we're looking at theories of different kinds of number.

Presburger arithmetic is the theory of natural numbers with addition. This theory is decidable.

Peano arithmetic is the theory of natural numbers with addition and multiplication. This theory is not decidable, as famously proven by Gödel.

Tarski arithmetic is the theory of the real numbers with all field operations (addition, subtraction, multiplication, and division). Interestingly, this theory is decidable. This was a highly counter-intuitive result at the time. You might assume that because it's a "superset" of the natural numbers it's "harder", but this isn't the case; compare linear programming over the rationals with linear programming over the integers, for example.

It may not seem obvious that satisfiability is all you need, but it is. For example, if you want to test whether or not the positive square root of 2 is equal to the real cube root of 3, you can express this as the satisfiability problem:

$$\exists x. x>0 \wedge x^2 - 2 = 0 \wedge x^3 - 3 = 0$$
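For this particular instance a decision procedure would answer "unsatisfiable": if $x^2 = 2$ and $x^3 = 3$, then $x^6 = (x^2)^3 = 8$ and also $x^6 = (x^3)^2 = 9$, a contradiction, so no such $x$ exists and hence $\sqrt{2} \neq \sqrt[3]{3}$.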

So then there's the question as to what other operations you can add to Tarski arithmetic and still keep decidability. The next obvious things to add are elementary transcendental operations like $e^x$, and the trigonometric functions.

It turns out that the theory of reals with $\sin$ is undecidable, because $\{ \frac{x}{\pi} | \sin x = 0 \}$ is the integers. Given the reals and $\sin$, you can construct the theory of integers, and the theory of integers is undecidable.

Tarski conjectured that real fields plus $e^x$ is undecidable. I don't know if anyone has proven that it isn't decidable, but I know that nobody has yet proven that it is decidable. He believed this was likely because of what happens in the complex plane: if you have $e^{ix}$, then you automatically have trigonometry.


Alfred Tarski (1948), A Decision Method for Elementary Algebra and Geometry.

Pseudonym
1

There is no way to exactly represent an arbitrary real number, since real numbers cannot really be represented in the real world. We can give approximations of them, arbitrary symbols for them like $\pi$, or ways to get them like $\sqrt2$, but their actual value can never be truly represented. You would have to start dealing with them purely symbolically and hope they factor out. However, even then, there is no way to account for all of them since, as David said, there are not enough strings to give them all symbolic representations.

lPlant
1

It's possible to represent a very large class of numbers called algebraic numbers exactly, by treating them as roots of polynomials.

This article as well as this mathoverflow question have more information, as well as many more resources about this topic. As far as I know, this method still fails to represent transcendental numbers such as $\pi$ or $e$, unless some generalization of it has been discovered. There are other issues with this method too, such as additional solutions appearing out of "nowhere" (also known as the complex plane) even for simple additions. It's a tradeoff; use your good judgement to decide whether it's useful for you.
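As a rough sketch of how such a representation can look in C (the record layout and names are hypothetical, not taken from the linked article): an algebraic number is stored as an integer polynomial plus an interval isolating exactly one of its real roots, and the interval is refined only when more precision is needed:

```c
#include <stdio.h>

/* An algebraic number: an integer polynomial plus an interval that
 * contains exactly one of its real roots. */
typedef struct {
    int       degree;
    long long coeff[8];   /* coeff[i] multiplies x^i */
    double    lo, hi;     /* isolating interval, refined on demand */
} algebraic;

static double eval(const algebraic *a, double x) {
    double v = 0.0;
    for (int i = a->degree; i >= 0; --i)   /* Horner's rule */
        v = v * x + (double)a->coeff[i];
    return v;
}

/* Shrink the isolating interval by bisection until it is narrower than eps.
 * (A serious implementation would use exact rational endpoints; doubles
 * keep this sketch short.) */
static void refine(algebraic *a, double eps) {
    while (a->hi - a->lo > eps) {
        double mid = 0.5 * (a->lo + a->hi);
        if (eval(a, a->lo) * eval(a, mid) <= 0.0) a->hi = mid;
        else                                      a->lo = mid;
    }
}

int main(void) {
    algebraic sqrt2 = { 2, { -2, 0, 1 }, 1.0, 2.0 };  /* the root of x^2 - 2 in [1, 2] */
    refine(&sqrt2, 1e-9);
    printf("sqrt(2) lies in [%.10f, %.10f]\n", sqrt2.lo, sqrt2.hi);
    return 0;
}
```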

More Axes
-1

You cannot represent all real numbers in a computer, but you can represent many. You could use fractions, which can represent more numbers exactly than floats can. You could also do more sophisticated things, like representing a number as a root of some polynomial, together with an approximation that converges to that root under Newton's method.

Alice Ryhl