
In the article http://floating-point-gui.de/errors/comparison/, a method for comparing floating-point numbers is suggested:

There is an alternative to heaping conceptual complexity onto such an apparently simple task: instead of comparing a and b as real numbers, we can think about them as discrete steps and define the error margin as the maximum number of possible floating-point values between the two values.

This is conceptually very clear and easy and has the advantage of implicitly scaling the relative error margin with the magnitude of the values. Technically, it’s a bit more complex, but not as much as you might think, because IEEE 754 floats are designed to maintain their order when their bit patterns are interpreted as integers.

So instead of some complex variant of abs(b - a) < ε, we interpret a and b as integers and test abs(b - a) < ε(steps(a, b)), where ε varies with the density of floating-point numbers in the vicinity of a and b. But how, exactly?

Also, does this method really alleviate checking for NaNs and Infs? Or are some of these "heaps of conceptual complexity" inevitable if you want a solid, generic method of floating-point comparison?

The following C code produces 1677722:

#include <stdio.h>

int main(int argc, char *argv[])
{
    double x, y, z, a, b;
    int steps;

    sscanf("0.1", "%f", &x);
    sscanf("0.2", "%f", &y);
    sscanf("0.15", "%f", &z);

    a = x + y;
    b = z + z;
    steps = *(int*)&b - *(int*)&a;

    printf("%d\n", steps);
}
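For comparison, here is a sketch of the same measurement that parses the literals into doubles with %lf and diffs the full 64-bit bit patterns (assuming IEEE 754 binary64 doubles; memcpy replaces the pointer cast to avoid aliasing problems):

#include <stdint.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    double x, y, z, a, b;
    int64_t ia, ib;

    sscanf("0.1",  "%lf", &x);
    sscanf("0.2",  "%lf", &y);
    sscanf("0.15", "%lf", &z);

    a = x + y;
    b = z + z;

    /* copy the raw 64-bit patterns into integers */
    memcpy(&ia, &a, sizeof ia);
    memcpy(&ib, &b, sizeof ib);

    printf("%lld\n", (long long)(ib - ia));
    return 0;
}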

Edit: Rephrased so the word "discrete" is not misused. Rewrote "steps(a, b) < ε(a, b)" into "abs(b - a) < ε(steps(a, b))" to indicate that it may still just be floating-point operations, but that ε is chosen variably.

sshine

3 Answers


There are a few fine tricks in the IEEE 754 format that allow integer operations to be used for comparisons or for rounding. This is useful for hardware implementations, for CPUs without an FPU, and for optimized libraries that need fast comparisons.

Caveats:
- The sign bit (MSB) must be handled separately.
- NaNs must be checked.

When comparing zeros, denormals, normals, or infinities of the same sign, integer comparison works for floating-point numbers.
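A minimal sketch of how those caveats can be handled for 32-bit floats (the helper names ordered_bits and nearly_equal are illustrative, not part of the answer; memcpy does the bit reinterpretation):

#include <math.h>
#include <stdint.h>
#include <string.h>

/* Fold a float's bit pattern onto a signed number line so that
   ordering is preserved across zero: floats with the sign bit set
   are mapped below the non-negative ones, and -0.0 meets +0.0. */
static int32_t ordered_bits(float f)
{
    int32_t i;
    memcpy(&i, &f, sizeof i);      /* reinterpret without aliasing UB */
    return (i < 0) ? INT32_MIN - i : i;
}

/* "Equal within max_ulps representable floats", with the NaN check
   the answer calls for (comparisons involving NaN are never equal). */
static int nearly_equal(float a, float b, int32_t max_ulps)
{
    if (isnan(a) || isnan(b))
        return 0;
    int64_t d = (int64_t)ordered_bits(a) - ordered_bits(b);
    return (d < 0 ? -d : d) <= max_ulps;
}

With this mapping, adjacent representable floats differ by one, so max_ulps is the "number of possible floating-point values between the two values" from the question.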

Grabul

If you graph $y = float(x)$, that is, the float value obtained by interpreting/casting the integer x bitwise as a float, you get an exponential curve approximated by a piecewise linear function: the value of y increases in fixed steps proportional to $2^{exponent}$ within each range where $exponent$ is constant. So the density of representable values undergoes discontinuous jumps as you move along x, but overall it falls off roughly exponentially, i.e. like $1/y$, as x increases.

So what the author is suggesting is to scale the acceptable error by allowing a fixed difference $dx$ in x, i.e. a fixed number of representable values between the two comparands. If $dx$ falls within one line segment, the allowed error is exactly $dx$ times the local step size (it's slightly more complex if it crosses two or more segments), and it scales up as $exponent$ increases. So we get a kind of auto-adapting error that scales approximately with the inverse of the local density.
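A small check of that step-size behaviour (a sketch; nextafterf from <math.h> gives the next representable float, so the difference is the local spacing):

#include <math.h>
#include <stdio.h>

int main(void)
{
    /* The gap to the next representable float grows with the
       exponent: it doubles every time the value crosses a power
       of two, matching the piecewise-constant density described
       above. */
    float samples[] = { 1.0f, 2.0f, 4.0f, 1024.0f, 1048576.0f };
    for (int k = 0; k < 5; k++) {
        float x = samples[k];
        printf("near %g the step to the next float is %g\n",
               x, nextafterf(x, INFINITY) - x);
    }
    return 0;
}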

KWillets


I think what this is describing is a matter of scaling.

Say you are comparing two floating-point numbers to a precision of two decimal places.

If you multiply both by 100 you have raised their values by two orders of magnitude, and end up with integers to compare.

This can also be computationally more efficient, since a floating-point operation generally takes longer than the corresponding integer operation.
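A toy version of that idea (a sketch only; the rounding step is an assumption needed to actually land on integers, since multiplying by 100 alone still leaves floating-point values):

#include <math.h>
#include <stdio.h>

/* Compare two values to two decimal places by scaling and rounding
   to integers (llround is from <math.h>). */
static int equal_to_two_places(double a, double b)
{
    return llround(a * 100.0) == llround(b * 100.0);
}

int main(void)
{
    printf("%d\n", equal_to_two_places(0.1 + 0.2, 0.3));  /* prints 1 */
    return 0;
}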

SDsolar