
Suppose I have a 32-bit integer $x$, and I want to find $\{ x_i \}_{i \in 1\dots\ell}$ such that $x = e + \sum_{i=1}^\ell x_i \cdot 2^{32 - B\cdot i}$ with the error $e$ as small as possible. The parameter $\ell$ is the number of levels of the decomposition and $B$ is the number of bits per digit (so the base is $2^B$). Increasing $\ell$ or $B$, so that more of the low-order bits are covered, results in a more accurate decomposition, since the error satisfies $0 \le e < 2^{32 - B\ell}$.

Is there an algorithm that solves this kind of problem?

Leo B.
lamba

1 Answer


Assuming that $B$ and the $x_i$ are integers with $0 \le x_i < 2^B$, you can recast the question as

$$\frac x{2^{32}}\approx\sum_{i=1}^\ell x_iP^i$$ where $P:=2^{-B}$, so you are looking for the best approximation of $\dfrac x{2^{32}}<1$ in base $2^B$ using $\ell$ digits.

Just write the binary representation of that number and read off the digits in $\ell$ slices of $B$ bits each, starting from the most significant end.

E.g. $\frac{123456789_{10}}{2^{32}}=0.00000111010110111100110100010101_2=0.01655715052_8$ (grouping the bits in threes), so the octal digits are exactly the $x_i$ for $B=3$.
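This slicing can be sketched in Python with shifts and masks (the function name `decompose` and the `width` parameter are my own; it assumes $B\cdot\ell \le 32$ and unsigned $x$):

```python
def decompose(x, B, ell, width=32):
    """Split a width-bit integer x into ell digits x_i in [0, 2^B),
    so that x = e + sum_i x_i * 2^(width - B*i), taking digits from
    the most significant end. Assumes B * ell <= width."""
    mask = (1 << B) - 1
    digits = [(x >> (width - B * i)) & mask for i in range(1, ell + 1)]
    e = x & ((1 << (width - B * ell)) - 1)  # the uncovered low-order bits
    return digits, e

# The example above: 123456789 / 2^32 in base 8 (B = 3), keeping 10 digits
digits, e = decompose(123456789, B=3, ell=10)
print(digits, e)  # [0, 1, 6, 5, 5, 7, 1, 5, 0, 5] and error 1
```

Reconstructing $e + \sum_i x_i \cdot 2^{32-3i}$ from this output recovers $x$ exactly, and the octal digits match the expansion shown above.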