
I know that there are some problems that are very hard to solve in general, but become much easier and asymptotically faster if restricted to only integer values.

One such example is sorting, which can be done in $O(n+M)$ time if only integer values are allowed, where $M$ is the largest key (e.g., counting sort).
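For concreteness, here is a minimal counting-sort sketch (my own illustration, not part of the original question; the names `counting_sort` and `max_key` are made up) showing where the $O(n+M)$ bound comes from:

```python
def counting_sort(keys, max_key):
    """Sort nonnegative integer keys in O(n + max_key) time."""
    counts = [0] * (max_key + 1)   # one bucket per possible key value
    for k in keys:
        counts[k] += 1             # the key itself indexes the array
    result = []
    for value, count in enumerate(counts):
        result.extend([value] * count)
    return result

print(counting_sort([3, 1, 4, 1, 5], max_key=5))  # [1, 1, 3, 4, 5]
```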

Another example (I think) is the undirected single-source shortest path problem, which seems to be solvable in linear time if all weights are positive integers (Thorup's algorithm).

My question is: which problems become significantly easier to solve when restricted to integer values, and why?

Note: I obviously don't expect a complete listing; I just want to understand the general principle behind the speedup gained by restricting the problem space to integers.

JeD

2 Answers


There's no single answer.

  • Some algorithms are faster when dealing with small integers, because you can use small integers as the index into an array.

  • Some algorithms are faster when dealing with positive integers, because the set of positive integers contains no infinite strictly decreasing chains: every nonempty set of positive integers has a minimum.

  • Some algorithms are faster when dealing with integers because they assume a computational model where you can add or multiply integers in $O(1)$ time. It is dubious whether these algorithms are truly faster in practice, as that assumption isn't literally true, but they might be.

  • Some algorithms are faster (or simpler) when dealing with integers because integers can be not only compared, but also hashed, and the ability to hash items increases the range of options available to the algorithm designer. See the nuts-and-bolts problem for an example vaguely of this form (there are probably better examples); a toy sketch of the hashing advantage follows this list.
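As a toy illustration of the hashing point (my own sketch, not from the answer): detecting whether $n$ integers contain a duplicate takes expected $O(n)$ time with a hash set, whereas a purely comparison-based algorithm needs $\Omega(n \log n)$.

```python
def has_duplicate(items):
    """Expected O(n) duplicate detection; integers hash in O(1)."""
    seen = set()
    for x in items:
        if x in seen:        # expected O(1) membership test
            return True
        seen.add(x)
    return False

print(has_duplicate([7, 3, 9, 3]))  # True
print(has_duplicate([7, 3, 9, 1]))  # False
```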

There are probably many other principles I'm not thinking of right now.

Most of the cases I know of involve only an $O(\lg n)$ speedup. However, be warned that asymptotic $O(\lg n)$ speedups are not very robust, for several reasons. Just because algorithm A is asymptotically faster than algorithm B by a factor of $\Theta(\lg n)$ doesn't mean it will be faster in practice; it might be slower, because of constant factors or because of considerations ignored in the theoretical analysis (e.g., caching effects, the memory hierarchy, and so on). Also, the $O(\lg n)$ speedup results you see in this area are often dependent on a particular computational model and might not carry over to other computational models.

So, it's not clear how much weight you should put on this, if you are motivated by practical concerns.


D.W.

An example: Multiplying large polynomials. If two polynomials have integer coefficients, then the product has integer coefficients. Therefore an FFT can be used to calculate the product, and if we just make sure that the rounding error in each coefficient is less than 1/2, then we can round all the coefficients to the nearest integer, and get the exact result. With real coefficients, we cannot get rid of the rounding errors and have to use other methods to get a precise result.
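To make this concrete, here is a hedged sketch using NumPy's floating-point FFT (my own code, not gnasher729's); for modestly sized integer inputs the per-coefficient rounding error stays below 1/2, so rounding recovers the exact product:

```python
import numpy as np

def poly_mul_int(a, b):
    """Multiply integer-coefficient polynomials (lowest degree first) via FFT."""
    n = len(a) + len(b) - 1                   # number of product coefficients
    size = 1 << (n - 1).bit_length()          # FFT length: next power of two >= n
    fa = np.fft.rfft(a, size)
    fb = np.fft.rfft(b, size)
    prod = np.fft.irfft(fa * fb, size)[:n]    # approximate real coefficients
    return [int(round(c)) for c in prod]      # exact after rounding

# (2 + x)(1 + 3x + x^2) = 2 + 7x + 5x^2 + x^3
print(poly_mul_int([2, 1], [1, 3, 1]))        # [2, 7, 5, 1]
```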

gnasher729