
(Was: “Are there any (relatively) easy ways for a computer to test divisibility?”)

Given two distinct arbitrary positive integers:

  1. Is there a way to determine if the smaller value is a factor of the larger value, that is computationally better than full-blown division?

  2. Is there a way to determine the remainder after dividing the larger value by the smaller value, that is computationally better than full-blown division?

If only (2) is feasible, then (1) can be simulated by testing the result of (2) against zero.
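For a *fixed* divisor, one known way to answer (1) without any runtime division is the multiply-by-modular-inverse trick (Granlund–Montgomery, popularized by Lemire): for odd $d$, $n$ is a multiple of $d$ exactly when $n \cdot d^{-1} \bmod 2^k \le \lfloor (2^k-1)/d \rfloor$. A minimal Python sketch, assuming 64-bit operands and an odd divisor:

```python
# Divisibility check for a fixed odd divisor d with no runtime division
# (a sketch of the Granlund–Montgomery / Lemire trick).  The one division
# below happens only once, when the test for d is built.

K = 64
MASK = (1 << K) - 1

def make_divisibility_test(d):
    """Return a function testing n % d == 0 for 0 <= n < 2**64, d odd."""
    assert d % 2 == 1
    inv = pow(d, -1, 1 << K)   # modular inverse of d mod 2**64 (Python 3.8+)
    limit = MASK // d          # precomputed once per divisor
    return lambda n: (n * inv) & MASK <= limit

is_mult_of_7 = make_divisibility_test(7)
print(is_mult_of_7(35))   # True
print(is_mult_of_7(36))   # False
```

The per-test cost is one multiply, one mask, and one compare; the single division to precompute `limit` is amortized over every later test against the same divisor.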

Edit: I want to try Wilson’s Theorem, which uses a divisibility test to check if a number is prime or not. If there’s a way to not have to do an integer mod operation, and if it’s faster, I would want to use it to save time.
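For reference, a direct (and, as the comments below note, hopelessly slow) rendering of Wilson's theorem in Python, reducing the running factorial modulo $n$ at each step — note that it still relies on the very `%` operation the question hopes to avoid:

```python
def is_prime_wilson(n):
    """Wilson's theorem: n > 1 is prime iff (n-1)! ≡ -1 (mod n)."""
    if n < 2:
        return False
    fact = 1
    for k in range(2, n):
        fact = fact * k % n   # reduce each step to keep the product small
    return fact == n - 1

print([n for n in range(2, 20) if is_prime_wilson(n)])
# [2, 3, 5, 7, 11, 13, 17, 19]
```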

Edit 2: I keep getting duplicate flags pointing to Trick to find multiples mentally. But the two questions are working at different levels.

  • That question is aimed at human intuition, whereas I’m looking for techniques a computing device can carry out.
  • The answers to that question do lend themselves to methods a computer can handle, but their scratch work involves MUL, DIV, and MOD operations. I’m asking for a MOD-without-DIV operation at the same level of computation as those primitive operations, hopefully with less work than the chip’s internal DIV-and-MOD operation uses. That question provides answers that are inherently orders of magnitude more work. (That MOD-without-DIV operation would be part of the chip’s operations too, but we need theories on how MwD could be done before adding it to a chip design.)
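One candidate for such a MOD-without-DIV primitive is binary long division that keeps only the running remainder and never materializes the quotient — only shifts, compares, and subtractions. A Python sketch (my own illustration; it does essentially the same per-bit work as a hardware divider, so it is an existence proof rather than a speedup):

```python
def mod_without_div(a, b):
    """a % b using only shift, compare, subtract; the quotient is never formed."""
    assert a >= 0 and b > 0
    r = 0
    for i in reversed(range(a.bit_length())):
        r = (r << 1) | ((a >> i) & 1)   # bring down the next bit of a
        if r >= b:
            r -= b                      # the quotient bit would be 1; discard it
    return r
```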
CTMacUser
    (2) is basically the mod operator %, and then (1) is done by checking if a % b = 0 which would mean that b is a divisor of a. The mod operator for positive numbers is usually written with division internally, a - floor(a/b)*b so I don't think there is an "easier" way – user525966 Sep 26 '22 at 01:26
  • Maybe. ;) How big are these numbers? Are the 2 numbers completely arbitrary, or would you like to test a bunch of different numbers against the same divisor. A common approach is to convert the division to multiplication, see https://en.wikipedia.org/wiki/Division_algorithm#Fast_division_methods – PM 2Ring Sep 26 '22 at 01:53
  • As Wikipedia says, "In practice, Wilson's theorem is useless as a primality test". Not only is the modulus calculation slow, you need to do n-2 multiplications. There are much better primality tests, eg Miller-Rabin. Deterministic Miller-Rabin primality test in Python – PM 2Ring Sep 26 '22 at 03:07
  • @user525966, I know about the mod operator. Does it work by an algorithm that also discovers the quotient (which the function then just drops)? I suspect the answer is “yes,” and I’m wondering if an algorithm exists where the answer would be “no” (but that is still practical to use). – CTMacUser Sep 26 '22 at 04:14
  • @PM2Ring, I was thinking of a generator that produces (2, yes), (3, yes), (4, no), etc. It would keep the previous step’s factorial computation in its state, so only 1 multiplication step per iteration. – CTMacUser Sep 26 '22 at 04:19
  • Ah, ok. But that's not practical either, because factorial grows so quickly, so you use a huge amount of RAM for even moderately small n. The most efficient prime generator uses a segmented sieve, but there are other algorithms that are fun to play with, see https://stackoverflow.com/a/10733621/4014959 – PM 2Ring Sep 26 '22 at 05:10
  • That Wilson's theorem is useless as a primality test was already mentioned. Whether $a$ divides $b$ requires just one division, a quite cheap thing. A shortcut is only possible if huge powers are involved in determining $b$, and is sometimes even necessary (for example for Fermat numbers, which would be far too large for a direct calculation). – Peter Sep 26 '22 at 14:01
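The generator sketched in the comments — keeping the running factorial in state so each step costs one multiplication plus the Wilson test — would look something like this in Python. As PM 2Ring points out, the stored factorial grows enormous, which is what makes it impractical:

```python
import itertools

def wilson_generator():
    """Yield (n, is_prime) pairs, keeping (n-1)! in state: one multiply per step."""
    fact = 1    # holds (n-1)!, starting with 1! for n = 2
    n = 2
    while True:
        yield n, fact % n == n - 1   # Wilson: n prime iff (n-1)! ≡ -1 (mod n)
        fact *= n                    # advance the state to n!
        n += 1

print([n for n, prime in itertools.islice(wilson_generator(), 18) if prime])
# [2, 3, 5, 7, 11, 13, 17, 19]
```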

1 Answer


This is a more interesting question than it first appears. State-of-the-art division algorithms use Newton's method to reduce division to multiplication. An upper bound on the time complexity of computing both the quotient $q$ and remainder $r$ in $a = bq + r$ is therefore $O(\log a \log \log a \log \log \log a)$ when using the Schönhage–Strassen algorithm, though I suspect that it's possible to do better if you know that $b$ is very small, i.e. $b \in O(\log a)$.
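To make the Newton's-method reduction concrete, here is a fixed-point integer sketch in Python (64 fractional bits, divisor below $2^{64}$; my own illustration, not a production algorithm). The iteration $x \leftarrow x(2 - bx)$ converges quadratically to $1/b$, and a small correction loop absorbs the rounding of the approximate reciprocal:

```python
# Newton's method turns division into multiplication.  All arithmetic
# below is multiplies, shifts, and add/subtract; no / or % appears.

PREC = 64
ONE = 1 << PREC            # fixed-point representation of 1

def reciprocal(b):
    """Approximate floor(2**PREC / b) by Newton iteration, division-free."""
    assert 0 < b < ONE
    x = 1 << (PREC - b.bit_length())   # initial guess, accurate to ~1 bit
    for _ in range(7):                 # each iteration doubles the correct bits
        x = (x * (2 * ONE - b * x)) >> PREC
    return x

def divmod_newton(a, b):
    """Compute (a // b, a % b) via the approximate reciprocal."""
    q = (a * reciprocal(b)) >> PREC
    r = a - q * b
    while r >= b:          # fix the rounding error of the reciprocal
        q += 1
        r -= b
    while r < 0:
        q -= 1
        r += b
    return q, r
```

Seven iterations suffice here because the initial guess is correct to one bit and Newton iteration squares the relative error each round.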

The optimal cost of division is an open problem, and it's not obvious to me that computing only $r$ couldn't be done more cheaply than full division. On the other hand, I'm not aware of more efficient specialized algorithms for general $a$ and $b$. Obviously there is a lower bound of $\Omega(\log a)$ on computing both quotients and remainders, since any algorithm must at least read the input bits.
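One concrete setting where remainders do come cheaper than general division is Montgomery reduction: for a fixed odd modulus $n$ it computes $t \cdot R^{-1} \bmod n$ (with $R = 2^k$) using only multiplications, additions, and shifts, which is why it dominates modular-exponentiation code. A sketch (mine, not part of the answer above):

```python
# Montgomery reduction (REDC): division-free remainders for a fixed odd
# modulus n, at the price of a stray factor R^{-1} in the result.

def montgomery_setup(n, k=64):
    """One-time precomputation for odd modulus n and R = 2**k > n."""
    assert n % 2 == 1
    R = 1 << k
    n_inv = pow(-n, -1, R)   # -n^{-1} mod R, computed once (Python 3.8+)
    return R, k, n_inv

def montgomery_reduce(t, n, R, k, n_inv):
    """Return t * R^{-1} mod n for 0 <= t < n*R, using no division."""
    m = ((t & (R - 1)) * n_inv) & (R - 1)   # make t + m*n divisible by R
    u = (t + m * n) >> k                    # exact shift, since R | (t + m*n)
    return u - n if u >= n else u
```

The catch is the extra $R^{-1}$ factor: the trick only pays off when many operations share the same modulus, as in modular exponentiation, rather than for a single one-off remainder.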

user7530