
Consider an integer of arbitrary length. To find the number of digits it has, here is a simple, well-known algorithm:

count = 1;
while ((value = value / 10) != 0)   /* drop one decimal digit per iteration */
    count++;

Now, what is the time cost of this algorithm? If we assume that k is the number of bits (not digits) required to represent the number, then it is O(k). But this algorithm doesn't show clean linear behavior: if I double the number of bits in the integer, the number of iterations of the while loop doesn't necessarily double. Instead, an increase of 3-4 bits in the input increases the iteration count by 1.
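
To see the staircase concretely, here is a self-contained C sketch (the helper name count_digits and the choice of unsigned long long are mine, not part of the question) that runs the same loop on the largest value of each bit length; the printed digit count goes up by one only every 3-4 extra bits, because one decimal digit is worth log2(10) ≈ 3.32 bits.

#include <stdio.h>

/* Same loop as above: count decimal digits by repeated division by 10. */
static unsigned count_digits(unsigned long long value)
{
    unsigned count = 1;
    while ((value = value / 10) != 0)
        count++;
    return count;
}

int main(void)
{
    /* For each bit length k, test the largest k-bit value, 2^k - 1. */
    for (unsigned k = 1; k < 64; k++) {
        unsigned long long largest = (1ULL << k) - 1;
        printf("k = %2u bits -> %2u digits\n", k, count_digits(largest));
    }
    return 0;
}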

I know that the number of digits in a number $N$ is $\lfloor \log_{10} N \rfloor + 1$, but then I wouldn't be expressing the runtime in terms of the number of bits, which is what I need.
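
For example, $N = 12345$ gives $\lfloor \log_{10} 12345 \rfloor + 1 = \lfloor 4.09\ldots \rfloor + 1 = 5$ digits, which is exactly the count the loop above produces.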

In such cases, how do I calculate the runtime precisely?

Raphael

1 Answer


There are two answers here:

  1. $\log_{10} n = \log_{10} 2 \cdot \log_2 n$, and $\log_2 n$ is (almost) the length of $n$ in bits; see the worked conversion below this list.

  2. We don't care about the exact complexity. The point of big O notation is to hide multiplicative constants. A function is $O(\log_2 n)$ iff it is $O(\log_{10} n)$. Moreover, the exact time complexity depends on the exact computation model used (i.e. compiler, CPU, operating system and environment – presumably this code snippet runs on actual hardware and its execution time is measured in seconds), which you haven't specified anyway. Again, the point of big O notation is that these details don't matter since they only affect the complexity by a multiplicative constant.
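
Spelling out point 1 as a worked conversion (writing $k$ for the bit length, as in the question):

$$\lfloor \log_{10} n \rfloor + 1 \;=\; \log_{10} 2 \cdot \log_2 n + O(1) \;\approx\; 0.301\,k + O(1),$$

so the loop performs $\Theta(k)$ iterations, and $O(\log_{10} n)$, $O(\log_2 n)$ and $O(k)$ all describe the same class of functions; the choice of base only changes the hidden multiplicative constant $\log_{10} 2 \approx 0.301$.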

Another point you should consider is that in practice, either value has a fixed length or it has variable length (bignum). If it has fixed length (say it's a 32-bit integer), then the correct parameter here is not the length (which is constant) but the value itself. If it has variable length, then the division instruction cannot reasonably be assumed to take constant time.
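
To illustrate the fixed-length case, here is a sketch (the name count_digits_u64 is mine, not from the answer): for a 64-bit value the count can never exceed 20, since $2^{64}-1$ has 20 decimal digits, so in the machine-word model the loop does a constant amount of work. It is only when value is a bignum, where each division itself costs time growing with the length of the number, that the analysis in terms of bit length becomes interesting.

#include <stdio.h>
#include <stdint.h>
#include <inttypes.h>

/* Fixed-width variant: the input has at most 64 bits, so the result is
   at most 20 and the loop does a bounded, constant amount of work. */
static unsigned count_digits_u64(uint64_t value)
{
    unsigned count = 1;
    while ((value /= 10) != 0)
        count++;
    return count;
}

int main(void)
{
    /* Prints "18446744073709551615 has 20 digits". */
    printf("%" PRIu64 " has %u digits\n",
           UINT64_MAX, count_digits_u64(UINT64_MAX));
    return 0;
}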

Yuval Filmus