
Wikipedia lists the time complexity of addition as $n$, where $n$ is the number of bits.

Is this a rigid theoretical lower bound, or is it just the complexity of the fastest currently known algorithm? I want to know because the complexity of addition underlies all other arithmetic operations and every algorithm that uses them.

Is it theoretically impossible to get an addition algorithm that runs in $o(n)$, or are we bound to linear complexity for addition?

Tobi Alafin

5 Answers


If your algorithm uses asymptotically less than $n$ time, then it does not have enough time to read all the digits of the numbers it is adding. Imagine you are handling very large numbers (stored, for example, in 8 MB text files). Of course, addition can be done very quickly compared to the value of the numbers: it runs in $\mathcal{O}(\log N)$ time, where $N$ is the value of the sum.
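As an illustration (a minimal sketch of my own, not part of the original answer), here is grade-school binary addition on explicit bit lists; it inspects each input bit exactly once, which is exactly where the $\Theta(n)$ comes from:

```python
def ripple_add(a_bits, b_bits):
    """Grade-school binary addition, least-significant bit first.

    Every input bit is inspected exactly once, so the running time
    is Theta(n) for n-bit inputs.
    """
    out, carry = [], 0
    for a, b in zip(a_bits, b_bits):
        out.append(a ^ b ^ carry)
        carry = (a & b) | (carry & (a ^ b))
    out.append(carry)
    return out

# 5 + 6 = 11, bits written LSB first
print(ripple_add([1, 0, 1], [0, 1, 1]))  # [1, 1, 0, 1]
```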

This does not mean you cannot speed things up a little: if your processor handles 32 bits per operation, then you use $\frac{n}{32}$ time, but that is still $\mathcal{O}(n)$ and not $o(n)$.
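To make the constant-factor point concrete, here is a hypothetical sketch (assuming, for simplicity, that $n$ is a multiple of the word size $w$) that adds $w$ bits per iteration; the loop runs $n/w$ times, which is still $\mathcal{O}(n)$:

```python
def add_in_chunks(a, b, n, w=32):
    """Add two n-bit integers w bits at a time (n assumed to be a
    multiple of w). The loop runs n/w times: a constant-factor
    speedup over bit-by-bit addition, but still O(n), not o(n)."""
    mask, result, carry = (1 << w) - 1, 0, 0
    for i in range(0, n, w):
        chunk = ((a >> i) & mask) + ((b >> i) & mask) + carry
        result |= (chunk & mask) << i
        carry = chunk >> w
    return result | (carry << n)

print(add_in_chunks(2**63, 2**63, n=64))  # 2**64
```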

Lieuwe Vinkhuijzen

In order for complexity analysis to make any formal sense at all, you have to specify a formal computational model within which the algorithm in question is being executed, or, at the very least, a cost model specifying what the basic operations are and what they cost.

In most contexts, arithmetic operations are assumed to take $\Theta(1)$ time. This is usually reasonable, as we are interested in algorithmic complexity irrespective of the size of the numbers involved. This is called the uniform cost model.

If numbers can grow unbounded, or if we are interested in analyzing the arithmetic operations themselves, they are taken to have cost $\Theta(|x|)$, proportional to the size of the input; this is the logarithmic cost model.

Now, can operations have a cost that's less than that? Possibly, but you'll have to formally define a computational model in which that can happen.
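To make the two cost models concrete, here is a small sketch of my own (the function name is made up): it sums a list while charging each addition either unit cost (uniform model) or a cost proportional to the operands' bit lengths (logarithmic model):

```python
def sum_with_costs(xs):
    """Sum a list while tracking cost under two cost models:
    uniform (each + costs 1) and logarithmic (each + costs the
    bit length of its larger operand)."""
    total, uniform_cost, bit_cost = 0, 0, 0
    for x in xs:
        uniform_cost += 1
        bit_cost += max(total.bit_length(), x.bit_length(), 1)
        total += x
    return total, uniform_cost, bit_cost

# Ten additions of 100-bit numbers: uniform cost is 10,
# logarithmic cost is roughly 10 * 101.
_, u, b = sum_with_costs([2**100] * 10)
print(u, b)
```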

quicksort

The input to addition is two arbitrary numbers. Since they are arbitrary, any correct algorithm must read every bit, and therefore takes $\Omega(n)$ time.

Imagine your algorithm successfully adds 1010100110 and 0010010110 without reading every bit. For your algorithm to handle arbitrary inputs, I should be able to flip any one of those bits and have the algorithm still output the correct (now different) sum. But if your algorithm doesn't read every bit, how could it tell that the flipped input differed from the original?
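This adversary argument can even be turned into a small test harness (a hypothetical sketch of mine, using Python's built-in `+` as the adder): for a correct adder, flipping any single input bit must change the output.

```python
import random

def every_bit_matters(adder, n, trials=100):
    """Adversary-style check: flipping any one input bit must change
    the output of a correct n-bit adder."""
    for _ in range(trials):
        a, b = random.getrandbits(n), random.getrandbits(n)
        s = adder(a, b)
        for i in range(n):
            # Flipping bit i of a changes a by 2**i, so the sum must change.
            assert adder(a ^ (1 << i), b) != s
            assert adder(a, b ^ (1 << i)) != s
    return True

print(every_bit_matters(lambda a, b: a + b, n=16))  # True
```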

David Richerby

When considering the time complexity of a computation such as adding two $\lg n$-bit numbers $x$ and $y$, it is often assumed that the bits of $x$ and $y$ are available all at once, unless the algorithm in question is bit-serial and the bits arrive over time. So, while it is true that every bit counts and we can't ignore any given bit, one doesn't need to spend $O(\lg n)$ time waiting for the bits of $x$ and $y$ to arrive before the computation begins, or between successive bit additions. With this convention in place, $x$ and $y$ can be added by a Brent-Kung prefix adder in $O(\lg\lg n)$ time using constant fan-in and constant fan-out gates. Brent-Kung uses a particular prefix gate with two inputs, two outputs, and $O(1)$ gate delay to achieve this time complexity.
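As a rough illustration of the parallel-prefix idea (this sketch of mine uses the Kogge-Stone recurrence, a sibling of Brent-Kung that is easier to simulate, not the Brent-Kung layout itself): carries are computed from generate/propagate pairs combined at doubling distances, so only logarithmically many rounds are needed, each of which a parallel machine can do in constant time.

```python
def prefix_adder(a_bits, b_bits):
    """Parallel-prefix addition (Kogge-Stone recurrence), LSB first.

    Carries are derived from (generate, propagate) pairs combined at
    doubling distances, so only O(log n) rounds are needed; a parallel
    machine does each round in constant time, although this sequential
    simulation still performs O(n log n) total work.
    Assumes the two inputs have equal length.
    """
    n = len(a_bits)
    p = [a ^ b for a, b in zip(a_bits, b_bits)]   # propagate bits
    g = [a & b for a, b in zip(a_bits, b_bits)]   # generate bits
    G, P = g[:], p[:]
    d = 1
    while d < n:
        G_prev, P_prev = G[:], P[:]               # snapshot = one parallel round
        for i in range(d, n):
            G[i] = G_prev[i] | (P_prev[i] & G_prev[i - d])
            P[i] = P_prev[i] & P_prev[i - d]
        d *= 2
    carries = [0] + G[:-1]                        # carry into each position
    s = [pi ^ ci for pi, ci in zip(p, carries)]
    s.append(G[-1])                               # final carry out
    return s

print(prefix_adder([1, 0, 1], [0, 1, 1]))  # 5 + 6 = 11 -> [1, 1, 0, 1]
```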

One other point is that the time and space complexities of an algorithm or computation cannot be expressed in terms of implementation-specific figures such as address or data bus widths, RAM or register sizes (numbers of bits, bytes, or words), clock rates, etc., of a particular physical system made of a constant number of components. Such complexities involve variables such as $n$, $\lg n$, etc., whereas an implementation of an algorithm on a specific piece of hardware can be timed down to a nanosecond if all system and component time and space values are accurately predictable. As far as time and space complexities are concerned, every physical system built out of a constant number of components has $O(1)$ time and $O(1)$ space complexity.

The classic references on this sort of question are:

(1) Winograd, Shmuel. "On the time required to perform addition." Journal of the ACM (JACM) 12.2 (1965): 277-285.

(2) Brent, Richard P., and Hsiang T. Kung. "A regular layout for parallel adders." IEEE Transactions on Computers C-31.3 (1982): 260-264.

The $O(\lg \lg n)$ time complexity of the Brent-Kung prefix adder is consistent with Winograd's lower bound given in (1).

AYO

To extend the other answers: when we are interested in average-case time complexity, it is possible to get an addition algorithm that adds in $\log_2 n + O(1)$ steps on average (assuming certain bitwise operations take $O(1)$ time); see [1] and [2], and the simulation sketch after the references below. And modern computer architectures add in parallel, so the number of steps to add two $n$-bit numbers using a hardware adder is almost always better than $O(n)$, assuming we have polynomially many processors; see AYO's answer, for example.

[1]: Richard Beigel, William Gasarch, Ming Li, and Louxin Zhang. "Addition in $\log_2 n+O(1)$ Steps on Average: A Simple Analysis"

[2]: G. Schay, "How to add fast–on average". American Mathematical Monthly, 102:8 (1995), 725-730.
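The average-case phenomenon behind [1] and [2] is easy to see empirically: in a random addition, the longest carry-propagation run is only about $\log_2 n$, and that run length is a reasonable proxy for the number of steps an adder that stops when carries settle would need. A quick simulation (my own sketch, not taken from the papers):

```python
import math
import random

def longest_carry_run(a, b, n):
    """Longest stretch of consecutive positions whose carry-out is 1
    when adding two n-bit numbers with a ripple-carry adder."""
    carry, run, best = 0, 0, 0
    for i in range(n):
        ai, bi = (a >> i) & 1, (b >> i) & 1
        carry = (ai & bi) | ((ai ^ bi) & carry)
        run = run + 1 if carry else 0
        best = max(best, run)
    return best

n, trials = 1024, 2000
avg = sum(longest_carry_run(random.getrandbits(n), random.getrandbits(n), n)
          for _ in range(trials)) / trials
print(f"average longest carry run: {avg:.2f}, log2(n) = {math.log2(n):.2f}")
```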