
I was watching this MIT lecture on computational complexity, and at around minute 15:00 Erik Demaine starts a demonstration of what is stated in the title of this question. However, I cannot follow his reasoning; in essence, what he says is this:
we can state a decision problem as a string of $0$s and $1$s, which in practice is the truth table of the function.
He goes on to say that a decision problem is an infinite string of bits while a program is a finite string of bits, and up to here I have no problem. What I don't understand is the continuation of the proof from this point on: decision problems are in $\mathbb{R}$ because you can put a decimal point before the string representing the problem, thus obtaining the fractional part of a real,

for example, if you have 0111011010101010101... it could become x.0111011010101010101...

A program is "just" an integer in $\mathbb{N}$ because it is a finite string of bits. The point that I fail to understand is how a decision problem can be comparable to a real number instead of an integer. I mean, if we use the "put a dot in front of the number" argument, couldn't the same reasoning also be applied to the number of possible algorithms that can ever be produced?

Yamar69

3 Answers


Reformulating in a more mathematically precise way, what the lecturer is trying to say is this: any algorithm can be (uniquely) encoded as a finite string of bits, and any finite string of bits (uniquely) encodes a program; hence, there is a bijection between $\mathbb{N}$ and the set of algorithms, so both are countable sets. On the other hand, having fixed an ordering of strings, any decision problem $P$ can be (uniquely) encoded as an infinite string of bits, where the $i$-th bit records whether the $i$-th string is in $P$ or not, and any infinite string of bits (uniquely) encodes a decision problem in the same fashion; hence, there is a bijection between the set of decision problems and the set of infinite bit strings, so both are uncountable sets, just like $\mathbb{R}$.
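
As a loose illustration of the two encodings (the helper names and the palindrome problem below are my own examples, not from the lecture or this answer): a program collapses to a single finite natural number, while a decision problem only ever yields longer and longer prefixes of an infinite bit string.

```python
def program_to_natural(source: str) -> int:
    """Encode a program's source text (a finite string of bits/bytes) as a
    single natural number.  Distinct sources give distinct numbers, so the
    programs can be matched up with (a subset of) the naturals."""
    # A leading marker byte keeps sources beginning with zero bytes distinct.
    return int.from_bytes(b"\x01" + source.encode("utf-8"), byteorder="big")

def characteristic_prefix(problem, inputs):
    """The first few bits of the *infinite* bit string encoding a decision
    problem: bit i is 1 iff the i-th input is a 'yes' instance."""
    return [1 if problem(x) else 0 for x in inputs]

# Example decision problem: "is the binary string a palindrome?"
is_palindrome = lambda s: s == s[::-1]
inputs = ["", "0", "1", "00", "01", "10", "11", "000"]  # some fixed enumeration
print(program_to_natural("print('hello')"))         # one finite natural number
print(characteristic_prefix(is_palindrome, inputs))  # [1, 1, 1, 1, 0, 0, 1, 1]
```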

dkaeae

If I understand you correctly, your question is why a program can be encoded by a natural number while a problem is encoded by a real number. (I assume that you understand the next phase of the proof, which is based on the difference between sets of cardinality $\mathfrak c$ and $\aleph_0$.)

The reason lies in set theory, more specifically in the cardinality of different sets. Count the number of programs: it is the number of finite strings over a specific alphabet or character set (ASCII, for example). This count equals the size of the set $\mathbb{N}$ (the natural numbers), since each string can be identified with the natural number given by its binary representation.
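
For instance, here is a small sketch (the generator name is mine, just for illustration) of listing every finite string over an alphabet as item 0, 1, 2, ..., which is exactly what it means for the set of program texts to be no bigger than $\mathbb{N}$:

```python
from itertools import count, islice, product

def all_strings(alphabet="01"):
    """Yield every finite string over `alphabet`, shortest first.
    This pairs the strings up with 0, 1, 2, ..., i.e. it enumerates them,
    so there are only countably many of them."""
    yield ""
    for length in count(1):
        for chars in product(alphabet, repeat=length):
            yield "".join(chars)

print(list(islice(all_strings(), 7)))  # ['', '0', '1', '00', '01', '10', '11']
```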

But counting the number of functions from the natural numbers (or the strings that represent them) to $\{0,1\}$ is a whole different story: here we are dealing with a difference in size between two infinite sets, and this set of functions is strictly larger. There is a nice proof based on the fact that no enumeration of these functions by $\mathbb N$ can be "onto", which yields the cardinality difference. You can read the proof here.
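
The proof referred to is (presumably) Cantor's diagonal argument; a minimal sketch of the diagonal step, using a made-up listing for the sake of the example, looks like this:

```python
def diagonal(listing):
    """Cantor's diagonal trick: given any purported enumeration f_0, f_1, ...
    of functions N -> {0,1} (here `listing(n)` returns f_n), build a function
    that differs from every f_n at input n.  Hence no listing is 'onto'."""
    return lambda n: 1 - listing(n)(n)

# A hypothetical listing: f_n(k) = the n-th bit of k.
listing = lambda n: (lambda k: (k >> n) & 1)
d = diagonal(listing)
assert all(d(n) != listing(n)(n) for n in range(100))  # d is missing from the listing
```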

royashcenazi

Every algorithm can be described by a finite string, and so there are only countably many algorithms. In contrast, we can describe every decision problem as an infinite decimal in base 2, and moreover this is a surjective mapping: every number in $[0,1]$ can be "decoded" into a decision problem. Therefore there are uncountably many decision problems.

The decoding argument doesn't work for algorithms — while every algorithm corresponds to a finite decimal, this doesn't cover all of $[0,1]$, but only a countable subset of it.
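
As a rough sketch of the "decoding" direction (the fixed enumeration of strings by an index $i$ and the function names are my own assumptions, not part of this answer): every number in $[0,1)$ determines a decision problem via its binary digits.

```python
from fractions import Fraction

def bit_of(x: Fraction, i: int) -> int:
    """The i-th binary digit after the point of x in [0, 1)."""
    return int(x * 2 ** (i + 1)) % 2

def problem_from_real(x: Fraction):
    """Decode a number in [0, 1) into a decision problem: the i-th string
    (in some fixed enumeration) is a 'yes' instance iff the i-th binary
    digit of x is 1.  Every such number thus yields a decision problem."""
    return lambda i: bit_of(x, i) == 1

p = problem_from_real(Fraction(5, 16))  # binary 0.0101
print([p(i) for i in range(4)])         # [False, True, False, True]
```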

Yuval Filmus