10

For years I've been told (and have myself been advocating) that problems that can be solved in polynomial time are "easy". But now I realize that I don't know the exact reason why this is so.

Intuitively, I can guess at the relation between problems in P and "easy" problems, and experience tells me that problems in P are indeed easy in practice. However, using only mathematical terms and theoretical arguments, why is P the class of easy problems?

jack

2 Answers

8

I don't think it is possible to argue strictly in mathematical or theoretical terms alone; much of it comes down to heuristics.
Of course an algorithm with runtime $O(n^{1000})$, or something similarly ridiculous, would not be "easy", and in most practical cases it would be worse than an algorithm of modest exponential growth. Yet such algorithms just don't show up in practice. And if an algorithm initially has a rather high polynomial runtime, then more often than not it can be improved dramatically, which means the problem is a lot less "difficult" than the first exponent suggests.
As an example, take the AKS algorithm (which tests whether a given number is prime). It started out at $O(n^{12})$ (if I recall correctly), making it one of the polynomial-time algorithms with the largest exponent, but it was improved to $O(n^{6+\epsilon})$ within a fairly short time. Very fast probabilistic algorithms for the primality problem exist as well, which shows that proving primality is a lot easier than, say, your favorite NP-complete problem (assuming $P\neq NP$).
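
To make the "very fast probabilistic algorithms" point concrete, here is a minimal Python sketch of the Miller-Rabin test (the function name and the number of rounds are arbitrary choices): each round is dominated by a modular exponentiation, which costs time polynomial in the number of digits, and the error probability is at most $4^{-\text{rounds}}$.

```python
import random

def is_probable_prime(n, rounds=20):
    """Miller-Rabin test: False means definitely composite,
    True means prime with error probability at most 4**(-rounds)."""
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37):
        if n % p == 0:          # handle small cases directly
            return n == p
    # write n - 1 = d * 2**r with d odd
    d, r = n - 1, 0
    while d % 2 == 0:
        d //= 2
        r += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)        # modular exponentiation: polynomial in log n
        if x in (1, n - 1):
            continue
        for _ in range(r - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False        # 'a' witnesses that n is composite
    return True

print(is_probable_prime(2**127 - 1))  # True: 2^127 - 1 is a Mersenne prime
print(is_probable_prime(2**128 + 1))  # False: this Fermat number is composite
```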

Of course, all of the above could turn out to be completely wrong. Maybe there exists a polynomial-time algorithm for every NP-complete problem, but with no "practical" exponent. Knuth advocates that scenario; you might want to read question 17 here.

john_leo
1

I most often see complexity applied to optimization problems. There are then two measures for the "size" of a problem: the number $n$ of variables for which optimal values must be determined, and $L$, the number of bits used to represent the problem instance.

Then a problem in P has an operation count that is bounded above by a polynomial in $n$ and $L$; in other words, there exist constants $c$, $j$ and $k$ such that, for $n$ and $L$ sufficiently large, the number of operations needed to solve the problem is at most $cn^jL^k$. On the other hand, there are provably exponential problems (this does not, at present, include the NP-complete problems), that is, problems for which the operation count exceeds $Ce^{L+n}$ on some instances. If $n$ and $L$ are large enough, this dominates $cn^jL^k$. Of course, $j$ and $k$ may be large, e.g., $j=1000$, in which case the distinction is only theoretical.
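
One way to see why the exponential bound must eventually dominate is to compare logarithms:
$$\ln\!\big(Ce^{n+L}\big) = \ln C + n + L, \qquad \ln\!\big(c\,n^{j}L^{k}\big) = \ln c + j\ln n + k\ln L .$$
The left-hand side grows linearly in $n+L$, the right-hand side only logarithmically, so the exponential count wins for all sufficiently large instances; but when $j=1000$ the crossover happens only at sizes where both counts are already hopelessly out of reach, which is why the distinction is then only theoretical.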

However, experience has shown that once a problem or problem class has been shown to be in P, the exponents have quickly been lowered. For instance, in 1979 Khachiyan described an algorithm to solve linear programs with complexity $O(n^6L^2)$. Just five years later Karmarkar lowered that to $O(n^{3.5}L^2)$. Khachiyan's method is not really useful for computations, but Karmarkar's method is. Hence there is a general expectation that once a problem has been proven to be in P, even with a large exponent on $n$, algorithmic improvements and new approaches will lower the exponent. In addition, no problem in P has been identified to date whose exponent truly is large. (I seem to remember reading about problems for which $j=12$.)
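
For a sense of what "useful for computations" means, the sketch below solves a tiny made-up linear program with SciPy's `linprog`, which calls the HiGHS solvers (modern simplex and interior-point codes, the latter in the lineage that Karmarkar's work started) rather than Karmarkar's original algorithm:

```python
import numpy as np
from scipy.optimize import linprog

# Made-up toy LP:  maximize x1 + 2*x2  subject to
#   x1 +   x2 <= 4
#   x1 + 3*x2 <= 6,    x1, x2 >= 0.
# linprog minimizes, so we negate the objective.
c = np.array([-1.0, -2.0])
A_ub = np.array([[1.0, 1.0],
                 [1.0, 3.0]])
b_ub = np.array([4.0, 6.0])

res = linprog(c, A_ub=A_ub, b_ub=b_ub,
              bounds=[(0, None), (0, None)], method="highs")

print(res.x)     # optimal point, here (3, 1)
print(-res.fun)  # optimal objective value, here 5
```

Solvers of this kind routinely handle linear programs far larger than this toy instance, which is what the practical value of membership in P looks like here.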

user3697176