
I have a high-level understanding of the $P=NP$ problem and I understand that if it were absolutely "proven" to be true with a provided solution, it would open the door for solving numerous problems within the realm of computer science.

My question is, if someone were to publish an indisputable, constructive proof of $P=NP$, what are some of the immediate effects we would see from such a discovery?

I'm not asking for opinionated views of what the world would look like in 5-10 years. Instead, it is my understanding that this problem is so fundamental that a solution could radically change the way we compute... many things (yeah, this is where my ignorance is showing...) that we can't easily calculate today.

What kind of near-immediate effect would a thorough, accurate, and constructive proof of $P=NP$ have on the practical world?

RLH

9 Answers


People have given good answers assuming that $P=NP$ with some really large constant. I'm going to play the optimist and assume that we find a proof of $P=NP$ with a tractably small constant. Perhaps not likely, but I'm going to try to give some insight into what sorts of things would happen if we could efficiently solve all $NP$ problems.

  • Compilers: Some computer programs would get slightly faster, since compilers use graph coloring for register allocation. Exact allocation would become feasible even with large numbers of registers: compilers that currently settle for approximate solutions (e.g., via chordal graphs) would produce better output, and those already computing exact solutions would get faster.

  • Facility location: Businesses would be able to find the optimal locations for the factories and supply depots that ship to their stores, even with possibly thousands of stores and factories. This would likely not be a huge improvement over modern approximations, but it would reduce costs.

  • Buying plane tickets: Airline prices are weird since they don't obey the triangle inequality. Sometimes it's cheaper to fly A -> B -> C than directly A -> C, something that doesn't come up when modelling physical distances. It would be easy to make a website that finds the absolute cheapest sequence of flights visiting some set of cities, starting and ending in your hometown.

  • Circuit design: Electrical circuits on a chip are basically Boolean formulas. Things like circuit minimization could be computed exactly and efficiently, so our hardware would get a bit more efficient.

  • Scheduling: Mad that your school put two of your exams at the same time? If $P=NP$ your school could either minimize the number of time slots needed so that no student has a conflict, or, given a fixed number of time slots, minimize the number of conflicts (a small sketch of this appears below).

This is just a sampling of practical applications that we'd see if $NP$-completeness weren't a barrier. I'm sure I've missed many, but if the given construction had a good constant, the implications would be far reaching.
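To make the scheduling example concrete (this sketch is my own illustration, not part of the original answer; all names and numbers are invented): exam timetabling is graph coloring, where exams are vertices, exams sharing a student are joined by an edge, and time slots are colors. Today the best we can do in general is an exponential search like the one below; a constructive $P=NP$ proof with a small constant would let a polynomial-time routine sit behind the same interface.

    # Hypothetical illustration: exam scheduling as graph coloring, solved by
    # brute force. With a practical P=NP algorithm, the exponential loop below
    # could be replaced by a polynomial-time procedure answering the same query.
    from itertools import product

    def find_schedule(exams, conflicts, num_slots):
        """Try every assignment of exams to time slots (exponential in len(exams))."""
        for assignment in product(range(num_slots), repeat=len(exams)):
            slot = dict(zip(exams, assignment))
            if all(slot[a] != slot[b] for a, b in conflicts):
                return slot          # conflict-free timetable found
        return None                  # impossible with this many slots

    exams = ["algebra", "physics", "logic", "history"]
    conflicts = [("algebra", "physics"), ("physics", "logic"), ("logic", "algebra")]
    print(find_schedule(exams, conflicts, 3))   # e.g. {'algebra': 0, 'physics': 1, ...}

Minimizing the number of slots would then just mean calling this with num_slots = 1, 2, 3, ... until it succeeds.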

Joey Eremondi

We won't necessarily see any effects. Suppose that somebody finds an algorithm that solves 3SAT on $n$ variables in $2^{100} n$ basic operations. You won't be able to run this algorithm on any instance, since it takes too long. Or suppose that she finds an algorithm running in $n^{100}$ basic operations. We will only be able to use it on 3SAT instances with a single variable, since for more variables it takes too long.
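To put those hypothetical constants in perspective (a rough back-of-the-envelope estimate of mine, assuming a machine doing $10^{12}$ basic operations per second): $2^{100} \approx 1.3 \times 10^{30}$, so even for $n=1$ the first algorithm needs about $1.3 \times 10^{18}$ seconds, on the order of $10^{10}$ years; and already for $n=2$ the second algorithm needs $2^{100}$ operations, the same hopeless figure.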

On the other hand, suppose that P$\neq$NP, and that even the stronger exponential time hypothesis holds. Then 3SAT should be intractable in general. Yet SAT solvers seem to be doing well on many problems arising in practice.

What's happening here? There are several problems with the P vs. NP question:

  1. It only concerns the worst case.
  2. It is only asymptotic.
  3. All polynomial time bounds are the same.

These problems cast doubt on its relevance to the real world. Now, it could happen that some really fast algorithm is found for 3SAT, so fast that even symmetric encryption would become breakable. But I consider this highly unlikely. On the other hand, it is perfectly consistent for P to be different from NP while factoring is practical; that would break certain public-key encryption schemes. This is a likely situation which would have repercussions, but it is unrelated to the P vs. NP question.

The P vs. NP question might be natural from a mathematical point of view, but its practical relevance is doubtful, in my view. Research on the question, on the other hand, might or might not have practical repercussions; it is not guided by this aspect.

Yuval Filmus

A very nice read here is [1], where Impagliazzo considers five possible "worlds" where relationships between complexity classes are different. For instance, in a world called Algorithmica (see Section 2.1), we have that $\sf P = NP$ (or some other "moral equivalent" holds, such as $\sf NP \subseteq BPP$).

In Algorithmica, virtually any optimization problem would be trivial. Programming could consist of specifying the properties a desired output should have in relation to the input, instead of specifying how the computation should be performed. Computers could also find proofs for any theorem in time roughly the length of the proof. (This view is very optimistic of course, and depends on an efficient algorithm for some $\sf NP$-complete problem.)
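As a toy illustration of that "specify the what, not the how" style of programming (my own sketch, not from Impagliazzo's paper): the programmer writes only a polynomial-time checker for the desired output, and a generic engine searches for an output that passes it. Below the engine is plain exhaustive search; in Algorithmica a polynomial-time procedure could stand behind the same interface.

    # Hypothetical sketch of declarative programming under P=NP: supply a
    # checker, get back some output that satisfies it. Today the generic
    # engine is exhaustive search; in Algorithmica it could be efficient.
    from itertools import product

    def find_output(checker, alphabet, max_len):
        """Return a string over `alphabet` of length <= max_len accepted by checker."""
        for length in range(max_len + 1):
            for chars in product(alphabet, repeat=length):
                candidate = "".join(chars)
                if checker(candidate):
                    return candidate
        return None

    # Specification only: "a binary string of length 6 containing exactly
    # three 1s, no two of them adjacent" -- we never say how to build it.
    spec = lambda s: len(s) == 6 and s.count("1") == 3 and "11" not in s
    print(find_output(spec, "01", 6))   # e.g. '010101'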


[1] Russell Impagliazzo. A Personal View of Average-Case Complexity. Complexity Conference, 1995.

Henke
Juho

Even without P=NP, today's computers are unbelievably powerful.


Edit 22 Jan 2018: I have now figured out how I should have "interpreted" the text quoted in the example below. It was my own fault: the inverse element was required to be unique. Here is my input file from 22 Dec 2014 (addinvrig.in) and here is the fixed input file from today (addinvrigFixed.in). The crucial line is (x+(-x))+((-y)+y)=((-y)+y)+(x+(-x)). The power of the automated reasoning tools themselves is still fascinating to me, even if they cannot save me from misinterpreting other people's writings.

Using automated reasoning tools is surprisingly useful for me, when I come across cited theorems where I am unsure how to "interpret" the text:

In 1974, Karvellas [ 3 ] studied additive inverse semiring and he proved the following:
(Karvellas (1974), Theorem 3(ii) and Theorem 7) Take any additive inverse semiring (S, +, ·).
(i) For all $x, y \in S$, $(x \cdot y)' = x' \cdot y = x \cdot y'$ and $x' \cdot y' = x \cdot y$
(ii) If $a \in aS \cap Sa$ for all $a \in S$ then $S$ is additively commutative.

I adapted my prover9 input files for this theorem and was immediately shown a counterexample to the theorem as cited. Slightly modifying the assumptions produced many similar true theorems, which makes it very likely that Karvellas actually stated and proved a correct theorem that was merely cited incorrectly here. Googling for the reference of this theorem only turned up another paper which cited Karvellas even less accurately.


This is a woefully incomplete collection of computer-aided results for specific problems which are intractable in general if P!=NP. Maybe it makes clear to at least some readers that we all tend to underestimate the power of computers in this domain. Many other answers to this question seem to suggest that there would be no big consequences if computers got (slightly) better at solving intractable problems. But computers get better at solving intractable problems all the time (because considerable time and money is spent to make this happen), and this has very real consequences. If P=NP were proved, then perhaps awareness of what computers can actually do (even today) would increase, and more people would use computers to help them with such tasks. (PS: I'm convinced that P!=NP, but this is not relevant for this answer.)

Thomas Klimpel

P vs. NP, technically vs. morally

As Yuval said, it is possible that P=NP is technically true but morally false. P=NP is morally true (even if not necessarily technically) if there is a fast deterministic algorithm (say $O(n^2)$, or maybe even $O(n^{\lg^* n})$ with small constants, nothing like $2^{65536}+2^{1024}n^{256}$) that solves one of the famous NP-complete problems like SAT. IIRC, Russell Impagliazzo once said that he would consider the P vs. NP problem essentially settled if someone showed that SAT can be solved in $O(n^{\lg n})$ time.

So what happens if P=NP is morally true?

This brings us back to why NP is such an interesting class of problems. The general intuition is that we often want to find objects of reasonable size (formalized as polynomial size) that satisfy a property $Q$, where $Q$ is easy to verify (formalized as decidable in polynomial time). This class of problems encompasses almost all problems we are interested in. To go beyond it you need to think about interactions between players, as in games. The number of natural interesting problems that are not in NP (or PH) is very small compared to the number of natural interesting NP problems. If P=NP is morally true then all of these problems can be solved very fast. Just to give examples: you could learn the best weights for very complicated machine learning models, and you could break encryption protocols.
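To make the find/verify asymmetry explicit (a small sketch of mine, not from the answer), here it is for CNF satisfiability: checking a proposed assignment is linear in the size of the formula, while the only general method we have for finding one is essentially a search over all $2^n$ assignments. P=NP being morally true would collapse that gap.

    # Illustration of the gap between verifying and finding a witness for SAT.
    # Clauses are lists of nonzero integers: k means variable k, -k its negation.
    from itertools import product

    def verify(cnf, assignment):
        """Polynomial time: every clause must contain a satisfied literal."""
        return all(any(assignment[abs(lit)] == (lit > 0) for lit in clause)
                   for clause in cnf)

    def find(cnf, num_vars):
        """Exponential time today; 'morally P=NP' would make this fast."""
        for bits in product([False, True], repeat=num_vars):
            assignment = {v + 1: bits[v] for v in range(num_vars)}
            if verify(cnf, assignment):
                return assignment
        return None

    cnf = [[1, -2], [2, 3], [-1, -3]]   # (x1 or not x2) and (x2 or x3) and (not x1 or not x3)
    print(find(cnf, 3))                 # e.g. {1: False, 2: False, 3: True}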

Comparison with the case where P$\neq$NP is morally true

By P$\neq$NP is morally true I mean that we cannot solve SAT (or any of the other famous NP-complete problems) much faster than brute force; in that case these problems cannot be solved in practice for general inputs of even rather small size, say 100.

Does P$\neq$NP being morally true mean that we cannot solve NP-hard problems in practice?

Even if P$\neq$NP is morally true, it is still possible that for some of these problems we are interested not in general worst-case inputs but in a class/distribution of inputs that can be solved efficiently. E.g., it can be the case that solving SAT in the worst case requires exponential time, while in practice we can already solve SAT much faster on many interesting classes of instances, such as those from software verification, hardware verification, etc.

This is kind of similar to solving a simpler problem: e.g., general TSP cannot even be approximated efficiently if P$\neq$NP is morally true, yet we can already approximate the special case of TSP on Euclidean graphs well.

If you know you want to solve an NP-complete problem not on general inputs but on inputs with particular properties you don't need to care about the general problem. You only need to solve the simpler problem. Unfortunately it is often not easy to determine what kind of inputs you care about in practice.

Still, heuristics can perform amazingly well in practice, as we see with SAT, integer programming, or machine learning. (PAC learning with the very simple model of 3-DNFs is intractable if NP$\neq$RP, and lots of experts think that RP=P.)

Kaveh

There are many opinions on the real-world implications of P=NP. As seen in other good responses, there are mainly two schools of thought. One is that a P-time algorithm might be very difficult or infeasible to actually exploit in practice due to "unexpected anomalies" associated with the abstraction, e.g.:

  • the program might be too "large" to actually code
  • there could be a very large constant involved, such that all instances within reach of "terrestrial computation" are still long-running, i.e., the efficiency doesn't "kick in" except for very large instances. It is known that some algorithms actually fit this case, as pointed out by Knuth (question 15):

In general I'm looking for more focus on algorithms that work fast with respect to problems whose size, n, is feasible. Most of today's literature is devoted to algorithms that are asymptotically great, but they are helpful only when n exceeds the size of the universe.

A famous case study was done by Impagliazzo, as cited in another answer here. However, his essay has been extrapolated somewhat in the meantime. Here is a newer reference by an expert that charges into this question with a sort of sci-fi future scenario (ch. 2, p. 11), summarized below:

The Golden Ticket: P, NP, and the Search for the Impossible, by Lance Fortnow

  • "if it turns out that P=NP and we have efficient algorithms for all NP problems, the world will change in ways that will make the Internet seem like a footnote in history. Not only would it be impossible to describe all these changes but the biggest implications of the new technologies would be impossible to predict."

  • Algorithm quickly implemented on supercomputer. Boeing immediately contracts to get a better wing design for a new aircraft allowing it to fly from London to Sydney nonstop.

  • The search algorithm is used to find a new algorithm that is even faster, optimizing the original P=NP solution. The result is 42 million lines of unintelligible code, called the "Urbana algorithm".

  • Algorithm used to find customized cancer treatments/near-cures tailored to individuals. It cures cancer, AIDS, and diabetes, but the common cold remains a mystery.

  • Super scheduling algorithm allows forecasters to "make incredible advances in weather prediction, allowing accurate predictions of temperature, winds, cloud cover, and precipitation nearly a year ahead of time. Similar algorithms now save lives by accurately predicting storms, tornados, hurricanes so people can prepare or evacuate as needed."

  • Highly accurate face recognition

  • Computer can reconstruct 3d models of a scene in realtime from different camera angles

  • Computer algorithms control camera operations for sporting event (instead of human controlled)

  • Automated commentary and replays are generated by the algorithm, including well-chosen angles and statistics, in any language in real time.

  • Fantasy baseball/ sports take on new dimension with highly accurate simulations

  • Food recipe tastes improved by the algorithm

  • The algorithm could be used to "learn just about anything, including what makes good art, popular music, and words that stir the soul. remember that P=NP means that what we can test, we can find. So once you have an algorithmic process to recognize greatness, you can use the algorithm again to quickly find that greatness."

  • Politician uses computer algorithm to recognize great speeches and generate one that fits the patterns. Speech goes viral on the internet.

  • People generate complete works of art from incomplete/unfinished art, e.g. symphonies. They use the algorithm to generate new Beatles/Elvis records, and new art, novels, plays, and poetry, e.g. a romantic comedy starring Humphrey Bogart and Julia Roberts.

  • Amazon can create customized novels for individuals on demand. NBC creates a live-action adventure television series written entirely by computer.

  • Simulated virtual reality in video games allowing any actions of the players instead of a fixed set of possible story lines.

  • Law enforcement uses the algorithm as an "incredible tool in solving crimes, seemingly doing the impossible in tracking down suspects." The algorithm can reconstruct probable faces (for composite sketches) using only DNA; police match a murder suspect via a massive search of a driver's license photo database against the sketch generated from DNA.

Unfortunately, not a lot of what Fortnow outlines above is supported by actual scientific literature, except maybe as an imaginative extrapolation of Impagliazzo's worlds. It would take much more to dissect this point by point, but to summarize, it all appears to be entertaining but fantastic/wishful thinking (or maybe that is his veiled point). In fact, there are scientific principles in conflict with many of the items. And notice that Fortnow is a sports fan and so develops an extended metaphor in that area; could this be more an indication of humans thinking in grooves?

For example, the "butterfly effect" is understood to imply that accurate weather prediction past a (say) several-day horizon is impossible due to "sensitive dependence on initial conditions" (and Fortnow has since acknowledged on his blog the repeated criticism he received on exactly this point). Also, there is a lot of evidence that computers fail at highly subjective human tasks such as generating or identifying influential art (a task that even expert humans do not succeed at consistently).

Actually, the whole question verges on a counterfactual or a false premise. Note that a large majority of expert scientists polled think/believe, despite the lack of incontrovertible proof so far, that P≠NP, and it is natural to compare it with other known laws/restrictions/limitations such as thermodynamics (e.g. the impossibility of perpetual motion/free energy) and statistics, e.g. the "no free lunch" theorem.

So the bottom line is that maybe even expert scientists cannot exactly predict the consequences of P=NP, and maybe the best answer for now is to admit that humans don't have a good answer at this time.

vzn

What kind of near-immediate effect would a thorough, accurate proof of P=NP, with a provided solution, have on the practical world?

There are likely a great many good things that would come of it, but nobody would care.

The problem is that the foundation of (almost) all modern encryption rests on the assumption that certain problems in NP are hard in practice, an assumption a constructive P=NP with usable constants would destroy. The encryption that protects your password as it travels over the internet, and as it's stored in databases. The encryption that protects credit card data as it goes over the internet. The encryption that protects the billions of daily financial transactions that tie our global economy into the giant organism it is.
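To illustrate why (a toy sketch of mine, not part of the answer): an RSA-style public key is a product of two secret primes, and its security rests on the fact that recovering those primes is easy to verify but, as far as we know, hard to do. All we have today is search; a constructive P=NP with usable constants would make the scaled-up version of this search feasible at real key sizes, since factoring is an NP search problem.

    # Toy factoring by trial division: exponential in the bit length of n.
    # Real RSA moduli have hundreds of digits, which is exactly why this is
    # hopeless today -- and exactly what a practical P=NP would change,
    # since a factorization is trivial to verify once found.
    def trial_factor(n):
        d = 2
        while d * d <= n:
            if n % d == 0:
                return d, n // d     # easy to verify: just multiply them back
            d += 1
        return None                  # n is prime

    print(trial_factor(3233))        # (53, 61): a textbook-sized toy modulus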

Best case, P = NP means that stops. People go back to using cash and banks try to record these cash withdrawals on some disconnected medium since transactions to a central office are no longer trustworthy. This lasts for maybe a few months until better encryption is implemented globally. Best case.

Worst case, P = NP means that someone breaks the world. Currency is built on the concept of trust. You value a dollar, because you trust that your neighbor will give you a dollar's worth of goods or services for it. You value your computer saying you have 500 dollars in the bank, because you can swipe your card and get 500 dollars worth of goods and services...

What if you couldn't trust that? If P = NP, someone could impersonate various banks, government, people - and effectively randomize the amount of currency in every account. Delete the currency in every account. Sure, various banks have backups to account for it, but how long has their encryption been broken? Which transactions were good, and which were impersonated? It's impossible to know.

Once that trust is broken, chaos ensues. Any benefits from being able to deal with the Travelling Salesman Problem (for example) are ignored as people struggle to feed themselves.

Reality is likely somewhere in between, but hopefully this paints a big enough picture of how important a problem this is.

Telastyn

A massive drop in the expense of equipment, electricity, and cloud computing would occur. Many things are currently computed by brute force, or by approximations that still involve heavy brute forcing. We would no longer be doing all of those massively parallelized brute-force calculations.

That is by no means the only use of cloud computing, but it is still a noticeable factor in energy use, cloud processing, etc. The energy savings alone could have a noticeable effect on our carbon footprint.

AI would also become much better. We might finally have a computer that can beat the best Go players, and your graphing calculator would beat you at chess.

dsollen

I wouldn't expect a revolution. We have learned to live in a world where, to all appearances, P$\ne$NP, and we have found workarounds where they were critical (such as approximate solutions).

And the conjecture being disproved doesn't mean that all practically useful problems would be solved in a snap of the fingers. For one thing, many of them can still be harder than NP.