55

Reading discussions of the recent quantum supremacy experiment by Google, I noticed that a lot of time and effort (in the experiment itself, but also in the excellent blog posts by Scott Aaronson and others explaining the results) is spent on verifying that the quantum computer did indeed compute the thing we believe it to have computed.

From a naive point of view this is completely understandable: the essence of any quantum supremacy experiment is that you have the quantum computer perform a task that is hard for a classical computer to achieve, so surely it would also be hard for the classical computer to verify that the quantum computer did complete the task we gave it, right?

Well, no. About the first thing you learn when starting to read blogs or talk to people about computational complexity is that, counter-intuitive as it may seem, there exist problems that are hard to solve, but for which it is easy to verify the validity of a given solution: the so-called NP problems.

Thus it seems that Google could have saved themselves and others a lot of time by using one of these problems for their quantum supremacy experiment rather than the one they did. So my question is: why didn't they?

An answer for the special case of the NP problem of factoring is given in this very nice answer to a different question: https://cs.stackexchange.com/a/116360/26301. Paraphrasing: the regime where the quantum algorithm starts to outperform the best known classical algorithm begins at a problem size that requires more than the 53 qubits currently available.

So my follow-up question is: does this answer for the special case extend to all NP problems where quantum speedups are expected, or is it specific to factoring? And in the first case: is there a fundamental reason, related to the nature of NP, why quantum supremacy 'kicks in later' for NP problems than for sampling problems, or is it just that better classical algorithms are available for NP problems because they are more famous?

shortstheory
Vincent

3 Answers

75

there exist problems that are hard to solve, but for which it is easy to verify the validity of a given solution: the so-called NP problems.

This statement is wrong. There are many NP problems which are easy to solve. "NP" simply means "easy to verify". It does not mean hard to solve.

What you are probably thinking of is the NP-complete problems, a subset of the NP problems for which we have very, very good evidence to think they are hard. However, quantum computers are not expected to be able to solve NP-complete problems significantly more "easily" than regular computers.

Factoring is also thought to be hard, but the evidence for this is only "very good" and not "very, very good" (in other words: factoring is likely not NP-complete). Factoring is one of the very few natural problems that fall in between: not NP-complete, but also not easy.
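To make "easy to verify, (believed) hard to solve" concrete, here is a small sketch of my own (purely illustrative, nothing to do with the experiment): checking a claimed factorization is a single multiplication, while finding one naively takes time that grows exponentially in the bit length of the number.

```python
# Illustration: factoring is easy to verify, believed hard to solve classically.

def verify_factorization(n, p, q):
    """Checking a proposed solution is a single multiplication."""
    return 1 < p < n and 1 < q < n and p * q == n

def factor_by_trial_division(n):
    """Naive solving: ~sqrt(n) steps, i.e. exponential in the bit length of n.
    (The best known classical algorithms do better, but are still
    super-polynomial.)"""
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d, n // d
        d += 1
    return None  # n is prime

n = 101 * 103
print(verify_factorization(n, 101, 103))  # True, instantly, even for huge n
print(factor_by_trial_division(n))        # fine here, hopeless for 2048-bit n
```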

The list of problems that we know are easy to verify, easy to solve on a quantum computer, but hard classically is even shorter. In fact, I do not know of any problem other than factoring (and the very closely related discrete logarithm problem) with this property.

Moreover, any easy-to-verify problem would likely have the same issue as factoring: $53$ qubits is not that many, and while $2^{53}$ is huge, it is just within reach of classical computing. $2^{53}$ is less than $10^{16}$, and most classical computers can execute on the order of $10^9$ operations per second, so we could run through all possibilities in about a third of a year on a single classical desktop computer.
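A quick back-of-envelope check of those numbers (assuming, as above, roughly $10^9$ operations per second on a single machine):

```python
# Back-of-envelope: brute-forcing 2^53 possibilities on one classical desktop.
possibilities = 2 ** 53        # about 9.0e15, indeed less than 1e16
ops_per_second = 1e9           # rough single-machine throughput assumed above

seconds = possibilities / ops_per_second
years = seconds / (3600 * 24 * 365)
print(f"{possibilities:.2e} possibilities -> {years:.2f} years")
# ~0.29 years, i.e. roughly a third of a year, as stated.
```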

Quantum computers have very few applications which they're known to be good at, and are essentially useless for most hard NP problems.

Tom van der Zanden
9

Because then their experiment would have been a complete failure.

As I wrote in an answer on a sister site (which was somewhat poorly received there, but I think your question validates what I was saying about how a general audience interprets it):

[the hyping of the result] plays on a discrepancy between what they mean by quantum supremacy (QS) and what people tend to think QS means.

What I find most people think QS is supposed to mean, and what I assumed it meant until a month or so ago, was that there exists a computable problem (in the Church-Turing thesis (CTT) sense of computation) and an actual quantum computer, such that, at some scales, the problem is tractable on the quantum computer but intractable on all classical computers.

The problem the Google QC folks have demonstrated is not computation in the CTT sense. It is a physical process of sampling that involves computations as part of the process, and, as with any physical process, it can be simulated approximately by computation. They have good reason to believe (proof? I'm not sure, but it should reasonably be assumed true by default anyway) that computation to simulate the process is going to be intractably slow. This is not surprising at all: it's a fundamental consequence of quantum mechanics that lots of physical processes have that property.

7

Quantum computing is not a piece of magic, and there seems to be a widespread misconception about the power of quantum computers. I am by no means an expert in this field, but as far as I know QC is very well suited to computational problems that have some kind of cyclic structure. This happens to be true for problems like integer factoring and the discrete logarithm problem; thus, if quantum computers become practical, asymmetric ciphers like RSA and any kind of DSA (including ECDSA) will become obsolete. However, it is neither known nor widely believed that QC breaks symmetric ciphers (although the key size has to be increased) or, more generally, that it inverts generic one-way functions. QC is also not known to break lattice-based or code-based asymmetric crypto; therefore, NTRU, McEliece, etc. apparently survive QC.
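If I understand the "cyclic structure" point correctly, it refers to period finding: the classically expensive step that a quantum computer (via Shor's algorithm) speeds up is finding the period $r$ of $f(x) = a^x \bmod N$; once $r$ is known, the factors of $N$ follow by easy classical arithmetic. Here is a rough sketch of that classical part, with the period found by brute force (this brute-force step is exactly what scales exponentially and what the quantum part replaces):

```python
from math import gcd

def find_period_brute_force(a, N):
    """Smallest r > 0 with a^r = 1 (mod N); exponential in the bit length of N.
    This is the step a quantum computer does efficiently."""
    x, r = a % N, 1
    while x != 1:
        x = (x * a) % N
        r += 1
    return r

def factor_via_period(N, a):
    """Classical post-processing: turn the period of a mod N into factors of N."""
    g = gcd(a, N)
    if g != 1:
        return g, N // g                  # lucky: a already shares a factor with N
    r = find_period_brute_force(a, N)
    if r % 2 == 1:
        return None                       # odd period: retry with a different a
    y = pow(a, r // 2, N)
    if y == N - 1:
        return None                       # trivial square root: retry
    p = gcd(y - 1, N)
    return p, N // p

print(factor_via_period(15, 7))  # (3, 5): the period of 7 modulo 15 is 4
```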

Therefore, even if you have a fully functional QC, you cannot arbitrarily pick any computational problem and conjure the solution instantly.

countermode