
Recent quantum supremacy claims rely, among other things, on extrapolation, which motivates the question in the title, where the word "adversarial" is added to exclude such extrapolation-based quantum supremacy claims.

To clarify the exact meaning of the question, imagine that we are looking for a benchmark "Bench" that is useful in the following scenario. Bob (who does not have access to a quantum computer) wants to verify Eve's claim that she has a moderately powerful quantum computer. They follow this generic protocol:

  1. Bob: what is your score on Bench?
  2. Eve: $N$.
  3. Bob and Eve exchange messages following the Bench protocol, e.g. Bob can send challenges to Eve, and Eve should then send responses to those challenges.
  4. Following the Bench protocol, Bob reaches a conclusion on whether Eve has passed (a minimal sketch of this interface is given below).
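
To make the scenario concrete, here is a minimal, hypothetical Python skeleton of such a protocol (the names `Bench`, `challenge`, `verify`, and `run_test` are mine and not tied to any existing library); the essential point is that Bob needs only classical computation inside `challenge` and `verify`:

```python
# Hypothetical skeleton of the Bench scenario; names are illustrative only.
from abc import ABC, abstractmethod

class Bench(ABC):
    @abstractmethod
    def challenge(self, claimed_score: int):
        """Bob generates the next challenge for Eve's claimed score N."""

    @abstractmethod
    def verify(self, challenge, response) -> bool:
        """Bob checks Eve's response using classical computation only."""

def run_test(bench: Bench, prover, claimed_score: int, rounds: int) -> bool:
    """Step 4: Bob's verdict after exchanging challenges and responses."""
    for _ in range(rounds):
        c = bench.challenge(claimed_score)
        if not bench.verify(c, prover.respond(c)):
            return False
    return True
```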

If Eve indeed has the powerful quantum computer, she should be able to pass this test with high probability (the sensitivity); if she does not, she should not be able to pass it with non-negligible probability ($= 1 - {}$specificity) using "only" a few powerful classical supercomputers/clusters, for sufficiently large $N$. Note that the specific values of sensitivity and specificity are not important: given a benchmark with sensitivity and specificity of $2/3$, one can repeat it, e.g., 193 times and apply majority voting to obtain sensitivity and specificity of $99.9999\%$.
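
For concreteness, here is a short self-contained Python check of that amplification claim (the function name `majority_success` is mine; the numbers $2/3$ and $193$ are the ones quoted above):

```python
from fractions import Fraction
from math import comb

def majority_success(p, n):
    """Probability that a strict majority of n independent trials succeed,
    when each trial succeeds with probability p."""
    p = Fraction(p)
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(n // 2 + 1, n + 1))

# Per-run sensitivity/specificity of 2/3, repeated 193 times with majority voting:
print(float(majority_success(Fraction(2, 3), 193)))  # ~0.999999, as claimed
```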

Architecture-independent means that the benchmark is useful for Eve's claim over a wide range of architectures Eve could use. Finally, by "practical" in the question I mean that the benchmark should be useful for near-term quantum computers. For example, one could use the elliptic curve discrete logarithm problem as a benchmark, but it seems that this would require a quantum circuit depth of the order of $10^{10}$ to beat a classical supercomputer, and thus would not be applicable to near-term quantum computers.

Recently IBM popularized quantum volume as a benchmark, which, similarly to the Sycamore quantum supremacy claim, relies on sampling from the output distribution of a pseudo-random quantum circuit. However, it seems that these benchmarks all rely on a classical simulator or on extrapolation (a digital error model), neither of which satisfies the premise of this question:

  • E.g. if Eve claims a log quantum volume large enough to show quantum supremacy (e.g. 100), then it is not clear how we would verify that, since the verification of the sampling-based tests above relies on our ability to compute the probabilities of the outputs by simulating the quantum circuit on classical hardware.
  • If Eve claims a smaller log quantum volume (e.g. 42), for which the circuit can be simulated on classical hardware, nothing prevents Eve from simulating it herself without having access to a quantum computer.
  • Finally, if Eve's argument relies on extrapolation, nothing prevents her from using classical simulation for small circuit depths (which she can simulate and Bob can verify) and from picking random outputs for larger depths (which Bob cannot verify).

Similarly, "Advanced quantum supremacy using a hybrid algorithm for linear systems of equations" does not seem to be the answer, since it appears to rely on classical simulation to verify the results of the quantum computation.

fiktor

1 Answer


For an "adversarial approach" that may become practical, I am partial to Kahanamoku-Meyer et al.'s "Classically-Verifiable Quantum Advantage from a Computational Bell Test" (arXiv, Nature).

Briefly, Bob the classical skeptic chooses two secret prime numbers $(p_1,p_2)$ and provides their product $N=p_1\times p_2$ to Eve, the alleged quantum prover. Eve then (1) calculates, in superposition, $\sum_x |x\rangle|x^2\bmod N\rangle$; (2) measures the second register in the computational basis, obtaining a string $y$ that collapses the first register into a superposition of the two preimages $x_1,x_2$ with $y\equiv x_1^2\equiv x_2^2 \pmod N$; and (3) depending on a challenge from Bob, measures the first register in either the computational basis or the Hadamard basis, obtaining a string $d$, and sends both $y$ and $d$ to Bob. Bob then uses his knowledge of $p_1,p_2$ to decide whether the string $d$ satisfies a simple Boolean formula with respect to the preimages $x_1,x_2$. It can be shown (subject to fairly standard complexity-theoretic assumptions) that Eve can only reliably pass Bob's test if she either factored $N$ or really held $\sum_x |x\rangle|x^2\bmod N\rangle$ in superposition.
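
To illustrate the asymmetry this exploits, here is a rough Python sketch (my own simplification, not the paper's exact verifier) of the one step only Bob can do cheaply: recovering the square roots of $y$ modulo $N$ from the secret factorization. For simplicity it assumes $p_1 \equiv p_2 \equiv 3 \pmod 4$, so that square roots modulo each prime take a single modular exponentiation; the paper further restricts the domain of $x$ so that only two of the four roots count as the preimages $x_1,x_2$.

```python
def sqrt_mod_prime(a, p):
    """A square root of a modulo a prime p with p % 4 == 3 (if one exists)."""
    r = pow(a, (p + 1) // 4, p)
    return r if (r * r - a) % p == 0 else None

def preimages(y, p1, p2):
    """All x with x^2 = y (mod p1*p2), combined via the Chinese remainder theorem."""
    n = p1 * p2
    r1, r2 = sqrt_mod_prime(y % p1, p1), sqrt_mod_prime(y % p2, p2)
    if r1 is None or r2 is None:
        return []
    roots = set()
    for a in (r1, (p1 - r1) % p1):
        for b in (r2, (p2 - r2) % p2):
            # CRT: the unique x (mod n) with x = a (mod p1) and x = b (mod p2)
            x = (a * p2 * pow(p2, -1, p1) + b * p1 * pow(p1, -1, p2)) % n
            roots.add(x)
    return sorted(roots)

# Toy example with tiny primes (real instances use cryptographic sizes):
p1, p2 = 7, 11
y = 9 * 9 % (p1 * p2)
print(preimages(y, p1, p2))  # [2, 9, 68, 75]: the square roots of y modulo 77
```

Without $p_1,p_2$, computing these roots is believed to be as hard as factoring $N$, which is why a purely classical Eve cannot reliably answer Bob's challenges.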

Considering all your desiderata, I think the particular embodiments of Kahanamoku-Meyer et al.'s test are designed to work particularly well with in-situ measurements; the original test cases were run on trapped-ion computers, and it is not yet practical to challenge classical computers. Still, the approach is appealing insofar as $x^2\bmod N$ is much easier to calculate than $a^x\bmod N$, which is required by Shor's algorithm and its kin.

Mark Spinelli