
Beginning with the earlier work of Brakerski et al. or the more recent results of Kahanamoku-Meyer et al., interactive proofs of quantum advantage entail a classical verifier (Vicky) providing a quantum prover (Peggy) a circuit to evaluate a trapdoor claw-free function $f$ in superposition; e.g., Peggy prepares $\sum_x|x\rangle|f(x)\rangle$ and measures the first and second registers in the Hadamard and computational bases, respectively. See, e.g., Mahadev's lecture, which expands these ideas into her breakthrough procedure for classically verifying a quantum computation.

These interactive procedures require Peggy to evaluate certain trapdoor claw-free functions. I'd like to see if we can modify this approach to use another hash function and outsource the verification to cryptocurrency miners.

That is, consider the following modification combining proof-of-work mining with proof-of-quantumness verification. I'll start off with SHA256 as the hash function, since that is what the bitcoin network uses (I'll relax that requirement below):

  1. Let a first register have $m$ qubits, and a second register have $m-1$ qubits. The quantum computer (Peggy) prepares the registers as $\frac{1}{\sqrt {2^m}}\sum|x\rangle|f(x)\rangle$, where $f(x)$ is the last $m-1$ bits of SHA256 of $x$.

  2. Peggy measures the second register $y=f(x)$ in the computational basis and commits and broadcasts $y$. This measurement of the second register collapses the first register onto the preimages of $f$ that collide at $y$. If $f$ is two-to-one, then the two preimages $x_1,x_2$ both hash onto $y$. Importantly, although Peggy does not find the colliding pair $(x_1,x_2)$, she maintains the pair in superposition.

  3. Peggy measures the first register in the Hadamard basis to obtain $d$, and commits and broadcasts $d$. We should have that $d\cdot (x_1\oplus x_2)=0$ from the Hadamard measurement of the first register. That is, although Peggy does not announce both preimages (because she can't yet), she announces a single bit that she has learned about these preimages.

  4. Bitcoin miners (Vicky) set their rigs to work, cycling through various $x$'s to find the two preimages $(x_1,x_2)$ such that $f(x_1)=f(x_2)=y$. The test that $d\cdot (x_1\oplus x_2)=0$ is also checked by other clients on the network once both $x_1$ and $x_2$ are broadcast by Vicky.

  5. Indeed, Peggy can incentivize miners to find her preimages by offering a smart contract that awards a certain amount of cryptocurrency to the first miners to broadcast the preimages.

If the miners Vicky are always (or often) able to find pairs of preimages that hash onto the announced $y$ and that also satisfy the orthogonality test with respect to $d$, then this shows that Peggy had possession of preimages in superposition - i.e., she was a quantum computer capable of evaluating $f(x)$, the last $m-1$ bits of SHA256, in superposition.
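
As a sanity check of the verification logic only (not, of course, of quantumness itself), here is a minimal classical simulation of steps 1-5 at a toy size. The names `toy_f` and `dot_mod2` and the size `M = 12` are illustrative choices, and the "quantum" steps are emulated by enumerating both preimages, which is only possible at toy sizes:

```python
import hashlib
import random
from collections import defaultdict

M = 12  # toy size: the first register has M bits, f outputs the last M-1 bits of SHA-256

def toy_f(x: int) -> int:
    """Last M-1 bits of SHA-256 of the M-bit input x (step 1)."""
    digest = hashlib.sha256(x.to_bytes(2, "big")).digest()
    return int.from_bytes(digest, "big") & ((1 << (M - 1)) - 1)

def dot_mod2(a: int, b: int) -> int:
    """Bitwise inner product modulo 2."""
    return bin(a & b).count("1") % 2

# Emulate an honest quantum Peggy classically (only feasible at toy sizes):
# group all inputs by image and pick a y with exactly two preimages.
preimages = defaultdict(list)
for x in range(1 << M):
    preimages[toy_f(x)].append(x)
y, (x1, x2) = next((img, xs) for img, xs in preimages.items() if len(xs) == 2)

# Steps 2-3: Peggy commits y; a Hadamard measurement of the collapsed first register
# yields a uniformly random d with d.(x1 xor x2) = 0 (mod 2); emulate that outcome.
while True:
    d = random.getrandbits(M)
    if dot_mod2(d, x1 ^ x2) == 0:
        break

# Steps 4-5: miners (Vicky) exhaustively search for preimages of the committed y,
# and the network checks Peggy's committed d against the pair they find.
mined = [x for x in range(1 << M) if toy_f(x) == y]
print("preimages found by miners:", mined)
print("orthogonality test passes:", dot_mod2(d, mined[0] ^ mined[1]) == 0)
```

At cryptographic sizes the enumeration in the "Peggy" block is exactly what is conjectured to be infeasible classically, which is the point of the protocol.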

The hash function $f(x)$ need not be SHA256, and the miners need not be bitcoin miners; rather, any cryptographically secure hash function that is easily implementable on a quantum circuit may be viable. But there need to be enough cryptocurrency miners, properly incentivized, to run all the hashes needed to find $x_1$ and $x_2$.

Is such a proof-of-work based proof-of-quantumness realistic in the NISQ era?

Ideally $f$ would be two-to-one, or close to it; I think the number of preimages per image for a random oracle, instantiated as a SHA256 hash, obeys Poisson statistics. But, as long as the hash is two-to-one often enough, I think enough statistics can be generated to reject the null hypothesis that Peggy was randomly guessing her $d$.

Mark Spinelli

2 Answers


Very cool idea! So the main benefit you're hoping for is a kind of publicly-verifiable proof-of-quantumness, rather than Mahadev's, which seems to depend on knowing the trapdoor of some function (so only one person could be convinced at a time).

To fake a proof, the prover would need to either (a) find $d$ such that there exist $x_1$ and $x_2$ with $f(x_1)|_m=f(x_2)|_m$ (i.e., restricted to the last $m$ bits) and $d\cdot (x_1\oplus x_2)=0$, or (b) produce one of $x_1$ or $x_2$. I can't think of a way to reduce this to a known hardness property of a hash function $f$, but it is at most as hard as $m$-bit collision-resistance: if I have a hash collision oracle for the last $m$ bits, then I can find $x_1$ and $x_2$ that collide on those $m$ bits and output any vector orthogonal to $x_1\oplus x_2$ in the first case, or one of the preimages in the second.

A key fact here is that finding $m$-bit collisions has complexity $\approx 2^{m/2}$, not $2^m$.
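
To make that square-root scaling concrete, here is a hedged sketch of the dictionary-based birthday search a cheating classical Peggy could run for case (a): find an $m$-bit collision in roughly $2^{m/2}$ hashes, then announce the common image together with any $d$ orthogonal to $x_1\oplus x_2$. The toy width `M_BITS = 20` and the helper names are illustrative choices, not parameters of the bitcoin network:

```python
import hashlib
import random

M_BITS = 20                  # toy collision width m (illustrative)
DOMAIN_BITS = M_BITS + 1     # preimage space one bit wider, as in the protocol
MASK = (1 << M_BITS) - 1

def f_tail(x: int) -> int:
    """Last M_BITS bits of SHA-256 of the DOMAIN_BITS-bit input x."""
    data = x.to_bytes((DOMAIN_BITS + 7) // 8, "big")
    return int.from_bytes(hashlib.sha256(data).digest(), "big") & MASK

def birthday_collision():
    """Dictionary search: a colliding pair after ~2^(M_BITS/2) hashes in expectation."""
    seen = {}
    while True:
        x = random.getrandbits(DOMAIN_BITS)
        t = f_tail(x)
        if t in seen and seen[t] != x:
            return seen[t], x, t
        seen[t] = x

x1, x2, y = birthday_collision()

# Attack (a): commit y together with any d satisfying d.(x1 xor x2) = 0 (mod 2).
# Miners who later dig up this colliding pair will see the orthogonality test pass,
# even though no quantum computation ever took place.
while True:
    d = random.getrandbits(DOMAIN_BITS)
    if bin(d & (x1 ^ x2)).count("1") % 2 == 0:
        break
print(f"faked commitment: y = {y:#x}, d = {d:#x}")
```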

The problem for the miners is: given $d$ and an $m$-bit string $s$, find $x_1$ and $x_2$ such that $f(x_1)\vert_m=f(x_2)\vert_m=s$ and $d\cdot(x_1\oplus x_2)=0$. This doesn't quite reduce to regular pre-image search, since if there is only one pre-image, the search might not return anything. But it seems to be about as hard as two generic $m$-bit preimage searches.

So, I don't think this works, because of the square-root gap: the prover just needs any collision, while the verifier needs to find a collision matching the output given by the prover. When I looked it up just now, bitcoin has 28-bit difficulty, so you should be able to find collisions in about $2^{14}$ hash iterations -- I think a single properly programmed GPU could solve that in under a second. More generally, this means a cheating prover needs hardware only proportional to (total bitcoin network hardware)/(square root of the current challenge level). I think this will end up increasing absolutely with the size of the bitcoin network, but as a proportion of the network it will decrease.
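
One way to read that estimate (writing $H$ for the network's hash rate and $2^k$ for the challenge level, both introduced just for this back-of-envelope reading): if the challenge is tuned so the whole network spends about $2^k \propto H$ hashes recovering Peggy's preimages, a cheating classical prover needs only a birthday search of about $2^{k/2}$ hashes in the same window, i.e. hardware of order

$$\frac{H}{\sqrt{2^{k}}} \;\propto\; \sqrt{H},$$

which indeed grows absolutely with the network but shrinks as a fraction of it.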

I also have zero faith in this being a NISQ technology. Computing hash functions in superposition requires a high-depth circuit; this paper estimates a gate depth that I think means 152,000 sequential gates for one SHA-256 call, and about 2600 logical qubits. That's for a single evaluation of the hash function, which is exactly what the quantum computer needs to do here. Since the pre-image space is restricted, we could pre-compute parts of the circuit for fixed parts of the input, but a good hash function diffuses its input quickly, so we don't get much benefit.

Sam Jaques

Self-answer, made CW (also, this is more of an extended comment to @SamJaques' excellent and accepted answer above).

As Sam indicates, the naïve approach outlined in the question is likely insecure, as it is amenable to a birthday attack by the prover Peggy.

Given a two-to-one hash function $f$ from $m+1$ bits onto $m$ bits, a cheating prover can find her own collision $y=f(x_1)=f(x_2)$ in time $\sqrt{2^m}$ and announce a random $d$ orthogonal to $x_1\oplus x_2$. Vicky the verifier's job is harder, as Vicky must find both $x_1$ and $x_2$ given only $y$, and this can only be done in time $2^m$. Thus, if $f$ were two-to-one, at the security level needed to defeat this attack Vicky could never keep up well enough to validate Peggy's output.

However, a random oracle from $m+1$ bits to $m$ bits is not, generically, two-to-one (although the expected number of preimages per image is two). Indeed, I think the number of preimages per image should obey a Poisson distribution, much as defects randomly sprinkled across chips on a wafer do.

I used to think this lack of a two-to-one response from random hash functions was a bug in the proposal, but it might be a feature. We might be able to leverage this to repair the proposal in light of Sam's birthday attack.

For example, if the miners are incentivized properly, then they should find all of the preimages that hash onto the broadcast image. Accordingly, if they find only one preimage, then we can assume that there was only one preimage to be found (provided the mining reward is high enough).

We can use this to look at all of Peggy's announcements and plot a histogram of the number of preimages found by the miners. For an honest Peggy, this histogram should obey Poisson statistics with $\lambda=2$, as there are twice as many input strings ($2^{m+1}$) as output strings ($2^m$), so the average number of preimages per image is two.
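
Here is a minimal sketch of that histogram at a toy size (the name `toy_f` and the size `M = 16` are illustrative): it tabulates how many images have $k$ preimages under a truncated SHA-256 and compares the observed fractions to the Poisson pmf at $\lambda=2$.

```python
import hashlib
import math
from collections import Counter, defaultdict

M = 16  # toy size: 2^M inputs hashed onto M-1 output bits (illustrative)

def toy_f(x: int) -> int:
    """Last M-1 bits of SHA-256 of the M-bit input x."""
    digest = hashlib.sha256(x.to_bytes(4, "big")).digest()
    return int.from_bytes(digest, "big") & ((1 << (M - 1)) - 1)

# Tally the number of preimages of every possible image.
preimage_count = defaultdict(int)
for x in range(1 << M):
    preimage_count[toy_f(x)] += 1
hist = Counter(preimage_count.values())

# Fraction of the 2^(M-1) possible images with exactly k preimages vs. Poisson(2).
# (Caveat: the images an honest Peggy actually announces are weighted by preimage
# count, so her announced histogram is the size-biased version of this distribution.)
total_images = 1 << (M - 1)
print(" k   observed   Poisson(2)")
for k in range(1, 7):
    observed = hist.get(k, 0) / total_images
    expected = math.exp(-2) * 2**k / math.factorial(k)
    print(f" {k}   {observed:.4f}     {expected:.4f}")
```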

Each time there is only one preimage (or, three or more preimages), Peggy's announced $d$ can be random. Each time there are two preimages, the announced $d$ better satisfy the orthogonality test with respect to the XOR of the preimages found by the miners.

But an honest or even a cheating Peggy has no way of knowing, a priori, how many preimages each of her announced $y$'s has. Sure, she could use the birthday attack to broadcast a $y$ for which she has found a collision $x_1,x_2$, but she would also have to announce an appropriate number of $y$'s that have only a single preimage, so as to satisfy the Poisson test.

Let a cheating Peggy announce a $d$ and a $y$ that she hopes has only one preimage. But she should not be able to know whether her announced $y$ has one, two, or more than two preimages. If her $y$ happens to have two preimages, she will get caught in her lie whenever $d$ isn't orthogonal to $x_1\oplus x_2$. Some appeal to the Chernoff bound and/or Hoeffding's inequality could then detect this cheating.
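
To sketch how that appeal might go (with $n$ and $\epsilon$ introduced just for this back-of-envelope bound): suppose that, over many rounds, $n$ of Peggy's announced images turn out to have exactly two preimages. An honest quantum Peggy passes the orthogonality test on every one of those rounds (up to device noise), while a Peggy who guessed her $d$'s at random passes each such round independently with probability $1/2$, so Hoeffding's inequality gives

$$\Pr\big[\text{a guessing Peggy passes at least } (\tfrac12+\epsilon)n \text{ of them}\big] \le e^{-2\epsilon^2 n}.$$

Demanding, say, a $90\%$ pass rate on two-preimage rounds ($\epsilon=0.4$) drives a cheater's survival probability down exponentially in $n$.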


The bigger issue that I've been sweeping under the rug is, of course, that SHA256 is most likely not implementable in the NISQ era. Sam is right that we don't need to do a full Grover run of SHA256, but even a single evaluation is too deep a circuit for now. I think the estimates of the number of qubits can be brought down, but the circuit depth may always be too high, and the overall fidelity too low, for this to be realistic.

Mark Spinelli