
In the paper *A quantum-implementable neural network model* (Chen, Wang & Charbon, 2017), the authors state on page 18 that "There are 784 qurons in the input layer, where each quron is comprised of ten qubits."

That seems like a misprint to me. After reading the first few pages, I was under the impression that they were trying to use $10$ qubits to encode the $784$ classical neurons of the input layer, which is possible since $2^{10}=1024>784$: the squared coefficient of each basis state would be proportional to the activity of one neuron. For instance, the squared coefficient of $|0000000010\rangle$ could be proportional to the activation of the $2$nd classical neuron (with all $784$ neurons labelled from $0$ to $783$).
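
To make that reading concrete, here is a small NumPy sketch of such an amplitude encoding. This is purely my own illustration of the idea, not code from the paper, and the variable names are mine:

```python
import numpy as np

# Illustration only (not from the paper): amplitude-encode 784 classical
# activations (e.g. MNIST pixel intensities) into the state of 10 qubits,
# i.e. into a vector of 2**10 = 1024 amplitudes.

n_qubits = 10
n_neurons = 784

activations = np.random.rand(n_neurons)   # stand-in for pixel values

state = np.zeros(2 ** n_qubits)           # 1024-dimensional state vector
state[:n_neurons] = np.sqrt(activations)  # |amplitude|^2 proportional to activation
state /= np.linalg.norm(state)            # enforce normalisation

# The squared amplitude of basis state |0000000010> (index 2) equals the
# activation of neuron 2 divided by the total activation:
print(state[2] ** 2, activations[2] / activations.sum())
```

The whole $784$-value input then fits into just $10$ qubits under this encoding.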

But if what they wrote is accurate, namely "There are 784 qurons in the input layer" with ten qubits per quron, that would mean there are $784 \times 10 = 7840$ qubits in the input layer, and then I'm not sure how they managed to implement their model experimentally. As of now we can properly simulate only ~$50$ qubits.

However, they manage to give an error rate for a network of more than $7840$ qubits (see page 21: "Proposed two-layer QPNN, ten hidden qurons, five select qurons - 2.38"). I have no idea how they managed to obtain that value. Could someone please explain?


1 Answer


> As of now we can properly simulate only ~$50$ qubits.

That figure refers to a full state-vector simulation, in which the complete quantum state of $50$ qubits is stored as a vector of $2^{50}$ complex amplitudes.
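
A quick back-of-the-envelope calculation (my own arithmetic, assuming $16$ bytes per complex double-precision amplitude) shows why that scaling is the bottleneck:

```python
import math

# Memory needed to store the full state vector of n qubits, assuming one
# complex double (16 bytes) per amplitude: 2**n * 16 bytes.
def log10_state_vector_bytes(n_qubits: int) -> float:
    # log10(2**n * 16) = (n + 4) * log10(2); working in log10 avoids overflow
    return (n_qubits + 4) * math.log10(2)

for n in (10, 50, 7840):
    print(f"{n:>5} qubits -> about 10^{log10_state_vector_bytes(n):.1f} bytes")

# 10 qubits   -> ~10^4.2 bytes  (~16 kB, trivial)
# 50 qubits   -> ~10^16.3 bytes (~18 PB, already beyond ordinary hardware)
# 7840 qubits -> ~10^2361 bytes (impossible to store by any means)
```

So nobody stores the full state vector of thousands of qubits; something else must be going on.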

In quantum neural networks and quantum annealing, however, we usually only need a state close to the ground state (a near-optimal value) rather than the exact global minimum, so the behaviour can be studied with approximate methods that never store the full $2^n$-dimensional state vector, and those scale to far more qubits.
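
As a rough intuition pump (a generic sketch, not the algorithm used in the paper or on the D-Wave hardware): a classical simulated-annealing search for a low-energy configuration of a $1000$-spin Ising chain keeps only the $1000$ spin values in memory, never a $2^{1000}$-dimensional state vector.

```python
import numpy as np

rng = np.random.default_rng(0)

# Generic sketch (not the paper's or D-Wave's actual method): simulated
# annealing on a 1000-spin Ising chain with energy
#   E(s) = -sum_i J_i * s_i * s_{i+1},  s_i in {-1, +1}.
# Memory cost is O(n) spin values, not O(2^n) amplitudes.
n = 1000
J = rng.normal(size=n - 1)            # random nearest-neighbour couplings
spins = rng.choice([-1, 1], size=n)   # start from a random configuration

def energy(s):
    return -np.sum(J * s[:-1] * s[1:])

T = 2.0
for step in range(200_000):
    i = rng.integers(n)
    # Energy change from flipping spin i (only its two neighbours matter).
    dE = 0.0
    if i > 0:
        dE += 2 * J[i - 1] * spins[i - 1] * spins[i]
    if i < n - 1:
        dE += 2 * J[i] * spins[i] * spins[i + 1]
    if dE <= 0 or rng.random() < np.exp(-dE / T):
        spins[i] *= -1                # accept the flip (Metropolis rule)
    T = max(0.01, T * 0.99997)        # slowly cool down

# For an open chain every bond can be satisfied, so the exact ground-state
# energy is -sum_i |J_i|; the anneal should land close to it.
print("energy found :", energy(spins))
print("exact optimum:", -np.abs(J).sum())
```

Large-scale studies of quantum annealers typically rely on analogous (though more sophisticated) stochastic techniques rather than on full state-vector simulation.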

For instance, there is an example from 2017 in which 1000 qubits are simulated.


And there is an example from 2015 in which 1000 qubits are simulated (the source says bits rather than qubits, but they are the qubits of the D-Wave device).
