
The paper Fully Homomorphic Encryption over the Integers describes a super simple symmetric-key scheme on pages 1 and 2.

It says that to generate a key, you pick a random odd number in the range $[2^{N-1}, 2^N)$, where $N$ is the size of your key in bits.

To encrypt a bit (a 0 or a 1), you do this:
$c = pq + 2r + m$

In other words:
$CipherBit = Key * RandomIntegerA + 2 * RandomIntegerB + PlainTextBit$

RandomIntegerA is there to hide the key better, and RandomIntegerB is a small number that adds noise to the encrypted value (to make it a "learning with errors"-style problem).

To recover an encrypted bit you do this:
$PlainTextBit = (CipherBit \% Key) \% 2$

You can do an XOR by adding two cipher bits together, and you can do an AND by multiplying them together.
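
Here's a minimal Python sketch of how I understand the scheme (the parameter sizes and helper names like keygen/encrypt/decrypt are just mine for illustration, not the paper's):

```
import random

def keygen(N=16):
    # pick a random odd N-bit key in [2^(N-1), 2^N)
    return random.randrange(2**(N - 1), 2**N) | 1

def encrypt(key, m, q_bits=8, r_bits=2):
    # c = key*q + 2*r + m  (q hides the key, r is the small noise)
    q = random.randrange(1, 2**q_bits)   # "RandomIntegerA"
    r = random.randrange(0, 2**r_bits)   # "RandomIntegerB"
    return key * q + 2 * r + m

def decrypt(key, c):
    return (c % key) % 2

def xor(c1, c2):   # XOR of the plaintext bits = addition of the ciphertexts
    return c1 + c2

def and_(c1, c2):  # AND of the plaintext bits = multiplication of the ciphertexts
    return c1 * c2

key = keygen()
a, b = encrypt(key, 1), encrypt(key, 0)
print(decrypt(key, a), decrypt(key, b))                    # 1 0
print(decrypt(key, xor(a, b)), decrypt(key, and_(a, b)))   # 1 0
```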

I've simplified my implementation a bit to make it easier to explain in a presentation, so I have RandomIntegerA hard-coded to 1 and RandomIntegerB hard-coded to 0.

This simplifies encryption (yes, it's insecure, but it's easier to explain) to this operation:
$CipherBit = Key + PlainTextBit$

The decryption, AND and XOR operations remain the same.
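
In Python, the simplified version I'm using for the presentation is just this (the key value is only for the example):

```
key = 9  # example key, just for illustration

def encrypt_simple(key, m):
    # simplified (insecure) version: RandomIntegerA = 1, RandomIntegerB = 0
    return key + m

def decrypt_simple(key, c):
    return (c % key) % 2

print(decrypt_simple(key, encrypt_simple(key, 0)))  # 0
print(decrypt_simple(key, encrypt_simple(key, 1)))  # 1
```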

Now that the explanation is out of the way, here is my issue...

When Key is 9 (1001 in binary), let's say I have 3 cipher bits A, B, and C valued at 10, 10, and 114100 respectively.

A quick check when decrypting them shows that they are all true bits:
$(10 \% 9) \%2 == 1$
$(114100 \% 9) \% 2 == 1$

I hit a problem though when I calculate A xor B xor C. To do this, I add them all up and get 114120, which decrypts to a zero bit:
$(114120 \% 9) \% 2 == 0$

However, 1 xor 1 xor 1 is 1!

Something I've noticed is that before doing the % 2 on A, B, and C, their remainders are 1, 1, and 7 respectively.

When adding them together, it effectively adds (and mods) their remainders, so it makes sense that $(1 + 1 + 7) \% 9$ would be 0.
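
A quick script that reproduces this (same numbers as above):

```
key = 9
A, B, C = 10, 10, 114100

print([(c % key) % 2 for c in (A, B, C)])  # [1, 1, 1] -- each bit decrypts to 1
print([c % key for c in (A, B, C)])        # [1, 1, 7] -- remainders before the % 2

total = A + B + C                          # 114120
print((total % key) % 2)                   # 0, but 1 xor 1 xor 1 should be 1
```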

Can anyone see where I've gone wrong?

Thanks!

EDIT: Here is an even simpler counterexample. With a key of 9, say we have two encrypted bits, 1 and 8, which decrypt to 1 and 0 respectively.

To do an XOR, we should be able to add them. When we add them, we get 9.

If we then decrypt 9, 9%9 is 0, so it says that 1 xor 0 is 0, which is wrong!

It seems like this scheme doesn't work if you hit these perfect "roll over" points.

Like if your key is 533 and you want to xor two encrypted bits: 532 (0) and 1 (1), you get 533, which decrypts to zero.

Or if your key is 5 and you want to xor 4 (0) and 1 (1), you add and get 5, which decrypts again to zero!
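
All three counterexamples check out in a quick loop:

```
# the three "roll over" counterexamples above, as (key, c1, c2)
for key, c1, c2 in [(9, 1, 8), (533, 532, 1), (5, 4, 1)]:
    bits = [(c % key) % 2 for c in (c1, c2)]
    xored = ((c1 + c2) % key) % 2
    print(key, bits, "->", xored)  # each pair sums to exactly key, so the xor comes out 0
```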

I've seen plenty of other papers extending homomorphic encryption over the integers... surely they can't all be wrong :P Not sure what I'm messing up, though...

Alan Wolfe
1 Answer

The problem is that you're getting an "overflow" of the errors relative to the secret key.

We have $10 \bmod 9 = 1$, and $114100 \bmod 9 = 7$, so the "errors" in your ciphertexts are $1$, $1$, and $7$, which are all odd, hence the plaintext bits are $1$ (as desired). When you add the ciphertexts, the errors add correspondingly, so you get a ciphertext with error $9$, which normally would decrypt to $1$. However, in this case your secret key is also $9$, so in fact decryption "sees" an error of $0$, i.e., the "intended" error overflows back to zero. This is why your final ciphertext decrypts to $0$ instead of $1$.

As the paper explains, the accumulated errors must stay smaller than the secret key in order for decryption to work. (For simplicity, throughout this answer I'm assuming only non-negative errors, and that the modular reduction returns a non-negative representative.)
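
A quick numerical check of this, using the errors from your example (the key values below are chosen only for illustration):

```
errors = [1, 1, 7]  # the (noise + message) parts of the question's ciphertexts

for key in (9, 101):  # 9 reproduces the question; 101 is any key larger than the error sum
    ciphertexts = [key * q + e for q, e in zip([1, 1, 12677], errors)]
    total = sum(ciphertexts)
    print(key, (total % key) % 2)
# prints: 9 0    -- the accumulated error 9 wraps around the key, wrong bit
#         101 1  -- the error sum stays below the key, correct bit
```

The only change in the second case is that the key is larger than the accumulated error, so reducing modulo the key leaves the error $9$ intact and the parity comes out right.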

Chris Peikert