10

Is it possible to develop a scheme where two parties, unsure if they have the same secret, can verify that the other does or does not share the same secret, without one party being able to cheat and come away with more knowledge than the other? (Or that if one does, that this can be detected by the other party.)

Let me explain using an example from our friends Alice and Bob. Their boss Charlie has just given Alice and Bob symmetric keys he requires them to use to communicate with him; $K_a$ and $K_b$, respectively. Alice and Bob expect their keys to be different, but are concerned that perhaps Charlie has reused the same key out of laziness.

Alice and Bob would like to verify each other's keys in such a way that, either:

  1. They both come away certain that $K_a = K_b$; or
  2. They both come away believing that $K_a \not= K_b$.

In other words, they want to prevent the possibility that:

  1. One party can discover that $K_a = K_b$ but conceal this, so the other party comes away believing that $K_a \not= K_b$ and unaware that foul play has taken place.

Note: Let's assume here that Alice and Bob are communicating on a secure channel and have authenticated each other. Let $h()$ be a secure hash function.

A naive approach would be a scheme like:

  1. ---> Alice sends $h(h(K_a))$ to Bob
  2. <--- Bob sends $h(K_b)$ to Alice

The problem here is that Bob could receive $h(h(K_a))$, calculate $h(h(K_b))$, discover they are equal but instead send $h(\textrm{random data})$. Alice might now believe her key is different to Bob's, when in fact he is able to decrypt all her messages.
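To make the failure concrete, here is a minimal sketch of the naive scheme and Bob's cheat. The key values are made up, and using SHA-256 as $h()$ is an illustrative assumption:

```python
import hashlib
import os

def h(data: bytes) -> bytes:
    """The secure hash function h() from the text (SHA-256 here)."""
    return hashlib.sha256(data).digest()

# Hypothetical keys -- equal here, so Bob has something to hide.
K_a = b"shared-key-from-charlie"
K_b = b"shared-key-from-charlie"

# Step 1: Alice sends h(h(K_a)).
alice_msg = h(h(K_a))

# Step 2: an honest Bob would reply with h(K_b) ...
honest_reply = h(K_b)

# ... but a cheating Bob first checks whether h(h(K_b)) matches
# Alice's message, and if so replies with a hash of random data.
if h(h(K_b)) == alice_msg:
    bob_reply = h(os.urandom(32))   # Bob now knows K_a = K_b; Alice won't.
else:
    bob_reply = honest_reply

# Alice's check: she concludes "different keys" even though they match.
alice_thinks_equal = (h(bob_reply) == alice_msg)
```

Bob has learnt that the keys are equal, while Alice walks away believing the opposite, with nothing in the transcript to reveal the foul play.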

I have tagged this with commitments since my guess is that the only way to solve this problem, if indeed it can be solved, would be for Alice and Bob to each commit to something about the verification messages they are about to transmit, so that they cannot later change what they send without the other party realising. They might also be able to use some zero-knowledge proofs on each other, so I tagged that too.

Another idea I had was that Alice and Bob could reveal their keys one bit at a time through a commitment scheme. That is, each sends $h(\textrm{bit}_i||\textrm{nonce}_i)$ for each bit $i$ of their keys. They then take it in turns to reveal the nonces; as soon as there is a mismatch, they stop sending nonces. A cheating party learns at most a few bits more than the other (depending on how well they can guess bits), which is still not ideal, but I suppose it is better than one party learning everything.
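The bit-at-a-time idea can be sketched as follows (an honest run only; SHA-256 hash commitments and 16-byte nonces are illustrative assumptions):

```python
import hashlib
import os

def commit(bit: int, nonce: bytes) -> bytes:
    """Hash commitment to a single bit: h(bit || nonce)."""
    return hashlib.sha256(bytes([bit]) + nonce).digest()

def to_bits(key: bytes):
    """Key as a list of bits, least-significant bit of each byte first."""
    return [(byte >> i) & 1 for byte in key for i in range(8)]

def run_protocol(K_a: bytes, K_b: bytes):
    """Honest run: both parties commit to every bit up front, then open
    the commitments one bit at a time, stopping at the first mismatch.
    Returns (number of bits opened, whether the keys were deemed equal)."""
    bits_a, bits_b = to_bits(K_a), to_bits(K_b)
    nonces_a = [os.urandom(16) for _ in bits_a]
    nonces_b = [os.urandom(16) for _ in bits_b]
    # Commitment phase: all commitments are exchanged before any opening.
    comms_a = [commit(b, n) for b, n in zip(bits_a, nonces_a)]
    comms_b = [commit(b, n) for b, n in zip(bits_b, nonces_b)]
    for i in range(len(bits_a)):
        # Each party reveals (bit, nonce) for position i; the other
        # checks the opening against the commitment sent earlier.
        assert commit(bits_a[i], nonces_a[i]) == comms_a[i]
        assert commit(bits_b[i], nonces_b[i]) == comms_b[i]
        if bits_a[i] != bits_b[i]:
            return i + 1, False   # mismatch: both stop sending nonces
    return len(bits_a), True
```

Because every bit was committed before the first opening, a cheater can stop early but cannot adapt the bits they reveal, which is what bounds their advantage to roughly one bit per round.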

codebeard
  • 306
  • 1
  • 9

6 Answers

8

By Theorem 3 on page 15 of this paper,
no secure-with-abort protocol for equality of long strings can be within 1/5 of fair.
If there is a protocol for equality on a domain of size at least 3 which is
secure against honest-but-curious adversaries, then oblivious transfer protocols exist.
If oblivious transfer protocols exist, then there are protocols for equality which give $P_1$
security-and-guaranteed-output-delivery and have $P_2\hspace{-0.03 in}$'s possible outputs be $\: \{\hspace{-0.02 in}\neq,\hspace{-0.03 in}\neq_{\hspace{-0.03 in}\perp},\hspace{-0.04 in}\perp,\hspace{-0.04 in}=\hspace{-0.02 in}\}$
and give $P_2$ the security that consists of indistinguishability from the ideal functionality
for which [if the inputs are equal then $P_2$ outputs the simulator's choice of $=$ or $\perp]$
and [if the inputs are not equal then the simulator chooses between $[P_2$ outputs $\neq]$
and $[P_2$ has a small chance of outputting $\perp$ and otherwise outputs $\neq_{\hspace{-0.03 in}\perp}\hspace{.02 in}]]$.


Such a protocol can be constructed as follows:


Start with Section 3.2, even though the domains are both super-polynomially large.
Look at $\operatorname{Sharegen}_r$, which is the box at the top of page 9.
If $\: x=y \:$ then set $\: \hspace{.03 in}j = i^* \:$, $\:$ else choose $\hspace{.03 in}j$ uniformly at random from $\: \{1,...,r\}\;$.
Ignore the for loop immediately after $\operatorname{Sharegen}_r\hspace{-0.04 in}$'s choice of $i^*\hspace{-0.03 in}$.
Instead, this phase is to set $a_{i^*}$ and $b_j$ to the relation ($=$ or $\neq$) that holds between $x$ and $y$,
give $i^*$ and a share of $a_{i^*}$ to $P_1$, give $\hspace{.03 in}j$ and a share of $b_j$ to $P_2$, and give each of them $r$
real-or-simulated shares for the $a$s and $b\hspace{.02 in}$s in order to hide $i^*$ and $\hspace{.03 in}j$ from $P_2$ and $P_1$ respectively.
Note that either the integer prefixes must have the same length as each other
for the same party, or a stateful MAC must be used instead of concatenation.

Now, look at the box at the top of the next page, but ignore its step 3.
Just like that paper mentions on page 10 in the paragraph that starts with "Proof", the rest
of this paragraph will ignore the MACs and the fact that the parties are to be checking them.

If $P_1$ gets $P_2\hspace{-0.03 in}$'s share of $a_{i^*}$ and $a_{i^*}$ is $=$ then $P_1$ [outputs $=$, sends the next message,
and then halts]. $\:$ Otherwise, $P_1$ will output $\neq$ when $P_2$ aborts or the for loop reaches its end.
If $P_2$ gets $P_1\hspace{-0.04 in}$'s share of $b_j$ and $b_j$ is $=$ then $P_2$ outputs $=$ and halts. $\:$ If the for loop reaches its end
and $b_j$ is $\neq$ then $P_2$ outputs $\neq$. $\:$ If $P_2$ gets the simulated-share for $b_{j-1}$ but does not get
$P_1\hspace{-0.04 in}$'s share of $b_j$ then $P_2$ outputs $\perp$. $\:$ If $P_1$ aborts at any other time then $P_2$ outputs $\neq_{\hspace{-0.03 in}\perp}$.

Note that for an adversary, acting as either party, the ability to abort and get significant information about whether or not the other party got $\neq$ for [the $a$ or $b$ value which the honest party would learn if both parties were honest] would let the adversary significantly bias the other party's output.
(i.e., significantly beyond what can already be done in the ideal model)


.


For that construction, "a small chance" means "probability $1/r$".
Since it gives $P_1$ security-with-guaranteed-output-delivery, which is even stronger than security-with-fairness, the above protocol runs into the limits imposed by the impossibility proof. $\:$ On one hand, just ignoring the subscript on $\neq_{\hspace{-0.03 in}\perp}$ gives $P_2$ security-with-abort,
so doing that must come at the cost of significant unfairness.
On the other hand, doing that and replacing $\perp$ with $=$ gives $P_2$ approximate fairness,
so doing that must come at the cost of security-with-abort.


However, time-lock puzzles are enough to get around that, yielding a protocol such that

$P_1\hspace{-0.04 in}$'s possible outputs are $\: \{\hspace{-0.02 in}\neq,\hspace{-0.03 in}\neq_{\hspace{-0.03 in}\perp},\hspace{-0.03 in}=_{\hspace{-0.03 in}\perp},\hspace{-0.04 in}=\hspace{-0.02 in}\} \:$ and $P_2\hspace{-0.03 in}$'s possible outputs are $\: \{\hspace{-0.02 in}\neq,\hspace{-0.03 in}\neq_{\hspace{-0.03 in}\perp},\hspace{-0.03 in}\perp,\hspace{-0.03 in}=_{\hspace{-0.03 in}\perp},\hspace{-0.04 in}=\hspace{-0.02 in}\} \;$.
$P_1$ still gets security-and-guaranteed-output-delivery,
but that delivery will take time $T$ when $P_2$ aborts.
$P_2\hspace{-0.03 in}$'s standard-model security (i.e., what applies even if $P_1$ can quickly break the puzzles)
is like it was for the previous protocol, but if the inputs are equal then the simulator
can alternatively choose $=_{\hspace{-0.03 in}\perp}$. $\:$ If the puzzles are secure against $P_1$, then $P_1$ aborting
or not is statistically close to independent of $P_2\hspace{-0.03 in}$’s input and for equal inputs, the probability
of $P_2$ outputting $\perp$ is statistically close to $1/r$ times the probability of $P_1$ aborting.

.


Time-lock puzzles can be brought in to do that as follows:


The initialization phase will produce $\:a_1,...,a_r\:$,$\:$ rather than just $a_{i^*}$. $\;\;\;$ If the inputs are equal then for all elements $i$ of $\:\{i^*,...,r\}\:$,$\:$ $a_i$ will be $=$. $\;\;\;$ Other than that, the $a$s will all be $\neq$.
$P_1$ will get a share of each $a$, but $i^*$ is to stay hidden from $P_1$. $\:$ $P_2$ will get time-lock puzzles that encode shares of the $a$s, rather than real-or-simulated shares of not-necessarily-existing $a$s. $\:$ (Those shares will not need to be at-all hidden from $P_2$, which may make things easier
than they'd otherwise be.) $\:$ An $(r\hspace{-0.03 in}+\hspace{-0.06 in}1)\hspace{-0.02 in}$-time MAC will be used for messages from $P_2$,
since in addition to the puzzles (rather than the encoded-shares directly)
being tagged, $P_2\hspace{-0.03 in}$'s share of $b_j$ will be authenticated with $\: i = 0 \;$.
After the for loop reaches its end, $P_2$ will send that share and its MAC-tag to $P_1$.

Just like in my previous description, the rest of this description will ignore the MACs.
If $P_2$ aborts before sending the puzzle that encodes its share of $a_1$, then $P_1$ outputs $\neq$.
If $P_2$ aborts at any later time, then $P_1$ sets $c$ to be the solution to the most recent puzzle
and outputs $c_{\hspace{-0.03 in}\perp}$. $\:$ If $P_2$ sends its final message then $P_1$ and $P_2$ both output $b_j$.
If $P_2$ does not get $P_1\hspace{-0.04 in}$'s simulated-share of $b_{j-\hspace{-0.02 in}1}$ then $P_2$ outputs $\neq_{\hspace{-0.03 in}\perp}$.
If $P_2$ gets that but does not get $P_1\hspace{-0.04 in}$'s share of $b_j$ then $P_2$ outputs $\perp$. $\:$ If $P_2$ gets $P_1\hspace{-0.04 in}$'s share of $b_j$
but $P_1$ aborts before sending its final message, then $P_2$ sets $c$ to be $b_j$ and outputs $c_{\hspace{-0.03 in}\perp}$.


.


That sort of idea can in fact be used for securely computing arbitrary functions,
by modifying how I was describing their use for equality so that [for $i$ from $1$ to $i^*\hspace{-0.06 in}-\hspace{-0.06 in}1$,
$a_i$ will be $\perp$ rather than $\neq]$ and [the rest of the $a$s will be set to the function's output] and
replacing ["then $P_1$ outputs $\neq$","outputs $c_{\hspace{-0.03 in}\perp}\hspace{-0.02 in}$","then $P_2$ outputs $\neq_{\hspace{-0.03 in}\perp}\hspace{-0.02 in}$"] with ["then $P_1$ outputs $\perp\hspace{-0.03 in}$",
"if $c$ is $\perp$ then that party outputs $\perp$ else that party outputs $c_{\hspace{-0.03 in}\perp}\hspace{-0.03 in}$","then $P_2$ outputs $\perp \hspace{-0.03 in}$"] respectively.
(That is quite different from the traditional way of using time-lock puzzles for something
resembling fairness, which is to decrease $T$ geometrically between pairs of messages.)
However, protocols using time-lock puzzles as I described
for computing equality have the advantage of offering something non-trivial
in the way of fairness to $P_2$ even if $P_1$ can quickly break the puzzles.


(There was a non-proof after this sentence, but it was just for my conversation
with Yehuda Lindell, rather than being related to the rest of this answer.)

5

This cannot be done. It is provably impossible. In order to explain this in technical terms, what you are looking for is a FAIR protocol to compute equality of long random strings (I added the latter since it adds a constraint and so in theory could make it easier). In any case, if I had such a protocol, then I could toss a fair unbiased coin. Here is the coin-tossing protocol:

  1. Party 1 chooses two long random strings; Party 2 chooses a random bit.
  2. Party 1 and Party 2 run oblivious transfer, in which Party 2 gets one of the strings (and learns nothing of the other).
  3. Party 1 chooses one of the two strings at random.
  4. Party 1 and Party 2 run the secure equality protocol that we are assuming (by contradiction) to exist
  5. If they receive EQUAL from step (4) then they output 0; otherwise, they output 1

Note that since step 4 is fair and they are guaranteed to receive the same output, this computes a fair coin. (Need a formal proof but I am sure this will go through.)
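An honest run of this reduction can be simulated end to end, with the oblivious transfer of step 2 and the assumed fair equality protocol of step 4 replaced by ideal (trusted-party) computations; this is a sketch under that assumption, not an implementation of either primitive:

```python
import secrets

def coin_toss_via_equality() -> int:
    """One honest run of the reduction from coin tossing to fair equality,
    with the OT and the equality test done by an imaginary trusted party."""
    # 1. Party 1 chooses two long random strings; Party 2 a random bit.
    s0, s1 = secrets.token_bytes(32), secrets.token_bytes(32)
    b = secrets.randbelow(2)
    # 2. Ideal oblivious transfer: Party 2 learns s_b, nothing of the other.
    received = (s0, s1)[b]
    # 3. Party 1 chooses one of the two strings at random.
    c = secrets.randbelow(2)
    chosen = (s0, s1)[c]
    # 4-5. Ideal equality test: EQUAL (output 0) iff b == c, since the two
    # strings are distinct with overwhelming probability.
    return 0 if chosen == received else 1
```

The output is 0 exactly when Party 1's random choice $c$ equals Party 2's hidden bit $b$; since each is uniform and independent of the other, the coin is fair.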

We conclude with the fact that Cleve in 1986 proved that it is impossible to toss a fair unbiased coin. (By the way, you may be concerned that my coin tossing protocol isn't secure against malicious adversaries since Party 1 can choose both strings to be the same and then bias the outcome. However, Cleve's proof holds for fail-stop adversaries as well; these follow the protocol specification but just may halt early. So, the above protocol is secure for fail-stop, which suffices to reach a contradiction.) End of Proof.

So, this is impossible and cannot be done. The previous answer is basically saying to do gradual release which is an approximation of fairness. This may suffice for what you want, but it doesn't give full fairness (since we have proven this is impossible).

=========== IMPORTANT ====================

The above answer is INCORRECT. It only proves that it is impossible to securely compute equality on large strings in the fail-stop model (which is quite obvious). I am new to this site, so I'm not sure if I should add this disclaimer or erase the above (I have a problem with erasing history, so I'm adding this instead).

Yehuda Lindell
  • 28,270
  • 1
  • 69
  • 86
4

One could split both secrets into smaller parts, commit to the parts, and "gradually" open those commitments to each other, so that neither party is ever ahead of the other by more than one such part.

For example, let the secret be a big number split into bits. With an additively homomorphic bit commitment scheme, the other party could verify that the bit commitments correspond to the commitment to the initial secret. At each round, both parties open one bit. Either party stops talking if the other is caught cheating.

The number of parts is the major trade-off in this scheme: a larger number gives a cheating party a smaller achievable advantage, but requires a longer protocol run.
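One concrete instantiation of such an additively homomorphic bit commitment is Pedersen commitments. The sketch below uses deliberately tiny toy parameters for illustration only; real parameters must be large, and the discrete log of $h$ base $g$ must be unknown to both parties:

```python
import secrets

# Toy Pedersen parameters: p = 2q + 1 with p, q prime; g and h generate
# the order-q subgroup of quadratic residues mod p. ILLUSTRATION ONLY.
p, q = 1019, 509
g, h = 4, 9

def commit(m: int, r: int) -> int:
    """Pedersen commitment C = g^m * h^r mod p (hiding and binding)."""
    return (pow(g, m % q, p) * pow(h, r % q, p)) % p

secret = 0b1011                                # the secret, as a small number
bits = [(secret >> i) & 1 for i in range(4)]   # its bit decomposition

# Commit to each bit, and to the whole secret with matched randomness.
rs = [secrets.randbelow(q) for _ in bits]
bit_comms = [commit(b, r) for b, r in zip(bits, rs)]
R = sum(r << i for i, r in enumerate(rs)) % q
whole_comm = commit(secret, R)

# Additive homomorphism: prod_i C_i^(2^i) = g^(sum b_i 2^i) * h^(sum r_i 2^i),
# which is itself a commitment to the secret. The verifier can therefore
# check the bit commitments against the initial commitment without any
# commitment being opened.
product = 1
for i, C in enumerate(bit_comms):
    product = (product * pow(C, 1 << i, p)) % p
```

Once this consistency check passes, the parties can open the bit commitments one round at a time, exactly as described above.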

Vadym Fedyukovych
  • 2,347
  • 14
  • 19
1

I'm new here, so I'm not sure about the best way to hold this discussion. So, I am adding a separate answer to address how my proof sketch showing the impossibility of the problem in this question relates to Ricky's answer above, which is based on the protocol in this paper (page 16).

The answer is closely connected to technical details of how you define and model fail-stop adversaries. First, it is clear that Ricky's protocol is NOT secure against malicious adversaries, since a malicious adversary can use a different input that forces the output deterministically. However, Ricky is indeed claiming only fail-stop security. The problem is that the fair protocols in the cited paper are only secure against malicious adversaries, but not against fail-stop adversaries. How is this possible? Well, the straightforward way that you would define fail-stop adversaries has the property that the corrupted party's input cannot be changed even in the ideal model. However, the fair protocols we have all require the simulator to choose the input depending on when the adversary halts. Thus, they would be secure as long as the simulator CAN change the input of the corrupted party in the ideal model. However, if we allow this, then Ricky's protocol would no longer be secure even for fail-stop adversaries. (I hope that this makes sense; it's very involved, so it is hard to get across in a forum like this.)

In contrast, in my proof, the oblivious transfer IS secure for fail-stop adversaries (and no fairness is needed for it).

Does this make sense now?

==== SEE CORRECTION ABOVE: THIS ONLY PROVES IMPOSSIBILITY FOR FAIL-STOP. THANK YOU TO RICKY! ====

Yehuda Lindell
  • 28,270
  • 1
  • 69
  • 86
1

They each symmetrically encrypt their key with itself, using an algorithm (or AES with enough iterations) slow enough that it takes minutes, even hours, to complete (this gives ek1). Then they do the same thing again (encrypt ek1 with itself; this gives ek2) and send ek2 to the other person when they both say they are done. If the ek2 values don't align, both parties then send ek1 to each other, and each confirms that the other's ek2 is in fact their ek1 encrypted with itself.

If they send ek2 and the values don't match up, so both people send ek1, then Alice may find, on decrypting Bob's ek2 with his ek1, that Bob didn't actually send a valid ek2 but instead sent her random characters. This could be because he got her ek2 and it matched his, so he sent random characters back.

If they both say they are done and Alice sends her ek2, but then Bob takes forever to send his ek2, the values don't match, they exchange ek1, and Bob's ek2/ek1 pair is valid, then Alice knows he saw that their ek2s matched and then just encrypted random text with itself and sent it as an ek2.

If they both send ek2 at roughly the same time and the values don't match, then they exchange ek1: Alice can be sure Bob didn't cheat by decrypting his ek2 with his ek1, and Bob can be sure Alice didn't cheat by decrypting her ek2 with her ek1.

If they both send ek2 and the values match, they know for sure that there was fair play.
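The scheme above can be sketched as follows. As a stand-in for slow self-encryption I use iterated SHA-256 (an assumption; the answer proposes AES with many iterations), with a deliberately tiny iteration count so the demo runs quickly:

```python
import hashlib

def slow(data: bytes, iterations: int = 10_000) -> bytes:
    """Stand-in for the 'encrypt with itself, many iterations' step:
    iterated hashing forces roughly `iterations` sequential operations.
    Any slow, deterministic, hard-to-invert function plays the same role;
    the count here is far too small for real use."""
    for _ in range(iterations):
        data = hashlib.sha256(data).digest()
    return data

def prepare(key: bytes):
    """Compute the pair (ek1, ek2) the scheme has each party derive."""
    ek1 = slow(key)   # first slow pass over the key
    ek2 = slow(ek1)   # second slow pass; ek2 is what gets exchanged first
    return ek1, ek2

# Exchange: both parties send ek2; if the values differ, they reveal ek1
# and each checks the other's pair is consistent, i.e. ek2 == slow(ek1).
alice_ek1, alice_ek2 = prepare(b"key-A")
bob_ek1, bob_ek2 = prepare(b"key-B")
keys_match = (alice_ek2 == bob_ek2)
pair_valid = (slow(bob_ek1) == bob_ek2)   # Alice's consistency check on Bob
```

The consistency check is what the slowness is meant to protect: a cheater who wants to substitute a fake ek2 after seeing the other's value must still produce a matching ek1 later, which costs another full slow pass.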

Edit: As pointed out by @codebeard, if Bob wants to cheat he can send a value computed from random characters, compute the real ek2 after the comparison, and compare it with Alice's later. This will only work, however, if the keys are in fact the same.

Kai Arakawa
  • 145
  • 9
0

If it's possible that either party can get hold of the other's ciphertext during transmission and decode it, then they could use that itself to determine whether they can decode each other's messages.

Since either party could use a random key, thinking the other person will be using the real one, the effectiveness of this technique depends on the consequences of sending a message (to their boss) with an invalid key.

If their boss is nice, they could send a few mails with some not-so-important content, while at the same time requesting the boss to ignore their first few mails but to reply with an acknowledgement, giving each other a chance to try to read each other's messages.

Ravindra HV
  • 204
  • 6
  • 14