
While reading tutorials on two-party computation I encountered two (at least formally) different definitions of security (with semi-honest adversaries). What I want to know is whether these definitions are actually different or can be shown to be equivalent. I suspect that they are different, but I might be missing something, considering that I have not read anywhere about different definitions.

In Lindell (2016), secure two-party computation is defined like this: For each party, the joint distribution of the simulator's output and the ideal result must be computationally indistinguishable from the joint distribution of that party's view and the computed output. Formally, for each $i \in \{1, 2\}$, there must exist a p.p.t. algorithm $S_i$ such that $$ {\lbrace (S_i(1^n, w_i, f_i(x, y)), f(x, y)) \rbrace}_{x, y, n} \stackrel{c}{\equiv} {\lbrace (\operatorname{view}^\pi_i(x, y, n), \operatorname{output}^\pi(x, y, n)) \rbrace}_{x, y, n} \,\text{,} $$ where $w_1 = x$ and $w_2 = y$ denote the parties' inputs. This definition makes sense to me, since the author defines computational indistinguishability over probability ensembles indexed by both the security parameter and the input:

Two probability ensembles $X = \{X(a, n)\}_{a \in {\{0, 1\}}^*; n \in \mathbb{N}}$ and $Y = \{Y(a, n)\}_{a \in {\{0, 1\}}^*; n \in \mathbb{N}}$ are said to be computationally indistinguishable, denoted by $X \stackrel{c}{\equiv} Y$, if for every non-uniform polynomial-time algorithm $D$ there exists a negligible function $\mu(\cdot)$ such that for every $a \in \{0, 1\}^*$ and every $n \in \mathbb{N}$, $$ \lvert \Pr[D(X(a, n)) = 1] - \Pr[D(Y(a, n)) = 1] \rvert \leq \mu(n) \,\text{.} $$

This means that, for every distinguisher, there must be a single negligible function $\mu$ that works for all inputs.
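Spelled out as a quantifier prefix (my reading of the definition, not a quote), this says: $$ \forall D \ \exists \mu \ \forall a \in \{0, 1\}^* \ \forall n \in \mathbb{N}: \ \lvert \Pr[D(X(a, n)) = 1] - \Pr[D(Y(a, n)) = 1] \rvert \leq \mu(n) \,\text{,} $$ so the negligible function may depend on the distinguisher but not on the input.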

In contrast, Evans et al. (2018) define computational indistinguishability for probability ensembles indexed only by the security parameter. I have also seen definitions of computational indistinguishability like this elsewhere. Then, when defining security, they require that the joint distributions are computationally indistinguishable for all inputs. At least formally, this suggests to me that here the negligible function may depend on the input.
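If I read that correctly, the corresponding quantifier prefix would be: $$ \forall a \in \{0, 1\}^* \ \forall D \ \exists \mu \ \forall n \in \mathbb{N}: \ \lvert \Pr[D(X(a, n)) = 1] - \Pr[D(Y(a, n)) = 1] \rvert \leq \mu(n) \,\text{,} $$ i.e. the negligible function may now depend on both the distinguisher and the input.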

Answers to the following questions would be very much appreciated:

  1. Am I missing something or misunderstanding the definitions? Can the definitions be shown to be equivalent by exploiting the non-uniformity of the adversary?
  2. If not, is it the case that in the latter definition it is not required that a single negligible function "works" for all inputs? If I am not mistaken, that would imply that the two definitions are in fact different.
  3. In case they are different: Which of the definitions should be preferred?

1 Answer


Defining indistinguishability is very tricky. I actually think that the definition in the book by Evans et al. is too weak, but maybe Mike Rosulek will weigh in.

If you define security by saying that for every input the REAL and IDEAL distributions are indistinguishable, then what you are actually saying is the following: for every input and every distinguisher there exists a negligible function $\mu$ so that the distinguisher succeeds with probability at most $\mu(n)$. This means that you may need a different negligible function for each input. To be more concrete, if we open this up further, the definition says that for every input, every distinguisher $D$ and every polynomial $p$ there exists a value $n_0$ such that for every $n > n_0$ the distinguisher succeeds with probability less than $1/p(n)$. This means that $n_0$ can depend on the input and, in particular, there need not be a single $n_0$ beyond which there is indistinguishability for all inputs. Stated differently, you would potentially have to take a different security parameter for different inputs. That is not something you would want to do (and it wouldn't even be possible: how do you agree on the security parameter without knowing the input, and how do you determine what that security parameter should be?).

In contrast, in the definition where the input is part of the ensemble, there is one $n_0$ for all inputs. The question of how we determine that $n_0$ is simple in practice - it is what we need for all of the primitives we use to be secure. Needless to say, Evans et al. are not doing anything different in their constructions. However, the definition is flawed, to the best of my understanding.
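To illustrate the gap with a toy (entirely artificial) example: suppose that on input $x$ some distinguisher achieves advantage $$ A(x, n) = \begin{cases} 1 & \text{if } n \leq \lvert x \rvert \\ 2^{-n} & \text{if } n > \lvert x \rvert \end{cases} \,\text{.} $$ For every fixed input $x$ this is bounded by a negligible function of $n$ (take $n_0 = \lvert x \rvert$), so the per-input requirement is met; but there is no single negligible $\mu$ with $A(x, n) \leq \mu(n)$ for all $x$ and $n$, since for every $n$ there is a long enough input on which the advantage is $1$.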

[On a side note, there is a short paper by Mihir Bellare called "A Note on Negligible Functions" that enables you to reverse the order of the quantifiers over the adversary and the negligible function. However, to the best of my knowledge, this doesn't work for inputs.]

Yehuda Lindell