
Given that:

$$ SD\bigg( (r, \langle r, s \rangle),(r, b) \bigg) < \mathrm{negl}(n)$$

where $SD$ denotes statistical distance, $r$ is uniformly random in $\{0,1\}^n$, $s$ is uniformly random in $S \subseteq \{0,1\}^n$, and $b$ is a uniformly random bit.
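As a sanity check (not a proof), the first claim can be tested numerically for small parameters. In the Python sketch below, $n=8$ and a random $S$ with $|S|=64$ are arbitrary illustrative choices of mine:

```python
# Compute the exact statistical distance between (r, <r,s>) and (r, b)
# for small parameters. n and |S| are arbitrary illustrative choices.
import itertools
import random
from collections import Counter

n = 8
S = set()
while len(S) < 64:  # a random subset S of {0,1}^n
    S.add(tuple(random.getrandbits(1) for _ in range(n)))

def ip(r, s):
    """Inner product <r, s> over GF(2)."""
    return sum(a & b for a, b in zip(r, s)) & 1

# Joint distribution of (r, <r,s>) for uniform r and uniform s in S
p = Counter((r, ip(r, s))
            for r in itertools.product((0, 1), repeat=n)
            for s in S)
p_total = sum(p.values())

# Joint distribution of (r, b) for uniform r and a uniform bit b
q = Counter((r, b)
            for r in itertools.product((0, 1), repeat=n)
            for b in (0, 1))
q_total = sum(q.values())

keys = set(p) | set(q)
sd = 0.5 * sum(abs(p[k] / p_total - q[k] / q_total) for k in keys)
print(f"SD = {sd:.4f}")  # shrinks as |S| grows toward 2^n
```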

It seems intuitive that, given a collision-resistant hash function $h$, it should also hold that:

$$ SD\bigg( \big(h(s), r, \langle r, s \rangle \oplus 0\big),\big(h(s), r, \langle r, s \rangle \oplus 1\big) \bigg) < \mathrm{negl}(n)$$

but I cannot seem to prove this. I've tried using the formal definition of $SD$, but I don't know how to handle the fact that the distributions are over tuples, so I haven't even reached a point where I could incorporate the first claim.

Is there a way to show this from the first claim? Or am I wrong, and is there a way to refute it? (For context: I'm trying to show that $\big(h(s), r, \langle r, s \rangle \oplus b\big)$ is a statistically hiding commitment scheme.)
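For concreteness, here is a toy sketch of the scheme I have in mind, with SHA-256 standing in for the abstract collision-resistant $h$ (that substitution, and $n=128$, are purely illustrative):

```python
# Toy sketch of the commitment (h(s), r, <r,s> xor b).
# SHA-256 stands in for the abstract h; n = 128 is illustrative.
import hashlib
import secrets

n = 128

def ip(r, s):
    """Inner product <r, s> over GF(2), for n-bit integers."""
    return bin(r & s).count("1") & 1

def commit(b):
    s = secrets.randbits(n)  # committer's secret
    r = secrets.randbits(n)  # fresh randomness
    hs = hashlib.sha256(s.to_bytes(n // 8, "big")).digest()
    return (hs, r, ip(r, s) ^ b), s  # commitment, opening

def verify(commitment, s, b):
    hs, r, masked = commitment
    return (hashlib.sha256(s.to_bytes(n // 8, "big")).digest() == hs
            and ip(r, s) ^ b == masked)

c, opening = commit(1)
assert verify(c, opening, 1)
assert not verify(c, opening, 0)
```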

Thanks.

Anon

1 Answer


I'm not sure whether this answers your question, but here is a proof that it should be computationally infeasible to exhibit a statistical distance between the two distributions for a preimage-resistant hash function. Suppose the statistical distance is non-negligible, and let $y$ and $r_0$ be two values such that $$\left|\mathbb P(h(s)=y, r=r_0, \langle r,s\rangle=0)- \mathbb P(h(s)=y, r=r_0, \langle r,s\rangle=1)\right|>\mathrm{negl},$$ which is equivalent to the distinguishability criterion. Since $r$ is independent of $s$, dividing both terms by $\mathbb P(r=r_0)\,\mathbb P(h(s)=y)$ tells us that $$\left|\mathbb P(\langle r_0,s\rangle=0\mid h(s)=y)-\mathbb P(\langle r_0,s\rangle=1\mid h(s)=y)\right|>\mathrm{negl};$$ note that by the law of total probability the sum of these two conditional probabilities is 1. Let us assume then that $$\mathbb P(\langle r_0,s\rangle=0\mid h(s)=y)>1/2+\delta$$ for some non-negligible $\delta$ (the other case being very similar). We will use this $y$ and $r_0$ to construct, faster than by exhaustive search, a preimage $s$ such that $h(s)=y$.

We restrict ourselves to evaluating $h(s)$ on inputs that satisfy $\langle r_0,s\rangle =0$. Then by Bayes' theorem $$\mathbb P(h(s)=y\mid \langle r_0,s\rangle=0)=\frac{\mathbb P(\langle r_0,s\rangle=0\mid h(s)=y)\,\mathbb P(h(s)=y)}{\mathbb P(\langle r_0,s\rangle=0)}>(1+2\delta)\,\mathbb P(h(s)=y),$$ using $\mathbb P(\langle r_0,s\rangle=0)=1/2$, which by your first claim holds up to negligible error. We have thus improved our probability of success by a factor of $(1+2\delta)$ over exhaustive search. If multiple $r_i$ values deviate non-negligibly and independently, we can improve the advantage further by exhausting over inputs that satisfy the multiple linear constraints simultaneously.
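For illustration, here is a minimal Python sketch of this restricted search. The toy hash (SHA-256 truncated to 12 bits), the parameter $n$, and the particular $r_0$ and target $y$ are all my own assumptions, not anything guaranteed by the argument above; the point is that the hyperplane $\{s : \langle r_0,s\rangle=0\}$ can be enumerated directly via a pivot bit instead of filtering all of $\{0,1\}^n$:

```python
# Restricted search: test the hash only on inputs s with <r0, s> = 0.
# The toy hash, n, r0 and the target y are illustrative assumptions.
import hashlib
import itertools

n = 16

def h(s):
    """Stand-in hash: SHA-256 of an n-bit input, truncated to 12 bits."""
    digest = hashlib.sha256(s.to_bytes(n // 8, "big")).digest()
    return int.from_bytes(digest, "big") >> (256 - 12)

def ip(r, s):
    """Inner product <r, s> over GF(2), for n-bit integers."""
    return bin(r & s).count("1") & 1

def hyperplane(r0):
    """Yield every s in {0,1}^n with <r0, s> = 0 (requires r0 != 0)."""
    pivot = r0.bit_length() - 1  # highest set bit of r0
    free = [i for i in range(n) if i != pivot]
    for bits in itertools.product((0, 1), repeat=n - 1):
        s = sum(bit << pos for bit, pos in zip(bits, free))
        s |= ip(r0, s) << pivot  # set the pivot bit so the constraint holds
        yield s

r0 = 0b1011001110001101   # hypothetical biased r0
y = h(0b0101010101010101)  # hypothetical target value
for s in hyperplane(r0):
    if h(s) == y:
        print(f"preimage found: s = {s:0{n}b}")
        break
```

With several constraints $\langle r_i,s\rangle=c_i$, one would instead enumerate the solution set of the corresponding linear system over $\mathrm{GF}(2)$, shrinking the search space by a factor of two per independent constraint.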

Daniel S