
Here is the description of the Deutsch-Jozsa problem: Let $f: \{0,1\}^n \to \{0,1\}$ be a function promised to be either constant or balanced ('balanced' means that $f$ outputs as many 0's as 1's).

I need to show that a probabilistic classical algorithm making two evaluations of $f$ can with probability at least 2/3 correctly determine whether $f$ is constant or balanced.

There is also a hint: Your guess does not need to be a deterministic function of the results of the two queries. Your result should not assume any particular a priori probabilities of having a constant or balanced function.

I'm a bit lost here. My thinking is that if the two evaluations are different, then $f$ is definitely balanced. Otherwise, $f$ could be either constant or balanced. But then the chance of success would depend on the a priori probability of $f$ being constant, which contradicts the given hint.

How should I approach this problem?

2 Answers


Algorithm:

  1. Independently and uniformly sample two queries from $\{0, 1\}^n$, denoted as $q_1$ and $q_2$.

  2. Evaluate $f(q_1)$ and $f(q_2)$. If $f(q_1) \neq f(q_2)$, output $\color{blue}{\mathsf{balanced}}$; otherwise, output $\color{blue}{\mathsf{balanced}}$ with probability $\frac{1}{3}$ and $\color{red}{\mathsf{constant}}$ with probability $\frac{2}{3}$.
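For concreteness, here is a minimal Python sketch of the algorithm above (the function name `guess_constant_or_balanced` and the convention that the oracle `f` takes an $n$-bit tuple are my own, illustrative choices):

```python
import random

def guess_constant_or_balanced(f, n):
    """Classify f: {0,1}^n -> {0,1} as 'balanced' or 'constant' using exactly two evaluations."""
    # Step 1: sample two queries independently and uniformly from {0,1}^n.
    q1 = tuple(random.randint(0, 1) for _ in range(n))
    q2 = tuple(random.randint(0, 1) for _ in range(n))

    # Step 2: the two evaluations of f.
    if f(q1) != f(q2):
        return "balanced"  # differing outputs are only possible for a balanced f
    # Equal outputs: guess "balanced" with probability 1/3 and "constant" with probability 2/3.
    return "balanced" if random.random() < 1 / 3 else "constant"
```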

Analysis:

We have

\begin{align} &P(\text{output }\color{blue}{\mathsf{balanced}} \mid f \ \color{blue}{\mathsf{balanced}}) \\[5pt] =\ &P(\text{output }\color{blue}{\mathsf{balanced}}, f(q_1) \neq f(q_2) \mid f \ \color{blue}{\mathsf{balanced}}) \\ &\quad + P(\text{output }\color{blue}{\mathsf{balanced}}, f(q_1) = f(q_2) \mid f \ \color{blue}{\mathsf{balanced}}) \\[5pt] =\ &P(\text{output }\color{blue}{\mathsf{balanced}}\mid f(q_1) \neq f(q_2), f \ \color{blue}{\mathsf{balanced}})\cdot P(f(q_1) \neq f(q_2) \mid f \ \color{blue}{\mathsf{balanced}}) \\ &\quad +P(\text{output }\color{blue}{\mathsf{balanced}}\mid f(q_1) = f(q_2), f \ \color{blue}{\mathsf{balanced}})\cdot P(f(q_1) = f(q_2) \mid f \ \color{blue}{\mathsf{balanced}}) \\[5pt] =\ &1\cdot \frac{1}{2} + \frac{1}{3} \cdot \frac{1}{2} = \frac{2}{3}, \end{align}

where $P(f(q_1) \neq f(q_2) \mid f \ \color{blue}{\mathsf{balanced}}) = \frac{1}{2}$ because $q_1$ and $q_2$ are sampled independently and, for a balanced $f$, each of $f(q_1)$ and $f(q_2)$ is a uniformly random bit. Likewise, a constant $f$ always gives $f(q_1) = f(q_2)$, so the algorithm outputs $\color{red}{\mathsf{constant}}$ with probability $\frac{2}{3}$ in that case:

\begin{align} P(\text{output }\color{red}{\mathsf{constant}} \mid f\ \color{red}{\mathsf{constant}}) = \frac{2}{3}. \end{align}

Therefore,

\begin{align} &P(\text{output is correct}) = P(\text{output }\color{blue}{\mathsf{balanced}}, f \ \color{blue}{\mathsf{balanced}}) + P(\text{output }\color{red}{\mathsf{constant}}, f\ \color{red}{\mathsf{constant}}) \\ =\ &P(\text{output }\color{blue}{\mathsf{balanced}} \mid f \ \color{blue}{\mathsf{balanced}})\cdot P(f\ \color{blue}{\mathsf{balanced}}) \\ &\quad + P(\text{output }\color{red}{\mathsf{constant}}\mid f\ \color{red}{\mathsf{constant}})\cdot P(f\ \color{red}{\mathsf{constant}}) \\ =\ &\frac{2}{3}\cdot P(f\ \color{blue}{\mathsf{balanced}}) + \frac{2}{3} \cdot P(f\ \color{red}{\mathsf{constant}}) = \frac{2}{3}, \end{align}

regardless of the prior probabilities $P(f\ \color{blue}{\mathsf{balanced}})$ and $P(f\ \color{red}{\mathsf{constant}})$.
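If you want to sanity-check the $2/3$ figure numerically, here is a short Monte Carlo experiment (my addition, not part of the original answer). It assumes the `guess_constant_or_balanced` sketch above is in scope; `make_balanced_f` is an illustrative helper that builds a random balanced function on $n$ bits.

```python
import random
from itertools import product

# Assumes guess_constant_or_balanced from the sketch above is already defined.

def make_balanced_f(n, seed=0):
    """Build a random balanced function on n bits: exactly half the inputs map to 1."""
    inputs = list(product((0, 1), repeat=n))
    random.Random(seed).shuffle(inputs)
    ones = set(inputs[: len(inputs) // 2])
    return lambda x: 1 if x in ones else 0

n, trials = 4, 200_000
cases = [("constant", lambda x: 0, "constant"),
         ("balanced", make_balanced_f(n), "balanced")]
for name, f, truth in cases:
    hits = sum(guess_constant_or_balanced(f, n) == truth for _ in range(trials))
    print(f"{name}: empirical success rate ~ {hits / trials:.3f}")  # both should be ~ 0.667
```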

PSPACEhard

I had the same question that you had, and @PSPACEhard's answer helped me a lot to understand the reasoning. So I'm going to provide another answer with a slightly different approach:

At first sight, you might think that with only two evaluations of $f$, your algorithm should always output "balanced", because there are many more balanced functions than constant ones. However, if you design your algorithm in a certain way, you can ignore the probabilities of the function being balanced vs. constant. So below, I will give the reasoning that would allow you to design the algorithm. Notation: $$ f_b = \text{The event that the function is balanced}\\ f_c = \text{The event that the function is constant}\\ O_b = \text{The event that our algorithm outputs "balanced"}\\ O_c = \text{The event that our algorithm outputs "constant"}\\ Eq = \text{The event that }f(q_1)=f(q_2)\\ Diff = \text{The event that }f(q_1) \neq f(q_2)\\ $$

Now, consider an algorithm that makes two evaluations and then outputs either "balanced" or "constant": $$ P(\text{answer is correct}) = P(O_b,f_b) + P(O_c,f_c) $$ The term $P(O_b,f_b)$ can be written as: $$ P(O_b,f_b) = \\ P(O_b|f_b) \cdot P(f_b) = \\ (P(O_b,Eq|f_b)+P(O_b,Diff|f_b)) \cdot P(f_b) = \\ (P(O_b|Eq,f_b) \cdot P(Eq|f_b) + P(O_b|Diff,f_b) \cdot P(Diff|f_b)) \cdot P(f_b) $$

Now, since we want an algorithm that makes its decisions based only on what was observed, we have: $$P(O_b|Eq, f_b) = P(O_b|Eq)$$ and $$P(O_b|Diff, f_b) = P(O_b|Diff)$$ Also, any non-stupid algorithm should output "balanced" with probability 1 in the case of Diff, thus $P(O_b|Diff) = 1$. Taking the above observations into account (and using $P(Eq|f_b) = P(Diff|f_b) = \frac{1}{2}$ for a balanced $f$, as in the analysis above), the expression becomes: $$ P(O_b,f_b) = \\ (P(O_b|Eq) \cdot P(Eq|f_b) + 1 \cdot P(Diff|f_b)) \cdot P(f_b) = \\ (\frac{1}{2}P(O_b|Eq) + \frac{1}{2}) \cdot P(f_b) $$ With a similar analysis we find: $$ P(O_c, f_c) = \\ P(O_c|Eq) \cdot P(f_c) $$ And so, going back to our original equation: $$ P(\text{answer is correct}) = (\frac{1}{2}P(O_b|Eq) + \frac{1}{2}) \cdot P(f_b) + P(O_c|Eq) \cdot P(f_c) $$ Now, we notice that if we choose our values for $P(O_b|Eq)$ and $P(O_c|Eq)$ carefully (which we are free to do, because this is literally what defines our algorithm's behavior), we can make $(\frac{1}{2}P(O_b|Eq) + \frac{1}{2})=P(O_c|Eq)=\alpha$, and the equation above becomes: $$ P(\text{answer is correct}) = \alpha \cdot (P(f_b) + P(f_c)) = \alpha $$ which no longer depends on $P(f_b)$ or $P(f_c)$!

So let's see what values achieve this design: $$ \frac{1}{2}P(O_b|Eq) + \frac{1}{2} = P(O_c|Eq)\\ \text{but } P(O_c|Eq) = 1 - P(O_b|Eq)\\ \Rightarrow P(O_b|Eq) = \frac{1}{3} \text{ and } P(O_c|Eq) = \frac{2}{3} $$ And $P(\text{answer is correct}) = \alpha = \frac{2}{3}$.
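As a small check of this last step (again, my own addition rather than part of the answer), you can let a computer algebra system solve the design constraint; here `p` stands for $P(O_b|Eq)$ and SymPy is just a convenient choice:

```python
from sympy import Eq, Rational, solve, symbols

p = symbols("p")  # p = P(O_b | Eq): probability of guessing "balanced" when the two outputs agree

# Design constraint: (1/2) p + 1/2 == 1 - p, i.e. alpha is the same whether f is balanced or constant.
p_value = solve(Eq(p / 2 + Rational(1, 2), 1 - p), p)[0]
alpha = 1 - p_value  # alpha = P(O_c | Eq) = overall success probability

print(p_value, alpha)  # prints: 1/3 2/3
```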

And so you get the algorithm described in @PSPACEhard's answer.

Of course, you can reach this conclusion in shorter ways, but that was the pathway that worked for me.