Goldreich and Ostrovsky show that any ORAM algorithm must have bandwidth cost $\Omega(\log N)$, where $N$ is the total number of blocks outsourced. This is stated as Theorem C of this paper, but no proof of the theorem is given there. Are there other papers that have proved this theorem?
1 Answer
See Theorem 6, pages 38–39.
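For intuition, here is a condensed version of the counting argument behind the bound. This is my own paraphrase in the standard "balls and bins" abstraction (blocks are moved around but never computed on), with $c$ denoting how many blocks the client can hold locally; it is not the paper's exact statement. A sequence of $t$ logical accesses has $N^t$ possibilities, and obliviousness forces a single physical access pattern of length $q$ to be consistent with all of them. At each physical probe the client can write back at most one of its $c$ held blocks (or none), so a fixed pattern can support at most $(c+1)^q$ distinct behaviors, giving

$$(c+1)^q \ge N^t \quad\Longrightarrow\quad q \ \ge\ \frac{t \log N}{\log(c+1)} \ =\ \Omega(t \log N) \;\text{ for constant } c,$$

i.e. an $\Omega(\log N)$ bandwidth blowup per logical access.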
Also, this lower bound ignores any ability of the "RAM" to perform computation. Typically, there are two application scenarios for ORAM:
1) A literal processor communicating along a literal CPU bus to a literal stick of RAM
2) A client communicating over the internet to a cloud server
In the latter case, it makes sense for the cloud server to provide computational resources to 'speed up' the oblivious simulation. Two works have shown how to achieve $O(1)$ bandwidth overhead, pushing the $\Omega(\log N)$ requisite overhead onto the server's computation instead (see the toy sketch after the links):
1) http://eprint.iacr.org/2014/153.pdf
2) http://eprint.iacr.org/2015/005.pdf
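To make the division of labor concrete, here is a minimal and deliberately insecure Python sketch of the idea: the client sends an $O(1)$-size instruction (a seed plus a few indices) and the server performs the multi-block data movement locally, instead of the client downloading and re-uploading those blocks. The `Server`/`Client` classes and the seed-driven shuffle are my own toy constructions, not the protocols from the linked papers; in particular, those papers hide the permutation from the server (e.g., under homomorphic encryption), whereas this toy leaks it completely.

```python
import random

N = 16  # number of outsourced blocks (toy size)

class Server:
    """Stores the blocks and performs bulk data movement locally."""
    def __init__(self, blocks):
        self.blocks = list(blocks)

    def read(self, slot):
        # One block crosses the channel per logical access.
        return self.blocks[slot]

    def shuffle(self, seed, slots):
        # The multi-block data movement happens server-side; only `seed`
        # and `slots` crossed the channel, never the blocks themselves.
        # (In the linked papers this step runs under homomorphic
        # encryption, so the server never learns the permutation.)
        values = [self.blocks[s] for s in slots]
        random.Random(seed).shuffle(values)
        for s, v in zip(slots, values):
            self.blocks[s] = v

class Client:
    """Keeps a position map and sends O(1)-size messages per access."""
    def __init__(self, server):
        self.server = server
        self.pos = list(range(N))  # pos[i] = server slot of logical block i

    def access(self, i):
        block = self.server.read(self.pos[i])    # download: one block
        seed = random.randrange(2**32)
        slots = random.sample(range(N), k=4)     # ~log N slots to move
        self.server.shuffle(seed, slots)         # upload: seed + indices
        # Replay the same pseudorandom permutation on the position map
        # (cheap index bookkeeping, no block data) to stay consistent.
        order = list(range(len(slots)))
        random.Random(seed).shuffle(order)
        old_to_new = {slots[order[k]]: slots[k] for k in range(len(slots))}
        self.pos = [old_to_new.get(s, s) for s in self.pos]
        return block

server = Server([f"block-{i}" for i in range(N)])
client = Client(server)
assert client.access(3) == "block-3"
print(client.access(3))  # still finds block 3 after the shuffle
```

The point of the sketch is only to show where the $\Omega(\log N)$ cost lives: the server still touches many blocks per access, but the channel carries just one block plus a constant-size instruction.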
Also, very recent work based on program obfuscation allows you to implement a "one-shot ORAM," i.e. ORAM with a single round of communication and asymptotically succinct space/time for the server.