
The Path ORAM construction assumes that the data outsourced to the ORAM is split into blocks of size $\Omega(\log^2{N})$. Under this assumption the authors state that their storage overhead is $20N$ and their bandwidth overhead is $\mathcal{O}(\log{N})$.

How would these performance metrics be affected if I do not make this assumption about the block size, e.g., if I assume that the blocks are of constant size (and in particular independent of $N$)? What would the asymptotic/practical server-side storage overhead and the bandwidth overhead be in this case?

Cryptonaut

1 Answer


The smallest block size that is reasonable to consider is $\Omega(\log N)$ bits, since the header of each block in server storage must contain the block's virtual address and its position (according to the position map), each of which already takes $\log N$ bits. Constant-size blocks therefore cannot even carry their own metadata.

The idea behind Path ORAM (and most tree-based ORAMs) is to maintain the position map in a recursive ORAM. The "data units" in the position map are of size $\log N$ bits (a position in $[N]$), so the recursion will not work for a block size of $O(\log N)$: in that case each block holds only $O(1)$ positions, and you do not reduce the number of blocks from one recursion level to the next. For the scheme to actually work, we therefore need a block size of $k\log N$ for non-constant $k$.

For such a block size, the bandwidth overhead is $\mathcal{O}\!\left(\log N + (\log^2 N)/k\right)$. (I know this doesn't align with the original paper, but a better parameterization of the recursion, with the same construction, gives this overhead. See the "Circuit ORAM" paper for more details.)
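To make the accounting behind that expression explicit, here is a sketch of the derivation, assuming (as in the Circuit ORAM parameterization referenced above) that the recursive position-map ORAMs use small blocks of $\Theta(\log N)$ bits, each holding a constant number of positions, so that there are $O(\log N)$ recursion levels:

$$\underbrace{O(B \log N)}_{\text{data ORAM: } O(\log N) \text{ blocks of } B = k\log N \text{ bits}} \;+\; \underbrace{O(\log N)\cdot O(\log N)\cdot O(\log N)}_{\text{recursion: levels}\,\times\,\text{blocks/level}\,\times\,\text{bits/block}} \;=\; O\!\left(B\log N + \log^3 N\right) \text{ bits per access.}$$

Dividing by the block size $B = k\log N$ gives the overhead factor

$$O\!\left(\log N + \frac{\log^2 N}{k}\right),$$

which collapses to the $\mathcal{O}(\log N)$ claimed in the paper once $k = \Omega(\log N)$, i.e. $B = \Omega(\log^2 N)$. As a quick sanity check of the asymptotics (a toy calculation that ignores all constants; the function is purely illustrative):

```python
import math

def bandwidth_overhead_blocks(N, k):
    """Toy estimate of the bandwidth overhead, measured in data blocks of
    k*log(N) bits, under the parameterization above: log N blocks for the
    data tree plus (log N)^2 / k for the recursion. Constants are ignored."""
    log_n = math.log2(N)
    return log_n + log_n ** 2 / k

N = 2 ** 20                                  # one million blocks
print(bandwidth_overhead_blocks(N, 1))       # k = 1 (B = log N):      ~420x
print(bandwidth_overhead_blocks(N, 20))      # k = log N (B = log^2 N): ~40x, the O(log N) regime
```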

T.elMorr