
I constantly see cryptographic engineers praising the virtues of domain separation, and papers describing vulnerabilities in real-world protocols frequently trace them back to domain separation failures. The core issue seems to be: you should really be using a different hash function in each context, but if you're careful you can "clone" a single hash function into multiple independent hash functions (e.g., https://eprint.iacr.org/2020/241).
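For concreteness, here is a minimal sketch (in Python; the hash choice and context labels are just my own illustration) of prefix-based cloning: a length-prefixed context string is prepended to every input, so inputs from different contexts can never collide. If I understand correctly, this is one of the cloning methods the linked paper analyzes.

```python
import hashlib

def cloned_hash(context: str, message: bytes) -> bytes:
    """One global hash "cloned" into per-context hash functions.

    The length prefix makes the encoding of (context, message)
    unambiguous, so distinct contexts query disjoint input sets.
    """
    ctx = context.encode()
    return hashlib.sha3_256(len(ctx).to_bytes(8, "big") + ctx + message).digest()

# Two derived functions that behave independently across contexts:
h1 = cloned_hash("protocol-A/commitment", b"m")
h2 = cloned_hash("protocol-A/signature", b"m")
assert h1 != h2
```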

Relatedly, UC security does not work well with random oracles: if a security proof relies on programming the random oracle or observing its queries, then the protocol cannot be composed with other protocols that share the same oracle. But it seems to be folklore that if you restrict all the random oracles to be local, i.e., give each protocol its own random oracle, then the security proofs go through just fine and compose cleanly.
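Concretely, what I have in mind is that each protocol session with identifier $\mathsf{sid}$ carves its "own" oracle out of a single global oracle $H$ via

$$H_{\mathsf{sid}}(x) := H(\mathsf{sid} \,\|\, x),$$

so that, assuming the encoding of $\mathsf{sid} \,\|\, x$ is unambiguous, distinct sessions query disjoint parts of $H$'s domain and each $H_{\mathsf{sid}}$ looks like an independent random oracle.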

It seems to me that these two concepts are describing exactly the same thing. But I cannot find a single reference that spells this out, with a proof or justification of the form "if you do domain separation correctly, then you get UC security even with only local random oracles". Surely someone has done this before!? In "The Wonderful World of Global Random Oracles" the authors instead add restricted programmability and observability properties to the random oracle to make security easier to prove, but as far as I can tell theirs is still a completely global random oracle, and they have sidestepped domain separation altogether.

Is there any literature explicitly proving that domain separation works as an instantiation of local random oracles in a UC framework?

Sam Jaques
