
Notation: We follow the conventions of the UC framework. We use $\mathcal{A}$ to denote the adversary and $\mathcal{P}$ to denote a party in the protocol.


We focus on two types of corruption in the UC framework, which we restate informally here.

  • Byzantine corruption: $\mathcal{A}$ takes full control of $\mathcal{P}$.
  • Passive corruption: $\mathcal{A}$ sees the internal state of $\mathcal{P}$.

My question is:

Is passive corruption actually equivalent to Byzantine corruption?


Now I explain the reason behind my question.

In the UC framework, $\mathcal{A}$ controls the network. If $\mathcal{A}$ only has the ability to see the internal state of $\mathcal{P}$, it can do the following to approximate "controlling $\mathcal{P}$" in the real-world model:

  1. $\mathcal{A}$ isolates $\mathcal{P}$ from the network.
  2. $\mathcal{A}$ copies the internal state of $\mathcal{P}$ and launches a virtual machine $\widetilde{\mathcal{P}}$.
  3. $\widetilde{\mathcal{P}}$ has the same internal state as $\mathcal{P}$ at the time of corruption, but $\mathcal{A}$ has full control of $\widetilde{\mathcal{P}}$.
  4. $\widetilde{\mathcal{P}}$ impersonates $\mathcal{P}$ for the rest of the execution.

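The four steps above can be sketched in a toy model. This is only an illustration under strong assumptions (all class and attribute names are hypothetical, and the party's next-message function is assumed to be deterministic in its state):

```python
import copy

class Party:
    """Toy model of a protocol party with mutable internal state."""
    def __init__(self, pid, secret):
        self.pid = pid
        self.state = {"secret": secret, "round": 0}

    def next_message(self):
        # Deterministic next-message function: output depends only on state.
        self.state["round"] += 1
        return (self.pid, self.state["round"], self.state["secret"])

# Honest party P.
p = Party("P", secret="k")

# Passive corruption: the adversary reads (copies) P's internal state...
leaked_state = copy.deepcopy(p.state)

# ...then launches a clone P~ initialized with that state, fully under
# the adversary's control.
p_clone = Party("P", secret=None)
p_clone.state = leaked_state

# With P cut off from the network, the clone's messages coincide with P's
# as long as the protocol is deterministic in its state.
assert p_clone.next_message() == p.next_message()
```

The caveat, of course, is whether the last step survives authenticated channels, which is the concern discussed next.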
Some readers may question whether step 4 is possible in the UC framework with authenticated channels. Let me explain my concerns.


In the UC framework, it is non-trivial to know who actually sent a message; we need to rely on "authenticated channels". We use $\mathcal{F}_\mathsf{auth}$ to denote the ideal functionality for such a communication channel.

Via $\mathcal{F}_\mathsf{auth}$, parties in the protocol can verify who sent a message. If $\mathcal{P}$ is Byzantinely corrupted by $\mathcal{A}$, then $\mathcal{A}$ can send messages in the name of $\mathcal{P}$.
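As a rough sketch (a simplification of Canetti's definition, not the formal functionality), $\mathcal{F}_\mathsf{auth}$ delivers each message together with the true sender identity, and lets the adversary substitute content only for corrupted senders:

```python
class FAuth:
    """Toy sketch of an ideal authenticated channel (simplified).

    The sender identity attached to a delivered message is always correct.
    The adversary may replace the message body only if the sender is corrupted.
    """
    def __init__(self):
        self.corrupted = set()

    def send(self, sender, receiver, msg, adversary=None):
        if sender in self.corrupted and adversary is not None:
            msg = adversary(msg)  # adversary substitutes the content
        return (sender, receiver, msg)  # identity is guaranteed either way

f = FAuth()
# Honest sender: the adversary cannot alter the message.
assert f.send("P", "Q", "hello", adversary=lambda m: "forged") == ("P", "Q", "hello")

# After corrupting P, the adversary speaks in P's name.
f.corrupted.add("P")
assert f.send("P", "Q", "hello", adversary=lambda m: "forged") == ("P", "Q", "forged")
```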

But what if $\mathcal{A}$ passively corrupts $\mathcal{P}$?

  • The definition of $\mathcal{F}_\mathsf{auth}$ allows $\mathcal{A}$ to change the messages sent by $\mathcal{P}$ if $\mathcal{A}$ corrupts $\mathcal{P}$. It is not clear whether this differs in the case of passive corruption.
  • Existing realizations of $\mathcal{F}_\mathsf{auth}$ rely on secrets held by $\mathcal{P}$, such as the signing key. If $\mathcal{A}$ merely sees the internal state of $\mathcal{P}$, which includes such secrets, then $\mathcal{A}$ can impersonate $\mathcal{P}$ in these realizations.

In other words, passive corruption does not reduce $\mathcal{A}$'s ability to impersonate $\mathcal{P}$.
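To make the second point concrete, here is a minimal sketch using an HMAC as a stand-in for the signature scheme in such realizations (a real realization of $\mathcal{F}_\mathsf{sig}$ would use a public-key signature scheme; the key and message names here are hypothetical):

```python
import hmac, hashlib

def sign(key, msg):
    """Toy stand-in for a signing algorithm."""
    return hmac.new(key, msg, hashlib.sha256).digest()

def verify(key, msg, tag):
    """Toy stand-in for signature verification."""
    return hmac.compare_digest(sign(key, msg), tag)

# P's long-term authentication secret, part of its internal state.
p_internal_state = {"signing_key": b"p-secret-key"}

# Passive corruption: the adversary merely *reads* the state...
leaked_key = p_internal_state["signing_key"]

# ...but reading the key is enough to produce messages that verify as P's.
forged = b"message P never sent"
tag = sign(leaked_key, forged)
assert verify(p_internal_state["signing_key"], forged, tag)
```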


Back to my question:

Is passive corruption actually equivalent to Byzantine corruption?

And a follow-up question:

How should I model something similar to passive corruption?

Thanks for reading.


Let me add an example to illustrate.

Suppose I can passively corrupt CVS's webserver. I can steal the private keys for their TLS/SSL certificates from the webserver's internal state.

Then I mount a man-in-the-middle attack against a specific client: since I also control the network, I can mimic the CVS website to this client and display wrong pharmacy records.


Reference:

  1. Canetti's paper on the UC framework. "Universally Composable Security: A New Paradigm for Cryptographic Protocols". https://eprint.iacr.org/2000/067.pdf
  2. Canetti's paper on realizing $\mathcal{F}_\mathsf{auth}$ using $\mathcal{F}_\mathsf{sig}$. "Universally Composable Signature, Certification, and Authentication". https://eprint.iacr.org/2003/239.pdf
  3. Canetti-Shahaf-Vald's paper on realizing $\mathcal{F}_\mathsf{auth}$ with a global PKI (a bulletin-board certificate authority) also using $\mathcal{F}_\mathsf{sig}$. "Universally Composable Authentication and Key-exchange with Global PKI". https://eprint.iacr.org/2014/432.pdf
  4. Backes-Pfitzmann-Waidner's paper on realizing $\mathcal{F}_\mathsf{sig}$ with a public-key signing scheme. "A Universally Composable Cryptographic Library". https://eprint.iacr.org/2003/015.pdf
Weikeng Chen

2 Answers


Is passive corruption actually equivalent to Byzantine corruption?

The answer is clearly no; otherwise, cryptographers wouldn't spend so much time developing protocols for active security.

Let me add an example to illustrate.

Suppose I can passively corrupt CVS's webserver. I can steal the private keys for their TLS/SSL certificates from the webserver's internal state.

Then I mount a man-in-the-middle attack against a specific client: since I also control the network, I can mimic the CVS website to this client and display wrong pharmacy records.

The man-in-the-middle attack you specify is an active attack. Yes, in real life, an attacker that has the internal state of the webserver can get the TLS/SSL private key and could then cause all sorts of trouble. But the whole point of the passive model is that we assume the attacker does not do this. The only thing they do is record the plaintext messages and internal state of the parties they have corrupted.
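The distinction can be phrased as a difference in the adversary's interface. This is a toy sketch, not the formal UC definition (class names are hypothetical): a passive adversary only reads traffic, while an active adversary may also rewrite it:

```python
class PassiveAdversary:
    """May read traffic and corrupted parties' state, but never changes anything."""
    def __init__(self):
        self.transcript = []

    def observe(self, msg):
        self.transcript.append(msg)
        return msg  # message is always forwarded unmodified

class ActiveAdversary(PassiveAdversary):
    """Additionally allowed to drop, inject, or replace messages."""
    def observe(self, msg):
        self.transcript.append(msg)
        return b"whatever the adversary likes"  # may deviate arbitrarily

passive, active = PassiveAdversary(), ActiveAdversary()
assert passive.observe(b"hello") == b"hello"   # passive: record only
assert active.observe(b"hello") != b"hello"    # active: record and tamper
```

A protocol proven secure only against the first interface says nothing about what happens under the second.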

Why do we make this assumption? To make our lives easier, of course. We have to start somewhere, so we start with a fairly simple adversary model. It does not assume a completely trusted third party, yet we can still develop secure protocols. That is a win. Whether that win translates into the real world is another question. There have been papers (I'm looking, I'll update when found) that use passively secure protocols and argue that this is okay since, because of the application, there are other mechanisms in place that will force parties to be honest.

So, if you are worried about active attackers, you should use protocols that are secure against active attackers. Passively secure protocols will break royally if the adversary is actually active.

mikeazo

It seems the solution is to separate sessions from processes: the signing oracle/module lives at the OS level. When a session is corrupted, the signing key used for authentication remains secret.
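A minimal sketch of that separation (all names here are hypothetical, and Python attribute privacy is of course no substitute for a real OS/process boundary): the key lives in an OS-level oracle, and the session's state only holds a handle to it, so corrupting the session leaks its state but not the key:

```python
import hmac, hashlib

class SigningOracle:
    """OS-level module: holds the key, exposes only a signing interface."""
    def __init__(self, key):
        self.__key = key  # never handed out to sessions

    def sign(self, msg):
        return hmac.new(self.__key, msg, hashlib.sha256).digest()

class Session:
    """Protocol session: its state contains a handle to the oracle, not the key."""
    def __init__(self, oracle):
        self.state = {"oracle": oracle, "round": 0}

oracle = SigningOracle(b"long-term-key")
session = Session(oracle)

# The session can authenticate its messages...
tag = session.state["oracle"].sign(b"hello")

# ...but corrupting the session reveals only its state, and the key bytes
# do not appear there. (In a real system the isolation would come from the
# OS or a hardware module, not from object encapsulation.)
assert b"long-term-key" not in str(session.state).encode()
```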

Further reading can be found in the following paper by Canetti, Shahaf, and Vald. https://eprint.iacr.org/2014/432.pdf

Weikeng Chen