11

Let's say that Alice is the administrator of a group. For each message generated by a group member, Alice signs it with an administrator's private key ($sk$), indicating that the message has been checked by her. One day, Bob takes over from Alice, so Alice sends Bob the $sk$ over a secure channel. The security concern is that, since $sk$ is unchanged, Alice can still sign messages with it. So I wonder if there is any cryptographic technique that can prevent Alice from using this $sk$ after it has been handed over to Bob.

More generally, this question can be described as follows: Is there a way to transfer the ability to sign from A to B while keeping the same public key and preventing A from signing after the transfer is complete? (If needed, it is okay to change $sk$ as long as other requirements are satisfied.)

Z. Chen
  • 185
  • 1
  • 6

3 Answers

17

In PKIs, this situation is usually accomplished by ensuring Alice doesn't actually know the private key. The key is generated and stored in a hardware security module (HSM), which performs operations, such as key generation, encryption and signing, without disclosing the actual key externally.

When Bob takes over, Alice physically transfers the HSM to Bob, giving assurance that Alice no longer controls the keys. This avoids any changes in the public key, which may be difficult to update, such as when it is used by large numbers of embedded/IoT devices.

Further, HSMs implement secret sharing or multi-user access control, requiring that a certain number of passwords or hardware tokens be presented before an operation can take place. These tokens can be physically handed over or reprogrammed, e.g. as employees leave the company.
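The secret-sharing idea behind such k-of-n access policies can be illustrated with a toy Shamir scheme over a prime field: any k shareholders can reconstruct the secret, while k-1 learn nothing. This is a minimal sketch with illustrative parameters, not how a real HSM stores keys.

```python
# Toy Shamir secret sharing: split a secret into n shares so that
# any k of them reconstruct it (parameters are illustrative only).
import random

P = 2**127 - 1  # a Mersenne prime, large enough for a demo secret

def split(secret, k, n):
    """Split `secret` into n shares; any k of them reconstruct it."""
    coeffs = [secret] + [random.randrange(P) for _ in range(k - 1)]
    def f(x):
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 over GF(P)."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = (num * -xj) % P
                den = (den * (xi - xj)) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret

shares = split(123456789, k=3, n=7)
print(reconstruct(shares[:3]))   # any 3 shares recover the secret
print(reconstruct(shares[2:5]))  # a different subset works too
```

Real HSMs combine this kind of policy with tamper-resistant hardware, so the reconstructed key never appears outside the device.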

The security of this scheme depends on the HSM's resistance to cryptographic, logical, and physical attacks.

user71659
  • 308
  • 1
  • 9
13

Sign a message revoking Alice's old public key and certifying Bob's newly generated key. Include a timestamp from some timestamping authority, along with related metadata, in the message.

There's no way for the public key to stay the same, unless some CA issues certificates for Alice and Bob, and the participants trust that CA's public key and the certificates it issues.


Incorporating some relevant comments.

There is a problem: if Alice somehow conceals the revocation message and the new public key from the participants, they will reject Bob's future messages. There's also the problem that Alice can forge back-dated messages that participants would accept as valid old messages.

For the second part, a timestamping authority needs to be established. For the first part, I can't think of any solution. Maybe we should require that all messages be signed by at least some minimum number of people (e.g. 5 out of 7 must sign messages and revocation notices).
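The k-of-n acceptance rule above can be sketched as a simple check that enough known admins vouched for a message. In this toy version, HMAC stands in for real digital signatures (in practice each admin would hold a private key and publish a public key); all names and the 5-of-7 threshold are illustrative.

```python
# Toy k-of-n acceptance: a message (or revocation notice) is accepted
# only if at least K of the N known admins vouch for it.
# HMAC stands in for real signatures; names are illustrative.
import hmac, hashlib

ADMIN_KEYS = {f"admin{i}": f"secret-{i}".encode() for i in range(7)}
K = 5  # threshold: 5 of 7

def vouch(admin, key, msg):
    """An admin 'signs' the message with their own key."""
    return admin, hmac.new(key, msg, hashlib.sha256).hexdigest()

def accepted(msg, vouchers):
    """Accept only if >= K distinct known admins produced valid tags."""
    valid = set()
    for admin, tag in vouchers:
        if admin in ADMIN_KEYS:
            expect = hmac.new(ADMIN_KEYS[admin], msg, hashlib.sha256).hexdigest()
            if hmac.compare_digest(tag, expect):
                valid.add(admin)
    return len(valid) >= K

msg = b"revoke old admin key"
sigs = [vouch(a, k, msg) for a, k in list(ADMIN_KEYS.items())[:5]]
print(accepted(msg, sigs))      # 5 valid vouchers: accepted
print(accepted(msg, sigs[:4]))  # only 4: rejected
```

With such a rule, no single former admin (including Alice) can unilaterally issue or suppress a revocation notice.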

DannyNiu
  • 10,640
  • 2
  • 27
  • 64
5

If an administrative message is identified just by being signed with a specific private key, you can't prevent any entity holding a copy of that key from creating valid administrative messages.

If the system is not yet deployed (i.e. Alice doesn't know the private key for administrative messages yet), you might be able to deploy a trusted "signing service" that is the only entity able to sign valid administrative messages. Alice and Bob can use separate authentication at that signing service (different API keys, different OAuth tokens, different private keys) in whatever way is appropriate for your system. Since this authentication of Alice or Bob is only used at the signing service, the recipients don't need to be aware of who requested the signature. This is basically the same cryptographic idea as the HSM, but with the "signing service" in a (cloud) service instead of a hardware module.

In the long run, on the other hand, systems that do not allow replacement of a "master signing key" often get into serious trouble as soon as the key leaks or cryptographic advances require a longer key length.

For systems that are expected to be deployed at large scale, I highly recommend having a key-replacement plan. A current example of a system that can likely recover from a breach is Intel's SGX enclave system. Due to the ÆPIC Leak bug, the master key of Intel's attestation system leaked, so anyone can forge attestation messages claiming that a remote system is running uncompromised code. Intel had designed a procedure called "TCB recovery" into their code-attestation system that allows enrolling new keys that can be trusted again. Without a procedure like that, SGX would have been completely useless by now.