
Everybody seems happy to rely on the AES-NI instruction set, present on Intel CPUs since around 2010, to accelerate AES-256 encryption.

This might be a naive question, but since the exact hardware implementation is an industrial secret, some independent experts must have checked at some point that the hardware-accelerated encryption works as intended and is not faulty or (get your tinfoil hats ready) backdoored. Some independent, reliable research institution, perhaps? Who?

(Edit with additional info.) I can see some benchmarks, for instance here: https://www.wolfssl.com/files/whitepapers/whitepaper_883_cyassl_aesni.pdf

but that is not what I am after. I would like to see the same file encrypted with a software-only implementation and with hardware acceleration, with the test meticulously crafted to use the same key, salt, and whatever other parameters I am not sure of. The two runs should yield identical files when compared bit for bit.
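A bit-for-bit comparison like this is exactly what a known-answer test does. As a sketch, here is a deliberately slow, pure-Python AES-128 reference (reference only, not the OP's setup): the test vector is the published one from FIPS 197 Appendix C.1, and any correct implementation, hardware-accelerated or not, must reproduce it exactly.

```python
# Minimal pure-Python AES-128 single-block encryption (reference only: not
# constant-time, not for production). Checked against the FIPS 197 C.1 vector.

def xtime(a):
    """Multiply by x in GF(2^8) modulo the AES polynomial x^8+x^4+x^3+x+1."""
    a <<= 1
    return (a ^ 0x11b) & 0xff if a & 0x100 else a

def gmul(a, b):
    """Multiplication in GF(2^8), built from repeated xtime."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        a, b = xtime(a), b >> 1
    return r

def make_sbox():
    """Build the S-box from first principles: GF(2^8) inverse + affine map."""
    sbox = []
    for x in range(256):
        inv = next((y for y in range(1, 256) if gmul(x, y) == 1), 0)
        s = 0x63
        for i in range(5):  # s = inv ^ rotl(inv,1) ^ ... ^ rotl(inv,4) ^ 0x63
            s ^= ((inv << i) | (inv >> (8 - i))) & 0xff
        sbox.append(s)
    return sbox

SBOX = make_sbox()
RCON = [0x01, 0x02, 0x04, 0x08, 0x10, 0x20, 0x40, 0x80, 0x1b, 0x36]

def expand_key(key):
    """AES-128 key schedule: 44 four-byte words."""
    w = [list(key[4 * i:4 * i + 4]) for i in range(4)]
    for i in range(4, 44):
        t = list(w[i - 1])
        if i % 4 == 0:
            t = [SBOX[b] for b in t[1:] + t[:1]]  # RotWord then SubWord
            t[0] ^= RCON[i // 4 - 1]
        w.append([a ^ b for a, b in zip(w[i - 4], t)])
    return w

def aes128_encrypt_block(key, block):
    w = expand_key(key)
    s = list(block)  # state, column-major: s[4*c + r] is row r, column c
    ark = lambda s, n: [s[4 * c + r] ^ w[4 * n + c][r]
                        for c in range(4) for r in range(4)]
    sub = lambda s: [SBOX[b] for b in s]
    shift = lambda s: [s[(4 * (c + r) + r) % 16]     # row r rotates left by r
                       for c in range(4) for r in range(4)]
    def mix(s):
        out = []
        for c in range(4):
            a = s[4 * c:4 * c + 4]
            out += [gmul(2, a[0]) ^ gmul(3, a[1]) ^ a[2] ^ a[3],
                    a[0] ^ gmul(2, a[1]) ^ gmul(3, a[2]) ^ a[3],
                    a[0] ^ a[1] ^ gmul(2, a[2]) ^ gmul(3, a[3]),
                    gmul(3, a[0]) ^ a[1] ^ a[2] ^ gmul(2, a[3])]
        return out
    s = ark(s, 0)
    for rnd in range(1, 10):
        s = ark(mix(shift(sub(s))), rnd)
    return bytes(ark(shift(sub(s)), 10))

# FIPS 197 Appendix C.1 known-answer test: this ciphertext is published in
# the standard, so an AES-NI ECB encryption of the same block must match it.
key = bytes(range(16))  # 000102...0f
pt = bytes.fromhex("00112233445566778899aabbccddeeff")
assert aes128_encrypt_block(key, pt).hex() == "69c4e0d86a7b0430d8cdb78070b4c55a"
```

Running a hardware-accelerated implementation on the same key and block and comparing against this output is the bit-per-bit check described above, reduced to a single block.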

Second edit: I see that the words "industrial secret" here have triggered a misunderstanding, with downvotes and even rude language. To be clear: of course the AES algorithm is open to everybody, just as the algorithm for matrix multiplication is in any algebra book. However, the way a CPU manufacturer implements aggressive numerical optimizations, parallelization, and so on, to perform it faster than anyone else, is a secret. There are many examples of this: anyone can compute the inverse square root of a distance, but the initially secret fast way of implementing it let the Quake III engine (1999) run much faster than anything else at the time.

DannyNiu
Mephisto

3 Answers


There do exist FIPS-certified solutions that use AES-NI to perform AES. That means that NIST is satisfied that sufficient tests have been done to validate that the AES-NI implementation conforms to their standard (FIPS 197).

In addition, it is extremely common for AES-NI-based equipment to exchange encrypted messages with non-Intel AES equipment (implemented in either software or hardware). If the AES-NI implementation were even slightly wrong, people would have noticed that these message exchanges failed.

poncho

The question has been updated to reflect that the mistrust is in the hardware implementation techniques rather than in the cipher algorithm itself. This is an understandable and widespread kind of mistrust, especially in the proprietary world (we have another question about x86 CPUs in this regard; I leave its validity to the reader's evaluation).

Again, the best we have to date are vendor self-verifications: as I said in the previous revision of this answer, none of them wants to repeat errors like the Pentium FDIV bug. There are also third-party "hardware hackers", such as the researchers who found Spectre and Meltdown.

I would worry less about a vendor shipping a "backdoored" implementation than about side-channel design errors, because correctness is easy to test. As you said yourself: salt a password to derive the same key, set every other parameter the same (including the encryption nonce and the AEAD header; those are the "technical stuff" you mentioned), run the different implementations, and compare the outputs. I am not aware of any publicly reported correctness failure; that would be too low-hanging a fruit to miss, and if one existed, it would have been caught very soon.
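To make the "pin every parameter" idea concrete, here is a small sketch using only Python's standard library (the password, salt, and iteration count are made-up test values): once the salt and iteration count are fixed, key derivation is fully deterministic, and the same pinning of key, nonce, and AEAD header is what makes two cipher implementations directly comparable byte for byte.

```python
import hashlib

# Made-up test parameters: pinning the salt and iteration count makes the
# derivation deterministic, so any two PBKDF2 implementations must agree.
password = b"correct horse battery staple"
salt = bytes(16)          # fixed all-zero salt: acceptable only for testing
iterations = 100_000

key_a = hashlib.pbkdf2_hmac("sha256", password, salt, iterations, dklen=32)
key_b = hashlib.pbkdf2_hmac("sha256", password, salt, iterations, dklen=32)

# Identical inputs must yield the identical 32-byte key, bit for bit; the
# same check applies to ciphertexts once key, nonce, and AEAD header are fixed.
assert key_a == key_b and len(key_a) == 32
```

In a real cross-check, `key_a` and `key_b` would come from two independent implementations (say, a software-only library and an AES-NI-backed one) rather than from two calls to the same function.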

A more interesting research direction is side-channel data leakage, and some such results do exist.

For example, the abstract from this one:

Over the past few years, the microprocessor industry has introduced accelerated cryptographic capabilities through instruction set extensions. Although powerful and resistant to side-channel analysis such as cache and timing attacks, these instructions do not implicitly protect against power-based side-channel attacks, such as DPA. This paper provides a specific example with Intel’s AES-NI cryptographic instruction set extensions, detailing a DPA, along with results, showing two ways to extract AES keys by simply placing a magnetic field probe beside two capacitors on a motherboard hosting an Intel Core i7 Ivy Bridge micro-processor. Based on the insights of the DPA, methods are then presented on how to mitigate the leaks, in software, providing a dial for diverting the optimal amount of resources required for a prescribed security requirement.

Finally, in the previous revision I invited people not to fall for unsubstantiated technical conspiracy myths, using language that some may have considered rude. I apologize; that was not targeted at anyone participating on this page. On the contrary, the OP's critical thinking demonstrates the courage to question the established order with reason and knowledge.

DannyNiu

As someone who has implemented an AES-128 core twice, once synchronous and once asynchronous, I can say that we basically don't make mistakes, because we have verification requirements that go above and beyond those of most industries. An example of this is my question here, where there was a logic leak and I found it in verification.

Using SystemVerilog, I can easily compare any software behavior to the hardware behavior, and we always have a formal verification sign-off, which means that we are basically as good as our test vectors.

In the case of the FDIV bug, that was someone who went outside the verification procedure and removed some gates, so that the netlists didn't match... which is also why it was easy to fix: it should have been caught in LVS (layout-versus-schematic checking).

b degnan