This page on Wikipedia states:

"Brute-force attacks can be made less effective by obfuscating the data to be encoded making it more difficult for an attacker to recognize when the code has been cracked or by making the attacker do more work to test each guess."

This might make sense, but are there any examples of widely used encryption software doing such a thing?

daniel

2 Answers

Since pointing at any particular piece of software wouldn't show much, I'll instead focus on how this is done and where.

This "obfuscation" comes up most often where passwords are involved. There are plenty of good password-hashing functions that slow an attacker down by "obfuscating" the password, and a brute-force attack then has to run that same deliberately slow function for every guess. We do this because users will always pick weak passwords; that isn't something we can fix. But it is still a stopgap for something inherently broken (users will always choose short, poor passwords). It is far more effective to make a password twice as long than to make the password check take twice as long: increasing a key by ONE BIT doubles security, while increasing the check time by a factor of 2 also only doubles security.
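A minimal sketch of that "slow check" idea, using the standard library's PBKDF2 (the password `"hunter2"` and the iteration counts are illustrative):

```python
# The iteration count is the tunable slowdown: every guess an attacker
# tests must pay the same per-guess cost as the legitimate check.
import hashlib
import os

def hash_password(password: str, salt: bytes, iterations: int) -> bytes:
    # Each call performs `iterations` rounds of HMAC-SHA256.
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)

salt = os.urandom(16)
fast = hash_password("hunter2", salt, 1_000)
slow = hash_password("hunter2", salt, 100_000)

# Same password and salt, different work factors -> different digests,
# and the 100,000-iteration check costs ~100x more per guess.
```

The defender pays this cost once per login; the attacker pays it once per candidate password, which is the whole point.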

Apart from passwords, we almost never use such "obfuscations", because they are not effective at stopping an attacker. Doubling the key length increases security by an unimaginable factor; doubling the encryption work only doubles security!
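A quick sanity check on that arithmetic (the 64-bit starting point is just an example):

```python
# Brute-force cost grows exponentially in key length but only
# linearly in per-guess work.
key_bits = 64
base_cost = 2 ** key_bits              # guesses to search a 64-bit keyspace

double_key_cost = 2 ** (2 * key_bits)  # 128-bit key: 2**64 times more work
double_work_cost = 2 * base_cost       # same key, check twice as slow: 2x work
```

Doubling the key length multiplies the attacker's work by 2^64; doubling the per-guess work multiplies it by exactly 2.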

"If I were inventing things instead of asking questions, I would have my software run a block cipher over the plain data and append the key to the newly encrypted data. I'd have it do this at least twice; now the data would look like uniformly distributed random data (because both the key and the encrypted message are), and the attacker would have to crunch through at least one full decryption of the message for each brute-force attempt."

This would not help you. An attacker would simply take twice as long to decrypt your message. That is not much: computing power grows quickly, so at worst I'd wait a year to break your scheme. On the flip side, you would also take twice as long to encrypt every message. We both end up hit the same.

History likes to repeat itself. The Germans tried to fix the Enigma with ever more complex encoding schemes; it didn't help. RC4 was "fixed" by discarding the first X bytes of keystream; it didn't help in the long run (everything before the fix was compromised, everything after it was safe for some years). It's easy to fall into the pitfall of "I'll just change it a bit and it will be secure again"; the truth is we don't know how long it will stay secure, so it's best to leave broken designs behind and build better, more effective algorithms.

"But they still design systems to be strong against brute-force attacks; that is why people keep making keys longer."

The problem is, you are not even making the key longer. A longer key only makes sense if adding X bits increases security by a far larger margin than X. Your scheme instead makes everything take twice as long for both you and the attacker, which makes no difference in the long run. RSA is a special case: factoring attacks against RSA are inherently faster than brute force, and we are moving away from RSA now because it is not efficient enough. If RSA were cracked further by better factoring algorithms, we would probably abandon it entirely, because we could not make the key large enough to leave the attacker powerless. RSA also differs greatly from symmetric ciphers in efficiency, method of breakage and so on, so comparing the two isn't correct.

axapaxa
Obfuscation may just be an old-fashioned way of describing concepts that expanded on the idea of pre-processing the plaintext in some way, and which survive in current cryptographic systems such as all-or-nothing transforms and initialization vectors in AES, where an IV is described as a:

block of bits that is used by several modes to randomize the encryption and hence to produce distinct ciphertexts even if the same plaintext is encrypted multiple times
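That randomizing effect can be illustrated with a toy stream construction (this is NOT a real cipher, just a stdlib sketch; real systems use AES modes such as CBC or CTR):

```python
# Derive a keystream from key||iv and XOR it with the plaintext.
# A fresh IV per message makes identical plaintexts encrypt differently.
import hashlib
import os

def toy_encrypt(key: bytes, iv: bytes, plaintext: bytes) -> bytes:
    stream = hashlib.sha256(key + iv).digest()
    return bytes(p ^ s for p, s in zip(plaintext, stream))

key = b"sixteen byte key"
msg = b"attack at dawn"

c1 = toy_encrypt(key, os.urandom(16), msg)
c2 = toy_encrypt(key, os.urandom(16), msg)
# Same key, same plaintext, different IVs -> different ciphertexts.
```

XOR is its own inverse, so running `toy_encrypt` again with the same key and IV recovers the plaintext; the IV itself is sent in the clear alongside the ciphertext.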

The list of ciphers that would have benefited from randomizing the plaintext before the rest of the encryption can be found by looking for any that were broken with the assistance of chosen- or known-plaintext attacks, such as the PKZIP stream cipher.

The method suggested in the question is probably flawed and may actually weaken the system: feeding a key and ciphertext back in as the new plaintext may leak information, if it is related to this answer.

So, in summary: an extra step is not needed for encryption like AES, because its substitution-permutation network already randomizes the plaintext and is expensive for the attacker to repeat for every key checked. Earlier encryption that was vulnerable to known-plaintext attacks could have benefited from such a step.

daniel