
Presently, 160 bits of hash output width seems to provide adequate security against brute-force attacks. The recent developments concerning SHA-1 have reduced the effort needed to force collisions by five orders of magnitude, according to the latest Wikipedia edit.

Other cryptographic primitives have evolved to fix similar issues, such as:

  • RSA -> increasing bit count
  • RC4 -> RC4A -> Spritz
  • Whirlpool-0 -> Whirlpool-T -> Whirlpool

I'm specifically using the term fix to mean keeping the SHA-1 160-bit essentials while making internal changes/improvements. Some possible changes (among others) might be:

  • amending the round count, perhaps doubling the rounds to tie in with the existing message schedule (80 x 2 instead of 80 x 1)
  • changes to the message schedule to increase the number of rounds directly
  • something akin to internalising SHA-1(SHA-1(message)) (see the sketch after this list)
  • additional bitwise operations
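For example, here is a minimal sketch in Python of the third idea, internalising a double hash. The wrapper name double_sha1 is my own for illustration; this is not an established construction:

```python
import hashlib

def double_sha1(message: bytes) -> bytes:
    """Hash the message with SHA-1, then hash the digest again.

    The result is still 160 bits wide, so it fits existing schemas,
    but it disagrees with plain SHA-1 on every input.
    """
    inner = hashlib.sha1(message).digest()
    return hashlib.sha1(inner).digest()

print(double_sha1(b"hello").hex())  # 40 hex characters, like SHA-1
```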

Hopefully you get the idea. Why can't we do this so that it forms a plug-in upgrade whereby the existing digest size is kept? My intention is to easily prolong the life of SHA-1 in existing code and databases without huge and far-reaching programmatic changes. I realise that 160 bits will someday become insecure, requiring its ultimate replacement by a wider hash.

Paul Uszak

2 Answers


I have three answers: we can't fix SHA-1, we shouldn't fix SHA-1, and we already did fix SHA-1.

We can't fix it because SHA-1 is a hash standard: many different people can and have implemented it, and they all get the same results. SHA-1 is broken, so we have to replace it and convince everybody to move on to a new standard. A fixed SHA-1 wouldn't be SHA-1.

We shouldn't attempt a minimal fix; we should build the best (fastest, most secure) hash algorithm we can. A minimal fix would keep us tied to the problematic basic structure of SHA-1. If we are switching anyway, we want something that will be secure for many years to come, and as efficient as possible.

That said, we already fixed it. It's called Hardened SHA-1; it is immune to the known collision attacks and similar ones, and it is even backwards compatible with SHA-1, sort of. It detects bit patterns which occur in a collision attack and are very rare in random data. Hardened SHA-1 is identical to SHA-1 on almost every input; it differs only on the infinitesimally small portion of inputs that appear in collision attacks of the published class.

Though there are currently no known attacks on Hardened SHA-1, and it isn't much slower, nobody is under the illusion that SHA-1 has competitive security with SHA-2 or SHA-3. Even if you need to truncate to 160 bits, these are probably safer options (a sketch follows below). If you must have backwards compatibility as a quick fix, Hardened SHA-1 is a great trick, but you should make plans to switch to SHA-3.
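For example, a minimal sketch in Python of the truncation option; the name sha256_160 is my own, and plain truncation (as opposed to a dedicated truncated variant such as SHA-224) is shown purely for illustration:

```python
import hashlib

def sha256_160(message: bytes) -> bytes:
    """SHA-256 truncated to 20 bytes (160 bits).

    Keeps the field width that SHA-1-era schemas expect while
    resting on a hash with no known collision attacks.
    """
    return hashlib.sha256(message).digest()[:20]

print(sha256_160(b"hello").hex())  # same width as a SHA-1 digest
```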

Meir Maor

We can fix SHA-1, but why would we?

SHA-1 is broken. We cannot fix it without changing its output, so compatibility would not be preserved (a sketch of this problem follows below). We could make changes that fix it... for now, and that would also make it inefficient. What would we gain? Perhaps a somewhat easier implementation... That is not much in exchange for patching something that offers only a 160-bit output and would end up very inefficient.
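To make the compatibility point concrete, here is a minimal sketch in Python, with fixed_sha1 standing in for any hypothetical internally modified variant: digests already stored under plain SHA-1 no longer match, so every existing record would need rehashing.

```python
import hashlib

def fixed_sha1(message: bytes) -> bytes:
    # Stand-in for any internally modified variant (here, double hashing).
    return hashlib.sha1(hashlib.sha1(message).digest()).digest()

stored = hashlib.sha1(b"some record").digest()  # digest written years ago
recomputed = fixed_sha1(b"some record")         # digest after the "fix"

print(stored == recomputed)  # False: compatibility is not preserved
```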

Presently, 160 bits of hash output width seems to provide adequate security against brute-force attacks.

"Presently" is the important word here. We must assume it will not remain true in the future.

Other cryptographic primitives have evolved to fix similar issues

RSA is the wrong example here: increasing the key size changes nothing about the algorithm itself, only a parameter that was always expected to vary. There are algorithms with tweakable security parameters, but SHA-1 is not one of them.

Spritz and Whirlpool were fixed, but keep in mind that neither is a mainstream algorithm, unlike SHA-2 and SHA-3. This is because they are inefficient and not well studied.

Also keep in mind how RC4 was broken. At first, the key could be recovered from the first 1024 bytes of the keystream, so people started discarding that initial output (a sketch of this workaround follows below). That worked for some time, until the cipher was broken more severely. Why would we patch an insecure algorithm in the hope of making it secure? Better to start from scratch, learning from previous mistakes.
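As an illustration of that workaround, here is a minimal sketch in Python of RC4 with the first keystream bytes discarded (so-called RC4-drop[n]); the drop count of 1024 simply mirrors the figure above:

```python
def rc4_drop_keystream(key: bytes, drop: int = 1024):
    """Yield RC4 keystream bytes, discarding the first `drop` bytes."""
    # Key-scheduling algorithm (KSA)
    S = list(range(256))
    j = 0
    for i in range(256):
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    # Pseudo-random generation algorithm (PRGA)
    i = j = produced = 0
    while True:
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        out = S[(S[i] + S[j]) % 256]
        if produced >= drop:  # skip the biased initial output
            yield out
        produced += 1

ks = rc4_drop_keystream(b"secret key")
print(bytes(next(ks) for _ in range(8)).hex())
```

The patch changes only where the observable keystream starts; the underlying cipher, and its remaining weaknesses, are untouched.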

There have been attempts to make algorithms future-proof by making their security parameters tweakable (SHA-3/Keccak, Salsa/ChaCha; a sketch of such a parameter follows this list). Maybe in the future those properties will let us fix such algorithms if they get broken, but I can see two major drawbacks:

  • If an algorithm is broken, we would probably have to raise its security parameters at the cost of efficiency, because the original construction has diffusion problems, etc.
  • It is hard to implement a highly tweakable algorithm efficiently.
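As an example of such a tweakable parameter, Keccak's extendable output (exposed in Python's hashlib as SHAKE) lets the caller choose the digest length, so a 160-bit output matching SHA-1's width is just a parameter choice:

```python
import hashlib

msg = b"hello"

# With an extendable-output function the digest length is a caller-chosen
# parameter, not a fixed property of the algorithm.
print(hashlib.shake_256(msg).hexdigest(20))  # 20 bytes = 160 bits
print(hashlib.shake_256(msg).hexdigest(32))  # 32 bytes = 256 bits
```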
axapaxa