23

On the Sorting Algorithms website, the following claim is made:

The ideal sorting algorithm would have the following properties:

  • Stable: Equal keys aren't reordered.
  • Operates in place, requiring $O(1)$ extra space.
  • Worst-case $O(n\cdot\lg(n))$ key comparisons.
  • Worst-case $O(n)$ swaps.
  • Adaptive: Speeds up to $O(n)$ when data is nearly sorted or when there are few unique keys.

There is no algorithm that has all of these properties, and so the choice of sorting algorithm depends on the application.
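To make property (1) concrete, here is a minimal Python illustration (not part of the original post): Python's built-in `sorted` (Timsort) is stable, so records with equal keys keep their original relative order.

```python
# Records: (name, count). "pear" and "fig" share count 1; "apple" and "plum" share count 3.
records = [("apple", 3), ("pear", 1), ("plum", 3), ("fig", 1)]

# sorted() is guaranteed stable in Python: among equal counts,
# the input order of the records is preserved.
by_count = sorted(records, key=lambda r: r[1])
# → [("pear", 1), ("fig", 1), ("apple", 3), ("plum", 3)]
```

Note that Timsort satisfies (1), (3), and (5) from the list, but not (2): it uses $O(n)$ extra space in the worst case.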

My question is, is it true that

there is no [sorting] algorithm that has all of these properties

and if so, why? What is it about these properties that makes all of them simultaneously impossible to fulfill?

James Faulcon

4 Answers

6

WikiSort and GrailSort are two fairly recent algorithms that are stable, operate in place, and use worst-case $O(n\cdot\lg(n))$ key comparisons. Unfortunately, I don't understand them well enough to know whether they approach $O(n)$ swaps or are adaptive, so I can't say whether they violate your fourth and fifth conditions.

From looking at the paper "Ratio based stable in-place merging" by Pok-Son Kim and Arne Kutzner, linked from the WikiSort GitHub page, Kim and Kutzner claim a 'merge' operation that runs in $O(m(\frac{n}{m} + 1))$ (WikiSort is a variant of Mergesort), but I'm not sure whether that translates to WikiSort performing $O(n)$ swaps overall. GrailSort is claimed to be faster (on the WikiSort GitHub page), so I could imagine that both have worst-case $O(n)$ swaps and are adaptive.
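For intuition about why merge-based sorts can be stable at all, here is a plain stable merge sketched in Python (my own illustration, not Kim and Kutzner's algorithm): on ties it always takes from the left run, so equal keys keep their order. The hard part of their result is doing this with $O(1)$ extra space; this sketch makes no attempt at that and uses $O(n)$ auxiliary memory.

```python
def stable_merge(left, right, key=lambda x: x[0]):
    """Merge two runs already sorted by `key`, preserving the order of equal keys."""
    out, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        # <= (not <) means ties are resolved in favour of the left run,
        # which is what makes the merge stable.
        if key(left[i]) <= key(right[j]):
            out.append(left[i]); i += 1
        else:
            out.append(right[j]); j += 1
    out.extend(left[i:])
    out.extend(right[j:])
    return out
```

For example, merging `[(1, 'a'), (3, 'c')]` with `[(1, 'b'), (2, 'd')]` keeps `(1, 'a')` ahead of `(1, 'b')`.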

If anyone does manage to understand WikiSort and/or GrailSort, I would appreciate them also answering my open question about it.

user834
5

Dijkstra's smoothsort comes close, but isn't stable.
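Heap-based sorts like smoothsort lose stability because sifting can move an element past others with equal keys. A minimal binary-heap heapsort (a standard textbook sketch, not smoothsort itself) is enough to see equal-keyed records getting reordered:

```python
def heapsort(a, key=lambda x: x[0]):
    """In-place max-heap heapsort comparing only `key`. Not stable."""
    def sift(i, n):
        while True:
            l, r, big = 2 * i + 1, 2 * i + 2, i
            if l < n and key(a[l]) > key(a[big]): big = l
            if r < n and key(a[r]) > key(a[big]): big = r
            if big == i:
                return
            a[i], a[big] = a[big], a[i]
            i = big
    n = len(a)
    for i in range(n // 2 - 1, -1, -1):   # build the heap
        sift(i, n)
    for end in range(n - 1, 0, -1):       # repeatedly extract the max
        a[0], a[end] = a[end], a[0]
        sift(0, end)

data = [(1, 'a'), (1, 'b'), (1, 'c')]  # all keys equal, to isolate stability
heapsort(data)
# The keys are "sorted", but the equal-keyed records no longer appear
# in their input order a, b, c.
```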

vonbrand
2

No known algorithm satisfies all of these properties. The properties became sought after as more sorting algorithms were developed. For example, bubble sort (arguably the most primitive sorting algorithm) was most likely non-stable in its first implementations, but was made stable as computer scientists sought to improve it in later implementations. In effect, computer scientists collected the best traits from the best algorithms, and the result is the wish list of desirable traits you have quoted. In reality, it is difficult to have the best of all worlds in anything: not provably impossible, but possibly impossible on our current architectures.
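As an aside on the stability this answer mentions: a textbook bubble sort (sketched below in Python; my illustration, not a specific historical implementation) is stable precisely because it swaps only on a *strict* comparison, so records with equal keys are never swapped past each other.

```python
def bubble_sort(a, key=lambda x: x[0]):
    """In-place bubble sort. Stable because equal keys are never swapped."""
    n = len(a)
    for end in range(n - 1, 0, -1):
        for i in range(end):
            # Strict > : on a tie, no swap happens, so input order survives.
            if key(a[i]) > key(a[i + 1]):
                a[i], a[i + 1] = a[i + 1], a[i]

data = [(2, 'x'), (1, 'a'), (1, 'b')]
bubble_sort(data)
# → [(1, 'a'), (1, 'b'), (2, 'x')]  — 'a' stays ahead of 'b'
```

Using `>=` instead of `>` would break the stability (and do useless extra swaps), which is a one-character example of how easily the property is lost.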

P.S. Use Big-O ($O$) for an asymptotic upper bound, Big-Omega ($\Omega$) for an asymptotic lower bound, and Big-Theta ($\Theta$) when the bound is tight from both sides.

Siggy
2

(Even though this is an old question, I stumbled upon it and so might others.)

There is indeed an algorithm that satisfies (1)–(4) and the second half of (5), so it comes very close to the requirements above. It is described in [1] and combines several tricks invented over the last few decades.

[1]: Franceschini, G. Theory of Computing Systems (2007) 40: 327. https://doi.org/10.1007/s00224-006-1311-1

Sebastian