What do you mean by "in-place"?
A standard merge sort requires you to allocate a buffer equal in length to the input being sorted.
For example, to sort an array of 1000 elements, you must allocate an additional 1000-element scratch array as workspace for merging, before copying the merged result back to the input array.
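To make the buffer concrete, here is a minimal sketch of a standard top-down merge sort (Python; the function and variable names are my own, not from any particular library). Note the single scratch array, `buf`, allocated up front with the same length as the input:

```python
def merge_sort(a):
    """Sort list `a` in place using one O(n) scratch buffer."""
    buf = a[:]  # scratch array equal in length to the input

    def sort(lo, hi):
        if hi - lo <= 1:
            return
        mid = (lo + hi) // 2
        sort(lo, mid)
        sort(mid, hi)
        buf[lo:hi] = a[lo:hi]  # copy the two sorted halves into the buffer
        i, j = lo, mid
        for k in range(lo, hi):
            # take from the left half while it has the smaller (or equal) element
            if j >= hi or (i < mid and buf[i] <= buf[j]):
                a[k] = buf[i]
                i += 1
            else:
                a[k] = buf[j]
                j += 1

    sort(0, len(a))

data = [5, 2, 9, 1, 5, 6]
merge_sort(data)
# data is now [1, 2, 5, 5, 6, 9]
```

The `<=` in the comparison is what makes this stable: when elements are equal, the one from the left half is taken first.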
This is why merge sort has O(n) space complexity. An in-place sorting algorithm, by contrast, rearranges elements within the input array itself and needs no additional memory proportional to the input size. Several algorithms meet this requirement, including insertion sort and heap sort, which have O(1) space complexity.
However, in-place merge sort has O(log n) space complexity. Like quick sort, it is recursive, and its call stack grows to O(log n) frames.
A naive implementation may be 2-3x slower than a standard merge sort. In-place merging requires only a minor increase in comparisons; the real cost is the significant increase in reads and writes, because merging without a buffer involves a great deal of swapping and shifting.
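One simple way to merge in place, sketched below, is to shift a run of elements right by one whenever the head of the right half must move in front of the left half (Python; illustrative names of my own, and this is the naive rotation-based approach, not the only in-place merging technique). The inner shifting loop is exactly the extra read/write traffic described above:

```python
def merge_in_place(a, lo, mid, hi):
    """Merge the sorted runs a[lo:mid] and a[mid:hi] with no extra buffer."""
    while lo < mid and mid < hi:
        if a[lo] <= a[mid]:
            lo += 1  # head of the left run is already in place
        else:
            # a[mid] belongs in front: shift a[lo:mid] right by one slot
            value = a[mid]
            for i in range(mid, lo, -1):
                a[i] = a[i - 1]
            a[lo] = value
            lo += 1
            mid += 1

def merge_sort_in_place(a, lo=0, hi=None):
    """Recursive merge sort using only the O(log n) call stack."""
    if hi is None:
        hi = len(a)
    if hi - lo <= 1:
        return
    mid = (lo + hi) // 2
    merge_sort_in_place(a, lo, mid)
    merge_sort_in_place(a, mid, hi)
    merge_in_place(a, lo, mid, hi)

data = [5, 2, 9, 1, 5, 6]
merge_sort_in_place(data)
# data is now [1, 2, 5, 5, 6, 9]
```

Because the `<=` test only shifts an element from the right run past strictly greater elements, equal elements never cross, which is why a correct implementation stays stable. The shifting makes this variant O(n^2) in the worst case; faster in-place merges exist but are considerably more involved.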
Yes! As long as it is implemented correctly (that is, equal elements are never moved past one another), in-place merge sort preserves the stability of merge sort.
The truth is, you probably wouldn't. The only reason I can fathom is if you're working on an embedded system with very limited memory.
In the vast majority of cases, merge sort or Timsort would be a better choice. Otherwise, this is merely for those who are interested or graduate students who need a topic for their thesis.
This article was written by Pavan, who is serving his notice period at an MNC in Bangalore. He prefers to let his articles speak rather than his image. He is also the creator of the reporter for Protractor Jasmine.