**What do you mean by "in-place"?**

A standard merge sort requires you to allocate a buffer equal in length to the input being sorted.

For example, to sort an array with 1000 elements, you must allocate an additional 1000-element scratch array as workspace for merging, before copying the merged results back to the input array.

This is why merge sort has O(n) space complexity. An in-place sorting algorithm doesn't require allocating any additional memory for sorting. Several algorithms meet this requirement, including insertion sort and heapsort, which have O(1) space complexity.

However, in-place merge sort has O(log n) space complexity. This is because, like quicksort, it is recursive: each level of recursion pushes a frame onto the call stack, and the recursion depth is O(log n).
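To make both costs concrete, here is a minimal sketch of a standard top-down merge sort (the function and variable names are my own, not from any particular library). The `aux` buffer is the O(n) auxiliary array described above, and the recursive calls are what produce the O(log n) stack depth:

```python
def merge_sort(a):
    """Sort list `a` in place using one n-element scratch buffer."""
    aux = a[:]  # the O(n) scratch array
    _sort(a, aux, 0, len(a))
    return a

def _sort(a, aux, lo, hi):
    if hi - lo <= 1:
        return
    mid = (lo + hi) // 2
    _sort(a, aux, lo, mid)   # recursion depth is O(log n)
    _sort(a, aux, mid, hi)
    # Merge the two sorted halves through the scratch buffer.
    aux[lo:hi] = a[lo:hi]
    i, j = lo, mid
    for k in range(lo, hi):
        if i < mid and (j >= hi or aux[i] <= aux[j]):
            a[k] = aux[i]; i += 1
        else:
            a[k] = aux[j]; j += 1
```

Note the `<=` in the merge: taking from the left run on ties is what makes the standard algorithm stable.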

**How does in-place merging affect performance?**

A naive implementation may be 2-3x slower than a standard merge sort. In-place merging requires only a minor increase in comparisons.

Most of the slowdown comes from the significant increase in reads and writes (there is a lot of swapping going on).
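The simplest way to see where those extra writes come from is an insertion-style in-place merge, sketched below under my own naming; real implementations use smarter rotation schemes, but the effect is the same. The comparison count stays close to a normal merge, yet every out-of-order element forces a whole block of the left run to shift over:

```python
def merge_in_place(a, lo, mid, hi):
    """Merge the sorted runs a[lo:mid] and a[mid:hi] with no extra buffer."""
    while lo < mid and mid < hi:
        if a[lo] <= a[mid]:
            lo += 1  # already in order, no data movement
        else:
            v = a[mid]
            # Shift the remaining left run right by one slot --
            # this block move is the source of the extra writes.
            a[lo + 1:mid + 1] = a[lo:mid]
            a[lo] = v
            lo += 1
            mid += 1
```

For example, merging `[1, 4, 7]` and `[2, 3, 9]` stored side by side in one list takes only a handful of comparisons, but moving `2` and `3` into place each shifts the entire remaining left run.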

**Is in-place merge sort stable?**

Yes! As long as it is implemented correctly, in-place merge sort preserves the stability of a standard merge sort.
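A quick check of that claim, using a self-contained recursive in-place merge sort of my own construction (an insertion-style merge, not a tuned implementation). The key point is that a right-run element only moves ahead of a left-run element when it is strictly smaller, so equal keys keep their original relative order:

```python
def merge_sort_in_place(a, key=lambda x: x, lo=0, hi=None):
    """Stable in-place merge sort; `key` extracts the comparison key."""
    if hi is None:
        hi = len(a)
    if hi - lo <= 1:
        return a
    mid = (lo + hi) // 2
    merge_sort_in_place(a, key, lo, mid)
    merge_sort_in_place(a, key, mid, hi)
    # Merge: on ties (<=) the left element stays first, preserving stability.
    while lo < mid and mid < hi:
        if key(a[lo]) <= key(a[mid]):
            lo += 1
        else:
            v = a[mid]
            a[lo + 1:mid + 1] = a[lo:mid]
            a[lo] = v
            lo += 1
            mid += 1
    return a

# Equal keys keep their original relative order:
pairs = [('b', 1), ('a', 2), ('b', 3), ('a', 4)]
merge_sort_in_place(pairs, key=lambda p: p[0])
# -> [('a', 2), ('a', 4), ('b', 1), ('b', 3)]
```

Both `('b', 1)` and `('b', 3)` compare equal by key, and they come out in the same order they went in.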

**Why would you ever use in-place merge sort?**

The truth is, you probably wouldn't. The only reason I can fathom is if you're working with an embedded system with very limited memory.

In the vast majority of cases, merge sort or Timsort would be a better choice. Otherwise, this is merely for the curious, or for graduate students who need a thesis topic.
