⚡ Quick Sort
You're organizing a library. Instead of comparing every book with every other book, you have a smarter strategy: pick a reference book (the "pivot") and put smaller books to its left and larger books to its right. The pivot? It's already in its final spot! That's the power of Quick Sort!
The Smart Organizer
Imagine sorting mail into "before M" and "after M" piles based on a reference letter:
The Strategy
- Pick a pivot: Choose a reference element (e.g., last element)
- Partition: Move smaller elements left, larger elements right
- Pivot is sorted! It's now in its final position
- Recurse: Repeat for left and right subarrays
Key insight: Each partition puts ONE element in its final sorted position!
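Concretely, one round of partitioning (here the Lomuto scheme, with illustrative names) might look like:

```python
def partition(arr, lo, hi):
    """Partition arr[lo..hi] around arr[hi]; return the pivot's final index."""
    pivot = arr[hi]
    i = lo                       # everything before index i is < pivot
    for j in range(lo, hi):
        if arr[j] < pivot:
            arr[i], arr[j] = arr[j], arr[i]
            i += 1
    arr[i], arr[hi] = arr[hi], arr[i]   # drop the pivot between the halves
    return i

books = [7, 2, 9, 4, 1, 5]                 # pivot = 5 (last element)
spot = partition(books, 0, len(books) - 1)
print(books, "pivot index:", spot)         # [2, 4, 1, 5, 9, 7] pivot index: 3
```

Notice that 5 lands at its final position (index 3) while the two halves are still unsorted — exactly the key insight above.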
How Quick Sort Works
The Algorithm
- If array has ≤ 1 element → already sorted! (base case)
- Choose a pivot (commonly last element)
- Partition: rearrange so smaller elements are left, larger are right
- Pivot is now in its final sorted position!
- Recursively sort left subarray
- Recursively sort right subarray
See It in Action
Watch how Quick Sort picks a pivot, partitions around it, and recursively sorts:
Understanding the Process
- 🟡 Yellow (Pivot): Selected pivot element
- 🟣 Purple (Comparing): Element being compared with pivot
- 🟠 Orange (Swapping): Elements being swapped
- 🟢 Green (Sorted): Pivot in final position
- Each partition: O(n) work
- Average depth: log n levels
- Total: O(n) × O(log n) = O(n log n) average
The Code
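A complete sketch in Python, combining the Lomuto partition step with the recursion (all names illustrative):

```python
def partition(arr, lo, hi):
    """Partition arr[lo..hi] around arr[hi]; return the pivot's final index."""
    pivot = arr[hi]
    i = lo
    for j in range(lo, hi):
        if arr[j] < pivot:
            arr[i], arr[j] = arr[j], arr[i]
            i += 1
    arr[i], arr[hi] = arr[hi], arr[i]
    return i

def quick_sort(arr, lo=0, hi=None):
    """Sort arr in place using the last element of each range as pivot."""
    if hi is None:
        hi = len(arr) - 1
    if lo >= hi:                     # 0 or 1 elements: already sorted (base case)
        return
    p = partition(arr, lo, hi)       # pivot lands at its final index p
    quick_sort(arr, lo, p - 1)       # recursively sort the smaller elements
    quick_sort(arr, p + 1, hi)       # recursively sort the larger elements

data = [5, 2, 9, 1, 7, 3]
quick_sort(data)
print(data)  # [1, 2, 3, 5, 7, 9]
```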
Quick Quiz
- Why does each partition put the pivot in its final position?
  After partitioning, all elements to the left of the pivot are smaller, and all elements to the right are larger. This means the pivot is exactly where it belongs in the sorted array! No matter how we sort the left and right subarrays, the pivot won't move.
- What happens if we always pick the smallest element as pivot?
  Disaster! We'd partition into [nothing] and [everything else]. That's n levels of O(n) work = O(n²). This is why pivot selection matters! Always picking the first or last element can be bad if the data is already sorted.
Pivot Selection Strategies
The pivot choice can make or break Quick Sort's performance:
| Strategy | Pros | Cons | Use When |
|---|---|---|---|
| Last Element | Simple, fast | O(n²) on sorted data! | Random data only |
| Middle Element | Better than last | Still can be bad | Unknown data |
| Random Element | Prevents worst case | Randomness overhead | Adversarial data |
| Median-of-Three | Best overall | Slightly more complex | Production use! |
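The median-of-three row can be sketched as a small helper (names illustrative) that samples the first, middle, and last elements and stashes their median at the end, where a last-element partition will pick it up:

```python
def median_of_three(arr, lo, hi):
    """Move the median of arr[lo], arr[mid], arr[hi] to arr[hi] and return it."""
    mid = (lo + hi) // 2
    # sort the three sampled positions so arr[mid] holds their median
    if arr[mid] < arr[lo]:
        arr[lo], arr[mid] = arr[mid], arr[lo]
    if arr[hi] < arr[lo]:
        arr[lo], arr[hi] = arr[hi], arr[lo]
    if arr[hi] < arr[mid]:
        arr[mid], arr[hi] = arr[hi], arr[mid]
    arr[mid], arr[hi] = arr[hi], arr[mid]   # stash the median at hi
    return arr[hi]

data = [1, 2, 3, 4, 5, 6, 7]   # already sorted: worst case for a last-element pivot
print(median_of_three(data, 0, len(data) - 1))  # 4 -> the true median, a perfect split
```

On sorted input the sampled median is the true median, so the partition splits evenly instead of degrading to O(n²).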
Why O(n log n) Average?
Let's break down the complexity:
| Case | Time | Why |
|---|---|---|
| Best | O(n log n) | Pivot always splits evenly |
| Average | O(n log n) | Most pivots are reasonably good |
| Worst | O(n²) | Pivot always smallest/largest (sorted data!) |
| Space | O(log n) | Recursion stack depth |
⚠️ The Worst Case
Unlike Merge Sort, Quick Sort can degrade to O(n²)!
- When: Already sorted data with bad pivot choice
- Why: Each partition only reduces size by 1
- Solution: Use median-of-three or random pivot
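You can watch this degradation by counting comparisons — a sketch (the one-element list is just a mutable counter shared across recursive calls):

```python
import random

def quick_sort_count(arr, lo, hi, counter):
    """Last-element-pivot Quick Sort that counts element comparisons."""
    if lo >= hi:
        return
    pivot, i = arr[hi], lo
    for j in range(lo, hi):
        counter[0] += 1
        if arr[j] < pivot:
            arr[i], arr[j] = arr[j], arr[i]
            i += 1
    arr[i], arr[hi] = arr[hi], arr[i]
    quick_sort_count(arr, lo, i - 1, counter)
    quick_sort_count(arr, i + 1, hi, counter)

n = 200
for label, data in [("sorted", list(range(n))),
                    ("shuffled", random.sample(range(n), n))]:
    c = [0]
    quick_sort_count(data, 0, n - 1, c)
    print(label, "comparisons:", c[0])
# sorted input does n(n-1)/2 = 19,900 comparisons; shuffled typically far fewer
```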
Quick Sort vs Merge Sort
The big question: Which is better?
| Feature | Merge Sort | Quick Sort |
|---|---|---|
| Strategy | "Split evenly, merge carefully" | "Pick pivot, partition smartly" |
| Best Case | O(n log n) | O(n log n) |
| Average Case | O(n log n) | O(n log n) |
| Worst Case | O(n log n) ✅ | O(n²) ⚠️ |
| Space | O(n) - needs extra array | O(log n) - recursion only ✅ |
| Stable | Yes ✅ | No |
| In-Place | No | Yes ✅ |
| Cache Performance | Good | Excellent ✅ |
| In Practice | "Steady and predictable" | "Usually faster!" ✅ |
The Verdict
Quick Sort wins in practice! Here's why:
- ✅ Better cache locality (works on nearby elements)
- ✅ In-place (no extra memory needed)
- ✅ Smaller constant factors
- ✅ With good pivot selection, worst case is rare
But Merge Sort wins when:
- ✅ You need guaranteed O(n log n)
- ✅ You need stability
- ✅ Data might be adversarial
When to Use Quick Sort
✅ Use When:
- Need speed: Fastest in practice for random data
- Limited memory: In-place with O(log n) space
- Cache matters: Excellent cache performance
- Random data: Average case is very likely
- Can choose pivot: Median-of-three available
❌ Avoid When:
- Need guaranteed performance: Can be O(n²)
- Need stability: Quick Sort is unstable
- Data might be sorted: Worst case likely
- Adversarial input: Someone might exploit worst case
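The instability is easy to demonstrate with a key-based variant of the last-element-pivot sort (a sketch; names illustrative). Records with equal keys can come out in a different relative order than they went in:

```python
def quick_sort_by_key(arr, key, lo=0, hi=None):
    """In-place Quick Sort comparing key(x) values (last-element pivot)."""
    if hi is None:
        hi = len(arr) - 1
    if lo >= hi:
        return
    pivot, i = key(arr[hi]), lo
    for j in range(lo, hi):
        if key(arr[j]) < pivot:
            arr[i], arr[j] = arr[j], arr[i]
            i += 1
    arr[i], arr[hi] = arr[hi], arr[i]
    quick_sort_by_key(arr, key, lo, i - 1)
    quick_sort_by_key(arr, key, i + 1, hi)

# letters tag the original order of records with equal numeric keys
records = [(2, 'a'), (1, 'b'), (2, 'c'), (1, 'd')]
quick_sort_by_key(records, key=lambda r: r[0])
print(records)  # [(1, 'd'), (1, 'b'), (2, 'a'), (2, 'c')]
```

The keys end up sorted, but 'b' and 'd' (both key 1) have swapped relative order — that's what "unstable" means in practice.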
🎯 Real-World Use
Quick Sort is everywhere!
- C++ std::sort(): Uses Introsort (Quick Sort that falls back to Heap Sort, with Insertion Sort for small ranges)
- C qsort(): classically implemented as a Quick Sort variant
- Java Arrays.sort(): Dual-pivot Quick Sort for primitives
- Databases: In-memory sorting
Common Misconceptions
❌ Misconception 1: "Quick Sort is always O(n log n)"
Reality: It's O(n²) in the worst case!
- Worst case: Already sorted data with bad pivot
- Example: [1,2,3,4,5] with last element as pivot
- Each partition: Only reduces size by 1
- Total: n levels à O(n) work = O(n²)
Solution: Use median-of-three or random pivot!
❌ Misconception 2: "Quick Sort is always faster than Merge Sort"
Reality: It depends on the data and implementation!
- Random data: Quick Sort usually wins
- Sorted data: Merge Sort wins (if bad pivot)
- Small arrays: Insertion Sort wins!
- Linked lists: Merge Sort wins (no random access)
Best choice: Depends on your specific use case!
❌ Misconception 3: "Quick Sort uses no extra space"
Reality: Recursion uses O(log n) stack space!
- Recursion depth: log n on average
- Each call: Stores local variables on stack
- Worst case: O(n) stack space (unbalanced)
- Still better: Than Merge Sort's O(n) array
Note: Can be made iterative with explicit stack!
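That iterative variant might look like this (a sketch; pushing the larger range first means the smaller one is handled next, which keeps the pending stack at O(log n) entries):

```python
def quick_sort_iterative(arr):
    """Quick Sort with an explicit stack of (lo, hi) ranges instead of recursion."""
    stack = [(0, len(arr) - 1)]
    while stack:
        lo, hi = stack.pop()
        if lo >= hi:
            continue
        pivot, i = arr[hi], lo           # Lomuto partition, last-element pivot
        for j in range(lo, hi):
            if arr[j] < pivot:
                arr[i], arr[j] = arr[j], arr[i]
                i += 1
        arr[i], arr[hi] = arr[hi], arr[i]
        # push the larger subrange first so the smaller is popped next
        if (i - 1 - lo) > (hi - i - 1):
            stack.append((lo, i - 1))
            stack.append((i + 1, hi))
        else:
            stack.append((i + 1, hi))
            stack.append((lo, i - 1))

data = [5, 2, 9, 1, 7, 3]
quick_sort_iterative(data)
print(data)  # [1, 2, 3, 5, 7, 9]
```

Because there is no recursion, even a worst-case (already sorted) input cannot blow the call stack — it just takes O(n²) time.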
❌ Misconception 4: "Pivot selection doesn't matter much"
Reality: It's the difference between O(n log n) and O(n²)!
- Bad pivot: Always smallest/largest → O(n²)
- Good pivot: Near median → O(n log n)
- Last element: Terrible on sorted data
- Median-of-three: Almost always good
Real libraries: Use sophisticated pivot selection!
💡 Key Takeaway
Quick Sort is fast in practice but not guaranteed:
- ✅ Average case: O(n log n) - usually very fast
- ✅ In-place: O(log n) space
- ✅ Cache-friendly: Works on nearby elements
- ⚠️ Worst case: O(n²) - rare with good pivot
- ⚠️ Unstable: Equal elements may swap
Understanding Quick Sort teaches you about trade-offs, pivot selection, and why "average case" matters!
Summary
The 3-Point Quick Sort Guide
- Strategy: Pick pivot, partition, recurse - each partition sorts ONE element
- Complexity: O(n log n) average, O(n²) worst - pivot choice matters!
- Trade-off: Speed and space efficiency vs guaranteed performance
Interview tip: Mention Quick Sort for in-place sorting with good average performance. Explain partition process and why pivot selection matters. Compare with Merge Sort!
What's Next?
You've completed Module 3: Sorting Algorithms! You now understand O(n²) sorts (Bubble, Selection, Insertion) and O(n log n) sorts (Merge, Quick). Next: Searching Algorithms and advanced data structures!