
Java PriorityQueue: How It's Implemented with a Binary Heap

Deep dive into Java's PriorityQueue implementation: understand siftUp, siftDown operations with interactive visualizations. Learn how the JDK uses binary heaps for efficient priority queue operations.

#java #data-structure #heap #priority-queue #visualization #jdk-source-code #algorithm

Why Study PriorityQueue?

PriorityQueue is one of Java’s most powerful yet often misunderstood collections.

Unlike ArrayList or LinkedList, it doesn’t just store elements—it automatically maintains order so you can always access the minimum (or maximum) element in O(1) time.

This makes it the go-to choice for:

  • Task scheduling — always execute the highest priority task first
  • Graph algorithms — Dijkstra’s shortest path, Prim’s MST
  • Stream processing — find top K elements from millions of records
  • Event simulation — process events in chronological order

But how does it achieve O(log n) insertion and removal? The answer lies in its underlying data structure: the binary heap.

🎯 What You’ll Learn

Topic               Description
Heap fundamentals   How arrays become trees
siftUp              How insertion maintains heap property
siftDown            How removal maintains heap property
JDK source code     Actual implementation details
Interactive demo    See operations step by step

📖 Prerequisite: This article assumes basic knowledge of heaps. If you’re new to heaps, check out our Heap Sort Algorithm Guide first!


Quick Heap Recap

A binary heap is a complete binary tree stored in an array. Java’s PriorityQueue uses a min-heap by default, meaning:

Parent ≤ Children (for all nodes)

Array-to-Tree Mapping

For any element at index i:

Relationship   Formula
Parent         (i - 1) / 2
Left child     2 * i + 1
Right child    2 * i + 2

Example: Array [2, 4, 3, 7, 9, 5, 8] represents:

            [2]           ← index 0 (root = minimum)
           /   \
         [4]   [3]        ← indices 1, 2
        /  \   /  \
      [7] [9] [5] [8]     ← indices 3, 4, 5, 6

The minimum is always at index 0. That’s why peek() is O(1)!
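
To make the mapping concrete, here's a tiny throwaway check (the class name IndexDemo is just for illustration) that walks the example array and prints each element next to its parent:

public class IndexDemo {
    public static void main(String[] args) {
        int[] heap = {2, 4, 3, 7, 9, 5, 8};
        for (int i = 1; i < heap.length; i++) {
            int parent = (i - 1) / 2;
            // Min-heap property: every element is >= its parent
            System.out.printf("heap[%d] = %d, parent heap[%d] = %d%n",
                    i, heap[i], parent, heap[parent]);
        }
    }
}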


JDK Source Structure

Let’s peek inside the actual JDK source code. Here’s the core structure:

public class PriorityQueue<E> extends AbstractQueue<E> {
    
    // The heap array - this is where elements live
    transient Object[] queue;
    
    // Number of elements in the queue
    int size;
    
    // Optional comparator for custom ordering
    private final Comparator<? super E> comparator;
    
    // Default initial capacity
    private static final int DEFAULT_INITIAL_CAPACITY = 11;
}

Key Design Decisions

  1. Uses Object[] — Generic arrays can’t be created directly in Java, so it uses Object[] with casts

  2. Default capacity is 11 — An oddly specific number that provides decent initial space

  3. Optional comparator — If null, elements must implement Comparable

  4. Dynamic sizing — Array grows automatically when full
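
A quick sketch of the two ordering modes (class and variable names here are illustrative, not from the JDK):

import java.util.Comparator;
import java.util.PriorityQueue;

public class OrderingDemo {
    public static void main(String[] args) {
        // comparator == null: falls back to natural ordering (Comparable)
        PriorityQueue<Integer> byValue = new PriorityQueue<>();
        byValue.offer(5);
        byValue.offer(1);
        System.out.println(byValue.peek());   // 1

        // comparator != null: custom ordering, Comparable is ignored
        PriorityQueue<String> byLength =
                new PriorityQueue<>(Comparator.comparingInt(String::length));
        byLength.offer("banana");
        byLength.offer("fig");
        System.out.println(byLength.peek());  // fig
    }
}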


siftUp: Insertion Magic

When you call offer(element), the new element is added at the end of the array. But this might violate the heap property!

siftUp fixes this by “bubbling up” the element until it finds its correct position.

The Algorithm

1. Place new element at the end (index = size)
2. Compare with parent:
   - If element < parent → swap and repeat
   - If element ≥ parent → done!

JDK Implementation

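Here is the Comparable path, lightly annotated (siftUpComparable in recent JDKs; a parallel siftUpUsingComparator does the same with a custom comparator). Note one subtlety: the JDK doesn't literally swap at each step; it shifts parents down and writes the new element once at the end:

@SuppressWarnings("unchecked")
private static <T> void siftUpComparable(int k, T x, Object[] es) {
    Comparable<? super T> key = (Comparable<? super T>) x;
    while (k > 0) {
        int parent = (k - 1) >>> 1;      // parent index: (k - 1) / 2
        Object e = es[parent];
        if (key.compareTo((T) e) >= 0)   // heap property holds: stop
            break;
        es[k] = e;                       // shift parent down one level
        k = parent;
    }
    es[k] = key;                         // write the new element once
}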

Visual Walkthrough

Let’s insert 1 into heap [2, 4, 3, 7, 9, 5, 8]:

Step 1: Add at end
[2, 4, 3, 7, 9, 5, 8, 1]
                      ↑ new

Step 2: Compare with parent (7)
1 < 7 → Swap!
[2, 4, 3, 1, 9, 5, 8, 7]


Step 3: Compare with parent (4)  
1 < 4 → Swap!
[2, 1, 3, 4, 9, 5, 8, 7]


Step 4: Compare with parent (2)
1 < 2 → Swap!
[1, 2, 3, 4, 9, 5, 8, 7]
 ↑ new root!

Done! New minimum is 1.
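
You can replay the walkthrough with a stripped-down siftUp over a plain int array (a simplified sketch, not the JDK code):

import java.util.Arrays;

public class SiftUpDemo {
    static void siftUp(int[] heap, int k) {
        int x = heap[k];
        while (k > 0) {
            int parent = (k - 1) / 2;
            if (x >= heap[parent]) break;
            heap[k] = heap[parent];  // shift parent down
            k = parent;
        }
        heap[k] = x;
    }

    public static void main(String[] args) {
        int[] heap = {2, 4, 3, 7, 9, 5, 8, 1};  // 1 just added at the end
        siftUp(heap, 7);
        System.out.println(Arrays.toString(heap));
        // [1, 2, 3, 4, 9, 5, 8, 7]  (matches Step 4 above)
    }
}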

Why siftUp is Correct (Proof)

This is the key question: why is the result still a valid heap after siftUp?

Before insertion, we have a valid min-heap where every parent ≤ its children.

After placing the new element at the end, there’s only ONE place where the heap property might be violated: between the new element and its parent.

Why? Because:

  1. The new element is at a leaf position — it has no children
  2. All other parent-child relationships remain unchanged
  3. The only new relationship is: new element ↔ its parent

siftUp fixes exactly this one potential violation:

If new_element >= parent → No violation, done! ✅

If new_element < parent → Swap them, then check grandparent

After each swap, why is the heap property maintained at this level?

Before swap (valid min-heap):     After swap:
    [P]                               [N]
   /   \                             /   \
 [N]   [S]            →           [P]   [S]

In the original heap, P ≤ S, because S is P's child (that's exactly the min-heap property).

After swap:

  • Is N ≤ P? YES — we swapped specifically because N < P
  • Is N ≤ S? YES — because N < P and P ≤ S, therefore N < P ≤ S

The heap property at this level is restored!

What about P’s new position?

P moves to where N was. On the first swap, that's the last position, a leaf, so P has no children and nothing to violate. On later swaps, P's new children both come from P's original subtree, so they were already ≥ P by the original heap property.

The invariant: We only disturb elements along the path from leaf to root. Sibling subtrees are completely untouched.

💡 Key Insight: siftUp is like inserting into a sorted path. The new element “bubbles up” until it finds its sorted position, pushing larger elements down along the way.

Why siftUp is Efficient

  • Maximum comparisons: log₂(n) — height of tree
  • No extra space: works in-place
  • Short-circuits: stops as soon as heap property is satisfied

siftDown: Removal Magic

When you call poll(), the root (minimum) is removed. But we need to fill the gap!

siftDown handles this by:

  1. Moving the last element to the root
  2. “Sinking” it down until heap property is restored

The Algorithm

1. Save root value (to return)
2. Move last element to root
3. Compare with children:
   - Find the smaller child
   - If element > smaller child → swap and repeat
   - If element ≤ both children → done!

JDK Implementation

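Again the Comparable path, lightly annotated (siftDownComparable in recent JDKs). The n parameter is the heap size after removal; like siftUp, it shifts elements and writes the sinking element only once at the end:

@SuppressWarnings("unchecked")
private static <T> void siftDownComparable(int k, T x, Object[] es, int n) {
    Comparable<? super T> key = (Comparable<? super T>) x;
    int half = n >>> 1;                  // nodes at index >= n/2 are leaves
    while (k < half) {
        int child = (k << 1) + 1;        // left child: 2k + 1
        Object c = es[child];
        int right = child + 1;
        if (right < n &&
            ((Comparable<? super T>) c).compareTo((T) es[right]) > 0)
            c = es[child = right];       // right child is smaller: use it
        if (key.compareTo((T) c) <= 0)   // heap property holds: stop
            break;
        es[k] = c;                       // promote the smaller child
        k = child;
    }
    es[k] = key;                         // write the sinking element once
}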

Visual Walkthrough

Let’s poll from heap [2, 4, 3, 7, 9, 5, 8]:

Step 1: Remove root, move last to root
[8, 4, 3, 7, 9, 5]  (returning 2)
 ↑ new root

Step 2: Compare with children (4, 3)
Smaller child = 3
8 > 3 → Swap!
[3, 4, 8, 7, 9, 5]


Step 3: Compare with only child (5)
8 > 5 → Swap!  
[3, 4, 5, 7, 9, 8]


Step 4: No more children
Done! 8 is now a leaf.
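
The same walkthrough, replayed with a stripped-down siftDown (a simplified sketch, not the JDK code):

import java.util.Arrays;

public class SiftDownDemo {
    static void siftDown(int[] heap, int k, int n) {
        int x = heap[k];
        int half = n / 2;                // nodes at index >= n/2 are leaves
        while (k < half) {
            int child = 2 * k + 1;       // left child
            if (child + 1 < n && heap[child + 1] < heap[child])
                child++;                 // pick the smaller child
            if (x <= heap[child]) break;
            heap[k] = heap[child];       // promote the smaller child
            k = child;
        }
        heap[k] = x;
    }

    public static void main(String[] args) {
        int[] heap = {8, 4, 3, 7, 9, 5};  // last element already moved to root
        siftDown(heap, 0, heap.length);
        System.out.println(Arrays.toString(heap));
        // [3, 4, 5, 7, 9, 8]  (matches the final array above)
    }
}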

Why siftDown is Correct (Proof)

Just like siftUp, let’s answer the key question: why is the result still a valid heap after siftDown?

After poll(), we:

  1. Remove the root (minimum)
  2. Move the last element to the root position

Where might the heap property be violated?

The two subtrees (left and right children of root) are still valid heaps — we only touched the root, not them!

After moving last element to root:
        [L]  ← last element (might be large!)
       /   \
     [4]   [3]  ← still valid heaps
     / \   / \
    ...   ...

The ONLY potential violation is: the new root might be > its children.

siftDown fixes exactly this violation by sinking the element down:

If element <= smaller_child → No violation, done! ✅

If element > smaller_child → Swap with smaller child, repeat

Why swap with the SMALLER child specifically?

This is crucial! Consider:

       [8]         Which child to swap?
      /   \
    [4]   [3]      Smaller = 3

If we swapped with the larger child (4):

       [4]         ❌ WRONG!
      /   \
    [8]   [3]      Now 4 > 3, heap property violated!

If we swap with the smaller child (3):

       [3]         ✅ CORRECT!
      /   \
    [4]   [8]      3 < 4, heap property maintained!

The smaller child is guaranteed to be ≤ its sibling, so promoting it maintains the heap property at this level.

After swapping, why do we only need to continue downward?

       [3]            ← This level is now valid
      /   \
    [4]   [8]         ← 8 might violate with ITS children
    / \   /
  [7] [9] [5]         ← but everything below was untouched

  • The path above our current position is fixed
  • The sibling subtree (the left side) was never touched
  • We only need to fix the path downward from where we placed the element

💡 Key Insight: siftDown is the mirror of siftUp. Instead of bubbling up a potentially-too-small element, we’re sinking down a potentially-too-large element.

Optimization: Finding Smaller Child

The JDK uses a clever check:

if (right < n &&
    ((Comparable<? super T>) c).compareTo((T) es[right]) > 0)
    c = es[child = right];

This finds the smaller child in one comparison, avoiding the need to compare both children with the parent separately.


Interactive Visualization

See siftUp and siftDown in action! Use the controls to:

  • offer() — Add a new element and watch it bubble up
  • poll() — Remove the minimum and watch the replacement sink down

[Interactive demo: Java PriorityQueue - Sift Up & Sift Down. It visualizes how offer() and poll() work internally, showing the heap [2, 4, 3, 7, 9, 5, 8] as both a tree and the backing array, and highlighting sift-up, sift-down, and swap steps as they happen.]
Try these experiments:

  1. Offer several small numbers (1, 2, 3) and watch them compete for the root
  2. Poll repeatedly and observe how the heap restructures
  3. Offer a very large number and notice it stays near the bottom
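
If the demo isn't available, the same experiments run fine on the console:

import java.util.PriorityQueue;

public class ExperimentDemo {
    public static void main(String[] args) {
        PriorityQueue<Integer> pq = new PriorityQueue<>();
        for (int v : new int[]{2, 4, 3, 7, 9, 5, 8}) pq.offer(v);

        pq.offer(1);                    // small number: bubbles to the root
        System.out.println(pq.peek());  // 1

        pq.offer(100);                  // large number: stays near the bottom
        System.out.println(pq.peek());  // still 1

        while (!pq.isEmpty())           // polling drains in ascending order
            System.out.print(pq.poll() + " ");  // 1 2 3 4 5 7 8 9 100
    }
}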

offer() and poll() Operations

Now let’s see the complete offer() and poll() implementations:

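Lightly annotated versions from a recent JDK (field and helper names as in the source; details vary slightly across versions):

public boolean offer(E e) {
    if (e == null)
        throw new NullPointerException();  // nulls are rejected (see Pitfalls)
    modCount++;
    int i = size;
    if (i >= queue.length)
        grow(i + 1);                       // make room if the array is full
    siftUp(i, e);                          // bubble up from the last slot
    size = i + 1;
    return true;
}

@SuppressWarnings("unchecked")
public E poll() {
    final Object[] es;
    final E result;
    if ((result = (E) ((es = queue)[0])) != null) {
        modCount++;
        final int n;
        final E x = (E) es[(n = --size)];  // grab the last element
        es[n] = null;                      // clear its slot for GC
        if (n > 0) {
            final Comparator<? super E> cmp;
            if ((cmp = comparator) == null)
                siftDownComparable(0, x, es, n);       // sink from the root
            else
                siftDownUsingComparator(0, x, es, n, cmp);
        }
    }
    return result;
}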

Operation Summary

Method        Description                   Time Complexity
offer(e)      Add element, sift up          O(log n)
poll()        Remove min, sift down         O(log n)
peek()        Return min without removing   O(1)
size()        Return element count          O(1)
contains(e)   Check if element exists       O(n)
remove(e)     Remove specific element       O(n)

⚠️ Note: contains() and remove(Object) are O(n) because they require scanning the entire heap. Use these sparingly!


Dynamic Array Growth

When the internal array is full, PriorityQueue grows automatically:

private void grow(int minCapacity) {
    int oldCapacity = queue.length;
    // Double size if small, grow by 50% if large
    int newCapacity = oldCapacity + (
        (oldCapacity < 64) 
            ? (oldCapacity + 2) 
            : (oldCapacity >> 1)
    );
    // Check for overflow
    if (newCapacity - MAX_ARRAY_SIZE > 0)
        newCapacity = hugeCapacity(minCapacity);
    queue = Arrays.copyOf(queue, newCapacity);
}

Growth Strategy

Current Capacity   Growth Rate
< 64               Double + 2
≥ 64               50% increase

This balances memory efficiency for small queues with performance for large ones.
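
Starting from the default capacity of 11, the progression looks like this (a quick sketch of the formula above, not JDK code):

public class GrowthDemo {
    public static void main(String[] args) {
        int cap = 11;  // DEFAULT_INITIAL_CAPACITY
        StringBuilder sb = new StringBuilder().append(cap);
        for (int i = 0; i < 6; i++) {
            cap += (cap < 64) ? (cap + 2) : (cap >> 1);
            sb.append(" -> ").append(cap);
        }
        System.out.println(sb);  // 11 -> 24 -> 50 -> 102 -> 153 -> 229 -> 343
    }
}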


Time Complexity Analysis

Why O(log n)?

Both siftUp and siftDown traverse at most one path from root to leaf (or vice versa).

In a complete binary tree with n nodes:

  • Height = ⌊log₂(n)⌋
  • Maximum operations = height

Amortized Analysis for offer()

Operation      Worst Case   Amortized
Array access   O(1)         O(1)
Sift up        O(log n)     O(log n)
Array growth   O(n)         O(1)*

*Array growth is rare and amortized over many insertions.

Comparison with Other Data Structures

Operation               PriorityQueue   Sorted Array   Sorted List
Insert                  O(log n)        O(n)           O(n)
Find min                O(1)            O(1)           O(1)
Remove min              O(log n)        O(n)           O(1)
Build from n elements   O(n)            O(n log n)     O(n log n)

The heap wins for most priority queue use cases!
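
The O(n) build comes from the collection constructor, which heapifies the copied array in place instead of sifting each element up individually:

import java.util.Arrays;
import java.util.List;
import java.util.PriorityQueue;

public class BuildDemo {
    public static void main(String[] args) {
        List<Integer> data = Arrays.asList(9, 4, 7, 1, 8, 2);

        // O(n): the constructor heapifies the whole collection at once
        PriorityQueue<Integer> fast = new PriorityQueue<>(data);

        // O(n log n): n individual sift-ups
        PriorityQueue<Integer> slow = new PriorityQueue<>();
        data.forEach(slow::offer);

        System.out.println(fast.peek() + " " + slow.peek());  // 1 1
    }
}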


Common Pitfalls

1. Null Elements Not Allowed

PriorityQueue<String> pq = new PriorityQueue<>();
pq.offer(null);  // ❌ NullPointerException!

2. Iterator Order ≠ Priority Order

PriorityQueue<Integer> pq = new PriorityQueue<>(Arrays.asList(5, 1, 3));

// ❌ Wrong: Iterator doesn't give priority order!
for (Integer i : pq) {
    System.out.println(i);  // Might print: 1, 5, 3
}

// ✅ Correct: Use poll() for priority order
while (!pq.isEmpty()) {
    System.out.println(pq.poll());  // Prints: 1, 3, 5
}

3. Modifying Elements After Insertion

class Task {
    int priority;
    Task(int priority) { this.priority = priority; }
    // ...
}

PriorityQueue<Task> pq = new PriorityQueue<>(
    Comparator.comparingInt((Task t) -> t.priority)
);

Task task = new Task(5);
pq.offer(task);

task.priority = 1;  // ❌ Heap doesn't know about this change!
// pq.poll() might not return this task first

Solution: Remove, modify, re-insert:

pq.remove(task);
task.priority = 1;
pq.offer(task);

4. Max-Heap Confusion

By default, PriorityQueue is a min-heap. For max-heap:

// Option 1: Reverse comparator
PriorityQueue<Integer> maxHeap = new PriorityQueue<>(
    Collections.reverseOrder()
);

// Option 2: Custom comparator (Integer.compare avoids the overflow risk of b - a)
PriorityQueue<Integer> maxHeap2 = new PriorityQueue<>(
    (a, b) -> Integer.compare(b, a)
);


Key Takeaways

Core Concepts

Concept          Key Point
Data structure   Binary min-heap in an array
siftUp           Bubble new element up after insertion
siftDown         Sink replacement down after removal
Index formulas   parent = (i - 1) / 2, left = 2i + 1, right = 2i + 2

Performance

Operation                     Complexity
offer() / poll()              O(log n)
peek()                        O(1)
contains() / remove(Object)   O(n)

Best Practices

  1. ✅ Use for priority-based processing
  2. ✅ Prefer poll() over iteration for priority order
  3. ✅ Use custom Comparator for complex ordering
  4. ❌ Don’t modify elements after insertion
  5. ❌ Don’t expect iterator to give priority order

💡 The Bottom Line: PriorityQueue achieves O(log n) operations through elegant heap algorithms. Understanding siftUp and siftDown unlocks not just this data structure, but a whole family of heap-based algorithms.