LFU Cache - Complete Guide with Interactive Visualization & LRU Comparison
Master the LFU (Least Frequently Used) Cache data structure with interactive visualizations. Learn O(1) get/put operations using HashMap + frequency buckets. Compare LFU vs LRU caching strategies with real-world examples.
The LFU (Least Frequently Used) Cache is an advanced caching strategy that tracks how often each item is accessed. Unlike LRU Cache which evicts based on recency, LFU evicts the least frequently accessed items first.
Why LFU Matters
LFU is preferred in scenarios where access frequency matters more than recency:
| Use Case | Why LFU Works Better |
|---|---|
| CDN caching | Popular content stays cached even if not accessed recently |
| Database query caching | Frequently-run queries remain cached |
| DNS caching | Popular domains stay in cache |
| CPU cache prefetching | Frequently accessed memory addresses prioritized |
LFU vs LRU: The Key Difference
Consider this scenario:
Cache capacity: 2
Operations: put(1,A), get(1), get(1), get(1), put(2,B), put(3,C)
LRU behavior (based on “recency”):
put(1,A): [1:A]
get(1) x3: [1:A] ← key 1 is most recent
put(2,B): [2:B, 1:A] ← key 2 becomes most recent, key 1 becomes older
put(3,C): Evicts key 1! ← because key 1 was accessed longest ago
Result: [3:C, 2:B]
LFU behavior (based on “frequency”):
put(1,A): [1:A(freq=1)]
get(1) x3: [1:A(freq=4)] ← key 1 accessed 4 times
put(2,B): [1:A(freq=4), 2:B(freq=1)]
put(3,C): Evicts key 2! ← because key 2 has lowest frequency (freq=1)
Result: [1:A, 3:C]
The core difference is now crystal clear:
- LRU evicts key 1: even though it was accessed 4 times, keys 2 and 3 were touched more recently, making key 1 the least recently used
- LFU keeps key 1: its access frequency (4 times) is much higher than key 2's (1 time)
💡 Simple rule: LRU only asks "when was the last access?"; LFU asks "how many accesses in total?"
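The LRU half of this trace can be reproduced with Java's built-in `LinkedHashMap` in access order; a minimal sketch (the `SimpleLRU` class name is just for illustration):

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Minimal LRU cache via LinkedHashMap with accessOrder = true.
class SimpleLRU<K, V> extends LinkedHashMap<K, V> {
    private final int capacity;
    SimpleLRU(int capacity) {
        super(16, 0.75f, true); // accessOrder = true: get() refreshes recency
        this.capacity = capacity;
    }
    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        return size() > capacity; // evict the LRU entry when over capacity
    }
}

class LruTraceDemo {
    public static void main(String[] args) {
        SimpleLRU<Integer, String> lru = new SimpleLRU<>(2);
        lru.put(1, "A");
        lru.get(1); lru.get(1); lru.get(1); // frequency is irrelevant to LRU
        lru.put(2, "B");
        lru.put(3, "C"); // evicts key 1: least recently used, despite 4 accesses
        System.out.println(lru.containsKey(1)); // false
        System.out.println(lru.keySet());       // [2, 3]
    }
}
```

Running the same operations through an LFU cache would instead evict key 2, as traced above.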
Core Intuition: HashMap + Frequency Buckets
LFU requires tracking:
- O(1) lookup - HashMap from key to node
- O(1) frequency tracking - Each node stores its frequency
- O(1) eviction - Find the least frequent, and among those, the least recently used
┌─────────────────────────────────────────────────────────────────────────┐
│ keyMap │
│ key → Node pointer (O(1) lookup) │
│ │
│ ┌───┐ ┌───┐ ┌───┐ ┌───┐ │
│ │ 1 │ │ 2 │ │ 3 │ │ 4 │ │
│ └─┬─┘ └─┬─┘ └─┬─┘ └─┬─┘ │
│ │ │ │ │ │
│ ▼ ▼ ▼ ▼ │
│ ┌─────────────────────────────────────────────────────────────────────┐ │
│ │ freqMap │ │
│ │ freq → DoublyLinkedList (ordered by LRU within each frequency) │ │
│ │ │ │
│ │ freq=1: [4:D] ⟷ [2:B] ← minFreq (evict from here) │ │
│ │ ↑ ↑ │ │
│ │ MRU LRU │ │
│ │ │ │
│ │ freq=2: [3:C] │ │
│ │ │ │
│ │ freq=4: [1:A] │ │
│ └─────────────────────────────────────────────────────────────────────┘ │
└─────────────────────────────────────────────────────────────────────────┘
When evicting: Find minFreq bucket (freq=1), evict the LRU entry (key 2).
Data Structures Needed:
- `keyMap`: HashMap<key, Node> - O(1) key lookup
- `freqMap`: HashMap<freq, DoublyLinkedList> - O(1) frequency bucket access
- `minFreq`: Track the minimum frequency for O(1) eviction
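A sketch of the underlying building blocks, using the `Node` and `DLinkedList` names that appear later in this article. Sentinel head/tail nodes keep insertion and removal branch-free:

```java
// Building blocks matching the diagram above: a cache node carrying its own
// frequency, and a doubly linked list with sentinel head/tail so that
// insertion and removal are O(1) given a node pointer.
class Node {
    int key, value, freq = 1;
    Node prev, next;
    Node(int key, int value) { this.key = key; this.value = value; }
}

class DLinkedList {
    final Node head = new Node(0, 0); // sentinel: head.next is the MRU end
    final Node tail = new Node(0, 0); // sentinel: tail.prev is the LRU end
    int size = 0;
    DLinkedList() { head.next = tail; tail.prev = head; }

    void addFirst(Node node) {        // insert at the MRU end
        node.next = head.next; node.prev = head;
        head.next.prev = node; head.next = node;
        size++;
    }
    void remove(Node node) {          // unlink in O(1)
        node.prev.next = node.next; node.next.prev = node.prev;
        size--;
    }
    Node removeLast() {               // pop the LRU entry (used for eviction)
        Node last = tail.prev;
        remove(last);
        return last;
    }
}
```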
Interactive Visualizer
Watch how the LFU Cache tracks access frequency and evicts based on usage patterns:
Compare with LRU Cache
See how LRU Cache behaves differently - it only tracks recency, not frequency:
Problem: LFU Cache (LC 460)
Problem: Design and implement a data structure for a Least Frequently Used (LFU) cache.
Implement the LFUCache class:
- `LFUCache(int capacity)` - Initialize with positive capacity
- `int get(int key)` - Return value if it exists (increases frequency), else return -1
- `void put(int key, int value)` - Update or insert. If capacity is exceeded, evict the least frequently used item. If there's a tie, evict the least recently used among them.
Both operations must run in O(1) average time.
Algorithm Steps
- Initialize: Create keyMap, freqMap, and set minFreq = 0
- get(key):
- If key not in keyMap → return -1
- Else → increase frequency, move to new freq bucket, return value
- put(key, value):
- If key exists → update value, increase frequency
- If key doesn’t exist:
- If at capacity → evict from minFreq bucket (LRU entry)
- Create new node with freq=1 → insert to freq=1 bucket, set minFreq=1
Code Template
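One possible self-contained Java template following the steps above (the `touch` helper name and nested-class layout are illustrative choices, not the only way to structure it):

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of LC 460: two HashMaps + one doubly linked list per frequency.
class LFUCache {
    private final Map<Integer, Node> keyMap = new HashMap<>();
    private final Map<Integer, DLinkedList> freqMap = new HashMap<>();
    private final int capacity;
    private int minFreq = 0;

    private static class Node {
        int key, value, freq = 1;
        Node prev, next;
        Node(int k, int v) { key = k; value = v; }
    }

    private static class DLinkedList {
        final Node head = new Node(0, 0), tail = new Node(0, 0); // sentinels
        int size = 0;
        DLinkedList() { head.next = tail; tail.prev = head; }
        void addFirst(Node n) { n.next = head.next; n.prev = head; head.next.prev = n; head.next = n; size++; }
        void remove(Node n) { n.prev.next = n.next; n.next.prev = n.prev; size--; }
        Node removeLast() { Node last = tail.prev; remove(last); return last; }
    }

    public LFUCache(int capacity) { this.capacity = capacity; }

    public int get(int key) {
        Node node = keyMap.get(key);
        if (node == null) return -1;
        touch(node);                      // bump frequency, move buckets
        return node.value;
    }

    public void put(int key, int value) {
        if (capacity <= 0) return;
        Node node = keyMap.get(key);
        if (node != null) {               // existing key: update + bump freq
            node.value = value;
            touch(node);
            return;
        }
        if (keyMap.size() == capacity) {  // full: evict LRU of minFreq bucket
            Node evicted = freqMap.get(minFreq).removeLast();
            keyMap.remove(evicted.key);
        }
        node = new Node(key, value);      // new entries always start at freq=1
        keyMap.put(key, node);
        freqMap.computeIfAbsent(1, k -> new DLinkedList()).addFirst(node);
        minFreq = 1;
    }

    // Move node from its old frequency bucket to freq+1, fixing minFreq.
    private void touch(Node node) {
        DLinkedList oldList = freqMap.get(node.freq);
        oldList.remove(node);
        if (node.freq == minFreq && oldList.size == 0) minFreq++;
        node.freq++;
        freqMap.computeIfAbsent(node.freq, k -> new DLinkedList()).addFirst(node);
    }
}
```

The worked example below can be replayed directly against this class.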
Worked Example
Let’s trace through: LFUCache(2), then put(1,10), put(2,20), get(1), put(3,30), get(2), get(3)
Step 1: LFUCache(2)
freqMap: empty | keyMap: {} | minFreq: 0
Step 2: put(1, 10)
freqMap:
freq=1: [1:10]
↑
minFreq: 1
Step 3: put(2, 20)
freqMap:
freq=1: [2:20] ⟷ [1:10] ← both have freq=1
↑ ↑
MRU LRU
minFreq: 1
Step 4: get(1) → returns 10
freqMap:
freq=1: [2:20] ← only key 2 now
freq=2: [1:10] ← key 1 moved up!
minFreq: 1 (still has entries)
Step 5: put(3, 30) → Must evict! Who goes?
Key 1: freq=2
Key 2: freq=1 ← LOWEST! Evict this one!
freqMap:
freq=1: [3:30] ← new entry at freq=1
freq=2: [1:10]
minFreq: 1
❌ key 2 evicted (it was least frequently used)
Step 6: get(2) → returns -1 (was evicted!)
Step 7: get(3) → returns 30
freqMap:
freq=2: [3:30] ⟷ [1:10] ← key 3 moved to freq=2
minFreq: 2 (freq=1 bucket now empty)
LFU vs LRU: Detailed Comparison
| Aspect | LRU (Least Recently Used) | LFU (Least Frequently Used) |
|---|---|---|
| Eviction Basis | Time since last access | Access count |
| Complexity | O(1) with 1 HashMap + 1 DLL | O(1) with 2 HashMaps + freq DLLs |
| Space Overhead | Lower | Higher (freq tracking) |
| Scan Resistance | Poor (full scan evicts everything) | Good (frequently used items survive) |
| One-time Access | Evicts older items | Keeps older but popular items |
| Burst Access | Favors recently accessed | May evict recently accessed if infrequent |
| Implementation | Simpler | More complex |
When to Use LRU
- Temporal locality is important (recently accessed = likely accessed again)
- Simpler implementation needed
- Memory overhead is a concern
- Working set changes frequently
When to Use LFU
- Frequency matters more than recency
- Popular items should stay cached even if not accessed recently
- Working set is relatively stable
- Scan resistance is important
Time & Space Complexity
| Operation | Time | Space |
|---|---|---|
| get(key) | O(1) | - |
| put(key, value) | O(1) | - |
| Overall Space | - | O(capacity) |
Why O(1)?
- `keyMap` provides O(1) key lookup
- `freqMap` provides O(1) frequency bucket access
- Each doubly linked list provides O(1) insertion/removal given a node pointer
- `minFreq` provides O(1) access to the eviction candidate bucket
Critical Points & Common Mistakes
1. Updating minFreq Correctly
When does minFreq change?
- When inserting a new key: minFreq = 1 (always)
- When updating a frequency: check whether the old frequency bucket becomes empty
```java
private void updateFreq(Node node) {
    int oldFreq = node.freq;
    DLinkedList oldList = freqMap.get(oldFreq);
    oldList.remove(node);
    // CRITICAL: only bump minFreq if:
    // 1. the node was in the minFreq bucket
    // 2. AND that bucket is now empty
    if (oldFreq == minFreq && oldList.size == 0) {
        minFreq++; // the next minFreq is guaranteed to be oldFreq + 1
    }
    node.freq++;
    freqMap.computeIfAbsent(node.freq, k -> new DLinkedList()).addFirst(node);
}
```
Common Mistake: Forgetting to check if the bucket is empty before incrementing minFreq.
2. Tie-Breaking with LRU
When multiple keys have the same minimum frequency, evict the least recently used among them.
freq=1: [C] ⟷ [B] ⟷ [A]
MRU LRU
↑ ↑
(newest) (oldest) ← Evict this one!
3. New Entry Always Has freq=1
```java
// CORRECT
Node node = new Node(key, value);
node.freq = 1; // always starts at 1
freqMap.computeIfAbsent(1, k -> new DLinkedList()).addFirst(node); // bucket may not exist yet
minFreq = 1;   // a new entry is always the new minimum

// WRONG
minFreq = Math.min(minFreq, 1); // breaks when minFreq is 0 (empty cache): it stays 0
```
4. Don’t Delete Empty Frequency Buckets (Optional)
You can leave empty buckets in freqMap - they don’t affect correctness. But if memory is tight:
```java
if (oldList.size == 0) {
    freqMap.remove(oldFreq); // optional cleanup
}
```
Interview Follow-up Questions
1. How to Handle Decaying Frequency?
Problem: Old items accumulate high frequency and never get evicted.
Solution: Time-based decay
- Periodically halve all frequencies
- Or use a sliding window for frequency counting
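One way to sketch the halving idea, shown on a plain counter rather than the full cache (illustrative only; a real LFU cache would also need to move nodes between frequency buckets after each decay pass):

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch: periodically halving raw access counts so stale
// "popular" items can age out. The DecayingCounter name is hypothetical.
class DecayingCounter {
    private final Map<Integer, Integer> counts = new HashMap<>();

    void recordAccess(int key) { counts.merge(key, 1, Integer::sum); }

    int count(int key) { return counts.getOrDefault(key, 0); }

    // Call on a timer (e.g. every N minutes): halve every count, dropping
    // entries that decay to zero.
    void decay() {
        counts.replaceAll((k, v) -> v / 2);
        counts.values().removeIf(v -> v == 0);
    }
}
```

After a few decay passes with no new accesses, even a once-hot key's count reaches zero and it becomes an eviction candidate again.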
2. Thread Safety
The simplest safe approach is one coarse lock guarding both operations (plain HashMaps are then sufficient); fine-grained locking per frequency bucket with ConcurrentHashMap is possible but much harder to get right:
```java
class ThreadSafeLFUCache {
    private final HashMap<Integer, Node> keyMap = new HashMap<>();
    private final HashMap<Integer, DLinkedList> freqMap = new HashMap<>();
    private final ReentrantLock lock = new ReentrantLock();

    public int get(int key) {
        lock.lock();
        try {
            // ... same LFU get logic as the single-threaded version,
            // returning the value or -1
            return -1; // placeholder
        } finally {
            lock.unlock();
        }
    }
}
```
3. What About O(1) with Simpler Structure?
Alternative: Use a HashMap + min-heap, but that gives O(log n) for put/get.
The O(1) solution requires the frequency bucket approach with doubly linked lists.
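For comparison, a sketch of that heap-based alternative: a HashMap plus a PriorityQueue ordered by (frequency, last-access tick), with stale heap entries skipped lazily at eviction time. The `HeapLFUCache` name and tick scheme are illustrative assumptions:

```java
import java.util.Comparator;
import java.util.HashMap;
import java.util.Map;
import java.util.PriorityQueue;

// O(log n) LFU via a lazy-deletion min-heap keyed by (freq, tick).
class HeapLFUCache {
    private record Entry(int key, int freq, long tick) {}
    private final Map<Integer, int[]> map = new HashMap<>();    // key -> {value, freq}
    private final Map<Integer, Long> lastTick = new HashMap<>();
    private final PriorityQueue<Entry> heap =
        new PriorityQueue<>(Comparator.comparingInt(Entry::freq)
                                      .thenComparingLong(Entry::tick));
    private final int capacity;
    private long clock = 0;

    HeapLFUCache(int capacity) { this.capacity = capacity; }

    public int get(int key) {
        int[] e = map.get(key);
        if (e == null) return -1;
        touch(key, e);
        return e[0];
    }

    public void put(int key, int value) {
        int[] e = map.get(key);
        if (e != null) { e[0] = value; touch(key, e); return; }
        if (map.size() == capacity) evict();
        int[] fresh = {value, 1};
        map.put(key, fresh);
        push(key, fresh);
    }

    private void touch(int key, int[] e) { e[1]++; push(key, e); } // O(log n)

    private void push(int key, int[] e) {
        clock++;                          // ticks double as LRU tie-breakers
        lastTick.put(key, clock);
        heap.add(new Entry(key, e[1], clock));
    }

    private void evict() {
        while (true) {
            Entry top = heap.poll();
            int[] e = map.get(top.key());
            // Skip stale entries whose frequency or tick is no longer current.
            if (e != null && e[1] == top.freq() && lastTick.get(top.key()) == top.tick()) {
                map.remove(top.key());
                lastTick.remove(top.key());
                return;
            }
        }
    }
}
```

Every `get`/`put` pays O(log n) for the heap push, which is why the bucket approach is preferred when strict O(1) is required.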
Related Caching Strategies
| Strategy | Eviction Policy | Use Case |
|---|---|---|
| LRU | Least Recently Used | General purpose, good temporal locality |
| LFU | Least Frequently Used | Stable working set, frequency matters |
| LRU-K | Not used in last K accesses | Balance between LRU and LFU |
| ARC | Adaptive (LRU + LFU) | Self-tuning, best of both worlds |
| FIFO | First In, First Out | Simple, predictable eviction order |
Related Problems
| Problem | Description | Link |
|---|---|---|
| LRU Cache | Evict least recently used | LC 146 |
| All O’one Data Structure | Track min/max frequency keys | LC 432 |
| Design a Leaderboard | Frequency-based ranking | LC 1244 🔒 |
| Maximum Frequency Stack | Stack with frequency priority | LC 895 |
Summary
Key Takeaways
- Two HashMaps: `keyMap` for O(1) lookup, `freqMap` for O(1) frequency bucket access
- Doubly Linked List per Frequency: For O(1) LRU ordering within each frequency
- minFreq Tracking: Essential for O(1) eviction
- Tie-Breaking: Same frequency → evict LRU among them
- New entries: Always start with freq=1, always update minFreq=1
Pattern to Memorize
LFU Cache = HashMap<Key, Node> + HashMap<Freq, DLinkedList> + minFreq
get(key):
1. keyMap lookup → O(1)
2. Remove from old freq bucket, add to new freq bucket → O(1)
3. Update minFreq if old bucket now empty
put(key, value):
1. If exists: update value + update frequency
2. If new:
- If full: evict from freqMap[minFreq].tail (LRU)
- Insert to freqMap[1], set minFreq = 1
LFU vs LRU Decision Guide
| Question | LRU | LFU |
|---|---|---|
| Do recent accesses predict future accesses? | ✅ | ❌ |
| Do popular items need protection from scans? | ❌ | ✅ |
| Is implementation simplicity important? | ✅ | ❌ |
| Does access frequency matter? | ❌ | ✅ |
For most general-purpose caches, LRU is simpler and often good enough. Use LFU when you need scan resistance or frequency-based eviction is critical for your use case.