In this post, I break down how to implement a Least Recently Used cache using a HashMap and a doubly linked list.
## Problem Breakdown

In this LeetCode problem, we are asked to implement a data structure that follows the constraints of a Least Recently Used (LRU) cache.
- We are required to implement the following methods with these requirements:
  - `LRUCache(int capacity)`: Initialize the LRU cache with positive size `capacity`.
  - `int get(int key)`: Return the value of the `key` if the key exists, otherwise return `-1`.
  - `void put(int key, int value)`: Update the value of the `key` if the `key` exists. Otherwise, add the `key-value` pair to the cache. If the number of keys exceeds the `capacity` from this operation, evict the least recently used key.
- It is not made very clear in the problem description, but the premise is that whenever we call `get()` or `put()` on a `key`, this counts as using that key, and so it will be moved to the most recently used end of our cache (just before `tail`).
- We are also required to make the `get()` and `put()` functions run in O(1) average time.
- This is a fundamental design problem that simulates a pattern that is actually used very often in real-world systems.
- A classic real-world example of an LRU cache is found in Redis, a popular in-memory key-value store used for caching and high-speed data access.
- Redis allows developers to set a maximum memory limit for the cache, and once this limit is reached, it automatically evicts keys using an LRU (Least Recently Used) policy.
- This means that:
- Frequently accessed keys are kept in memory.
- Least recently used keys are discarded first to make room for new entries.
- The system stays memory-efficient while still delivering fast access for the frequently used data.
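As a concrete illustration, Redis's eviction behaviour is driven by two directives in `redis.conf` (they can also be changed at runtime with `CONFIG SET`). The memory limit shown here is an illustrative value, not a recommendation:

```
# Cap Redis memory usage; once this limit is reached, eviction kicks in
maxmemory 100mb

# Evict the least recently used keys first (Redis uses an approximated LRU)
maxmemory-policy allkeys-lru
```

Note that Redis implements an *approximated* LRU by sampling keys, rather than the exact LRU we build in this post, as a trade-off for lower memory overhead.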
- Our LRU cache solution mimics this behaviour on a smaller scale: using a `HashMap` for quick lookups and a doubly linked list to track usage order.
## Provided Example

**Input**

```
["LRUCache", "put", "put", "get", "put", "get", "put", "get", "get", "get"]
[[2], [1, 1], [2, 2], [1], [3, 3], [2], [4, 4], [1], [3], [4]]
```

**Output**

```
[null, null, null, 1, null, -1, null, -1, 3, 4]
```

**Explanation**

```java
LRUCache lRUCache = new LRUCache(2);
lRUCache.put(1, 1); // cache is {1=1}
lRUCache.put(2, 2); // cache is {1=1, 2=2}
lRUCache.get(1);    // return 1
lRUCache.put(3, 3); // LRU key was 2, evicts key 2, cache is {1=1, 3=3}
lRUCache.get(2);    // returns -1 (not found)
lRUCache.put(4, 4); // LRU key was 1, evicts key 1, cache is {4=4, 3=3}
lRUCache.get(1);    // return -1 (not found)
lRUCache.get(3);    // return 3
lRUCache.get(4);    // return 4
```
- Hopefully now you have an idea of what we are trying to solve in this problem. Let's now get into the solution.
## Solution Breakdown

### Overview
- I will try to explain the solution step by step with small code snippets and visuals along the way. The full code can be found at the bottom of this post.
- To design an efficient LRU cache that optimizes memory as well as performance, we will do the following:
  - Use a `HashMap` for storing unique keys referencing their respective Nodes.
  - A static `Node` class will be used to encapsulate an entry in our cache.
  - A doubly linked list will be used for efficient insertions and removals of nodes.
    - The head of the list will always point to the least recently used entry in our cache.
    - The tail of the list will have a previous pointer that points to the most recently used entry in our cache. Even though this problem does not require retrieval of the most recently used element, the tail node avoids null-pointer edge cases by keeping our list connected from both sides.
  - Maintain an efficient O(1) time complexity for all operations.
- In a large-scale production system, we might not implement the LRU cache using raw linked list nodes. Instead, we'd often store actual domain-specific objects in the cache and maintain a separate variable which stores the least recently used key so we can easily access it from the map.
- For example, we could cache objects like `UserSession`, `ProductInfo`, or `QueryResult`, and manage them via a `Map`.
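As a sketch of that production-style approach, Java's standard library already offers an LRU building block: `LinkedHashMap` in access-order mode with `removeEldestEntry` overridden. The `SessionCache` name and the generic types here are illustrative, not part of the problem:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// A minimal LRU cache built on LinkedHashMap's access-order mode.
// The class name and capacity are illustrative assumptions for this sketch.
class SessionCache<K, V> extends LinkedHashMap<K, V> {
    private final int capacity;

    SessionCache(int capacity) {
        // accessOrder = true: iteration order runs from least- to most-recently accessed
        super(16, 0.75f, true);
        this.capacity = capacity;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        // called after each put; returning true evicts the least recently used entry
        return size() > capacity;
    }
}
```

Because both `get()` and `put()` count as accesses in access-order mode, the eldest entry is always the least recently used one, mirroring the behaviour we build by hand in this post.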
### Data Structure Design
- This is the given class we must implement:

```java
class LRUCache {

    public LRUCache(int capacity) {

    }

    public int get(int key) {

    }

    public void put(int key, int value) {

    }
}
```
- We begin by defining our static `Node` class which will represent the entries in our cache:

```java
static class Node {
    int key, val;
    Node prev, next;

    Node(int key, int val) {
        this.key = key;
        this.val = val;
    }
}
```
- Each Node stores a `key` and a `val`, as well as references to the previous and next nodes in the linked list (`prev` and `next`).
- The `HashMap` we use will map each key in our cache to a Node.
- The `LRUCache` class maintains the following:

```java
private int capacity;
private Map<Integer, Node> cache;
private Node head, tail;

public LRUCache(int capacity) {
    this.capacity = capacity;
    this.cache = new HashMap<>();
    this.head = new Node(0, 0); // dummy node
    this.tail = new Node(0, 0); // dummy node
    // connect the head and tail
    this.head.next = this.tail;
    this.tail.prev = this.head;
}
```
- `capacity` is the maximum number of entries our cache can hold.
- `cache` is the map used to store the key-value pairs in our cache. The keys are of type `Integer` and the values are of type `Node`.
- `head` represents the left-most node in our doubly linked list. It will always point to the least recently used element in our cache.
- `tail` represents the right-most node in our doubly linked list. Its previous pointer will always point to the most recently used element in our cache.
- We make sure to connect `head` and `tail` so that all elements to be added will go in between those two nodes. We do this by assigning `head.next` to point to `tail` and `tail.prev` to `head`.
- This is what our cache looks like initially:
### Insertion and Removal Logic
- Before we get into the implementations of the two main functions, we first need to discuss the two helper methods we will use throughout our code: `insert()` and `remove()`.
- Let's start with `insert()`:

```java
// inserts a node at the tail of the list
private void insert(Node node) {
    node.next = tail;
    node.prev = tail.prev;
    tail.prev.next = node;
    tail.prev = node;
}
```
- This function will be used to insert a node at the right end of the linked list, just before the tail.
- We insert the node at the end of the list, just before the `tail` dummy node, by performing the following steps in order:
  1. Set the `next` pointer of the new node to point to `tail`.
  2. Set its `prev` pointer to point to the current last node (i.e., the node currently referenced by `tail.prev`).
  3. At this point, the new node's internal links (`prev` and `next`) are correctly set.
  4. Now, update the `next` pointer of the previous last node (`tail.prev.next`) to point to the new node.
  5. Finally, update `tail.prev` to point to the new node, completing the insertion.
- Below is an illustration of this operation:
- We can see how the new node is inserted between `head` and `tail`. If we keep calling the `insert()` method, new nodes will keep being inserted at the right end of the list.
- Now let's move on to `remove()`:

```java
// removes a node from anywhere in the list
private void remove(Node node) {
    node.prev.next = node.next;
    node.next.prev = node.prev;
}
```
- This method removes a given node from any position in the doubly linked list.
- It first updates the `next` pointer of the node's previous node (`node.prev.next`) to point to `node.next`.
- Then, it updates the `prev` pointer of the node's next node (`node.next.prev`) to point to `node.prev`.
- These two pointer adjustments effectively unlink the node from the list, removing the list's references to it.
- After this, the node becomes unreachable (assuming no external references, such as the map entry) and can be garbage-collected.
- Note: The order of these two operations doesn't matter, since they independently update the neighboring nodes.
- Here is a visual that illustrates this removal:
### Method Implementation
- Now that we understand the helper functions, let's look at the `get()` method:

```java
public int get(int key) {
    if (!cache.containsKey(key)) return -1;

    Node node = cache.get(key);
    // remove node from its current place in the list
    remove(node);
    // move node to the tail of the list (most recently used)
    insert(node);

    return node.val;
}
```
- This function checks whether a key exists in our cache or not.
- If the key does not exist, it returns -1.
- If the key exists, then we do the following:
  - We remove the node referenced by `key` from its current position in the list.
  - We then insert the node at the tail end of the list so that it is now the most recently used node.
  - We then return the `val` property of the node.
- Now onto the `put()` method:

```java
public void put(int key, int value) {
    if (cache.containsKey(key)) { // key already exists
        Node node = cache.get(key);
        // remove node from its current position in the list
        remove(node);
    }

    Node newNode = new Node(key, value);
    cache.put(key, newNode);
    insert(newNode);

    // evict the lru node if cache exceeds capacity
    if (cache.size() > capacity) {
        Node lruNode = head.next;
        int lruKey = lruNode.key;
        remove(lruNode);      // remove lru node from the list
        cache.remove(lruKey); // remove reference to lru node from map
    }
}
```
- The `put()` method inserts a new key-value pair into the cache, or updates the existing value if the key already exists.
- It first checks whether the key is already present in the cache. If it is, we retrieve the corresponding node and remove it from its current position in the linked list.
- Regardless of whether the key existed or not, we then create a new node with the provided `key` and `value`.
- We update the cache map so that this `key` now points to the new node.
- The new node is inserted at the tail end of the linked list, representing the most recently used key.
- Since this operation could increase the size of the cache, we then check whether we've exceeded the allowed `capacity`.
- If we have, we identify the least recently used node, which is always located at `head.next`, and remove it from both the linked list and the cache map.
- The
### Example Walkthrough

- Let's now run through an example showcasing how our LRU cache will work:

```java
LRUCache lRUCache = new LRUCache(2);
lRUCache.put(1, 1);
lRUCache.put(2, 2);
lRUCache.get(1);    // returns 1 and 1=1 becomes the mru key
lRUCache.put(3, 3); // 2=2 is the lru key and is removed due to capacity constraint
lRUCache.get(2);    // returns -1
lRUCache.put(4, 4); // 1=1 is the lru key and is removed due to capacity constraint
lRUCache.get(1);    // returns -1
lRUCache.get(3);    // returns 3 and 3=3 becomes the mru key
lRUCache.get(4);    // returns 4 and 4=4 becomes the mru key
```
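To double-check the walkthrough above, here is a self-contained sanity harness: a condensed copy of the cache from this post wrapped in a `main` with assertions (run with `java -ea` so assertions are enabled; the `LRUCacheDemo` wrapper class is just scaffolding for this sketch):

```java
import java.util.HashMap;
import java.util.Map;

public class LRUCacheDemo {
    // condensed version of the LRUCache built in this post
    static class LRUCache {
        static class Node {
            int key, val;
            Node prev, next;
            Node(int k, int v) { key = k; val = v; }
        }

        private final int capacity;
        private final Map<Integer, Node> cache = new HashMap<>();
        private final Node head = new Node(0, 0), tail = new Node(0, 0);

        LRUCache(int capacity) {
            this.capacity = capacity;
            head.next = tail;
            tail.prev = head;
        }

        int get(int key) {
            if (!cache.containsKey(key)) return -1;
            Node node = cache.get(key);
            remove(node);
            insert(node); // mark as most recently used
            return node.val;
        }

        void put(int key, int value) {
            if (cache.containsKey(key)) remove(cache.get(key));
            Node node = new Node(key, value);
            cache.put(key, node);
            insert(node);
            if (cache.size() > capacity) { // evict least recently used
                Node lru = head.next;
                remove(lru);
                cache.remove(lru.key);
            }
        }

        private void insert(Node n) { n.next = tail; n.prev = tail.prev; tail.prev.next = n; tail.prev = n; }
        private void remove(Node n) { n.prev.next = n.next; n.next.prev = n.prev; }
    }

    public static void main(String[] args) {
        LRUCache c = new LRUCache(2);
        c.put(1, 1);
        c.put(2, 2);
        assert c.get(1) == 1;
        c.put(3, 3); // evicts key 2
        assert c.get(2) == -1;
        c.put(4, 4); // evicts key 1
        assert c.get(1) == -1;
        assert c.get(3) == 3;
        assert c.get(4) == 4;
        System.out.println("All walkthrough checks passed");
    }
}
```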
### Full Code

```java
import java.util.HashMap;
import java.util.Map;

class LRUCache {

    static class Node {
        int key, val;
        Node prev, next;

        Node(int key, int val) {
            this.key = key;
            this.val = val;
        }
    }

    private int capacity;
    private Map<Integer, Node> cache;
    private Node head, tail;

    public LRUCache(int capacity) {
        this.capacity = capacity;
        this.cache = new HashMap<>();
        this.head = new Node(0, 0); // dummy node
        this.tail = new Node(0, 0); // dummy node
        // connect the head and tail
        this.head.next = this.tail;
        this.tail.prev = this.head;
    }

    public int get(int key) {
        if (!cache.containsKey(key)) return -1;

        Node node = cache.get(key);
        // remove node from its current place in the list
        remove(node);
        // move node to the tail of the list (most recently used)
        insert(node);

        return node.val;
    }

    public void put(int key, int value) {
        if (cache.containsKey(key)) { // key already exists
            Node node = cache.get(key);
            // remove node from its current position in the list
            remove(node);
        }

        Node newNode = new Node(key, value);
        cache.put(key, newNode);
        insert(newNode); // inserted at tail end (most recently used)

        // evict the lru node if cache exceeds capacity
        if (cache.size() > capacity) {
            Node lruNode = head.next;
            int lruKey = lruNode.key;
            remove(lruNode);
            cache.remove(lruKey);
        }
    }

    // inserts a node at the tail of the list
    private void insert(Node node) {
        node.next = tail;
        node.prev = tail.prev;
        tail.prev.next = node;
        tail.prev = node;
    }

    // removes a node from anywhere in the list
    private void remove(Node node) {
        node.prev.next = node.next;
        node.next.prev = node.prev;
    }
}

/**
 * Your LRUCache object will be instantiated and called as such:
 * LRUCache obj = new LRUCache(capacity);
 * int param_1 = obj.get(key);
 * obj.put(key,value);
 */
```
## Complexity Analysis

### Time Complexity

| Operation | Time Complexity | Explanation |
|---|---|---|
| `get(key)` | O(1) | HashMap lookup + list node removal and re-insertion take constant time |
| `put(key, val)` | O(1) | HashMap update + doubly linked list insert/remove take constant time |
### Space Complexity

| Component | Space Complexity | Explanation |
|---|---|---|
| HashMap (`cache`) | O(capacity) | Stores up to `capacity` key → node mappings |
| Doubly Linked List | O(capacity) | Stores up to `capacity` nodes with key-value pairs |
| Total | O(capacity) | Linear space usage in terms of the max number of keys the cache holds |
If this breakdown helped or you approached it differently, I’d love to hear your thoughts.
Drop a comment, leave a ❤️, or share how you'd extend or optimize the design.
Follow for more deep dives into LeetCode systems problems and backend design patterns.