Implement LRU Cache
The Least Recently Used (LRU) caching scheme discards the least recently used items first when the cache is full and a newly visited item needs to be added to the cache. Design an LRU caching scheme that implements the following interface.
public interface LruPageCache {
    /** Sets the maximum capacity of the cache. */
    public void setMaxCapacity(int capacity);

    /**
     * Returns the page number and updates the cache accordingly.
     * The time complexity of this method should be O(1).
     */
    public int loadPage(int pageNum);
}
Solution
We need a data structure that can check whether a page number is in the cache in constant time; a HashMap keyed by page number does that. We also need a data structure that maintains the page numbers in the cache in order of access time. One way is to keep a timestamp field for each record, but we would still need to sort the records, which cannot be done in O(1) time. Alternatively, we can use a linked list to hold all records and move the most recently visited one to the head of the list. To update such a list in O(1) time, we need a doubly linked list.
Each time a page number comes in:
- If it is already in the cache, move its node to the head of the linked list.
- If it is not in the cache, insert a new node at the head of the linked list and update the current size of the cache. If the cache is already full, remove the last node of the linked list instead. (So we also need a tail pointer.)
public static class LruCacheImpl implements LruPageCache {
    private int capacity = 0;
    private int maxCapacity = 10;
    private DListNode head = null;
    private DListNode tail = null;
    private HashMap<Integer, DListNode> map = new HashMap<Integer, DListNode>();

    /** {@inheritDoc} */
    @Override
    public void setMaxCapacity(final int limit) {
        if (limit < 1) {
            throw new IllegalArgumentException("Max capacity must be positive.");
        }
        maxCapacity = limit;
    }
    /** {@inheritDoc} */
    @Override
    public int loadPage(final int page) {
        final DListNode cur;
        if (map.containsKey(page)) { // cache hit
            cur = map.get(page);
            if (cur != head) {
                remove(cur);
                insertToHead(cur);
            }
            print();
            return cur.val;
        }
        // cache miss
        cur = new DListNode(page);
        insertToHead(cur);
        map.put(page, cur);
        if (capacity == maxCapacity) {
            removeTail();
        } else {
            ++capacity;
        }
        print();
        return cur.val;
    }
    /** Remove the given node from the linked list. Never called on the head. */
    private void remove(final DListNode cur) {
        if (cur.pre != null) cur.pre.next = cur.next;
        if (cur.next != null) cur.next.pre = cur.pre;
        if (tail == cur) tail = cur.pre;
    }
    /** Remove the tail of the linked list and return the deleted node. */
    private DListNode removeTail() {
        final DListNode last = tail;
        map.remove(last.val);
        tail = last.pre;
        if (tail != null) {
            tail.next = null;
        }
        if (head == last) {
            head = null; // the list is now empty
        }
        return last;
    }
    /** Add the given node to the head of the linked list. */
    private void insertToHead(final DListNode cur) {
        cur.next = head;
        cur.pre = null;
        if (head != null) head.pre = cur;
        head = cur;
        if (tail == null) tail = cur;
    }
    /** Print the list from head to tail for debugging. */
    private void print() {
        DListNode cur = head;
        System.out.print("head->");
        while (cur != null) {
            System.out.print(cur.val);
            if (cur == tail) System.out.print(" (tail)");
            else System.out.print("->");
            cur = cur.next;
        }
        System.out.println();
    }
    /** Doubly linked list node. */
    private static class DListNode {
        DListNode pre = null;
        DListNode next = null;
        int val;

        DListNode(final int v) {
            val = v;
        }
    }
}
This scheme provides O(1) time for loadPage and uses O(n) space, where n is the maxCapacity of the cache.

Test:
public static void main(String[] args) {
    LruCacheImpl cache = new LruCacheImpl();
    cache.setMaxCapacity(4);
    System.out.println(cache.loadPage(2));
    System.out.println(cache.loadPage(3));
    System.out.println(cache.loadPage(1));
    System.out.println(cache.loadPage(2));
    System.out.println(cache.loadPage(4));
    System.out.println(cache.loadPage(1));
    System.out.println(cache.loadPage(4));
    System.out.println(cache.loadPage(5));
    System.out.println(cache.loadPage(6));
}
-----
output:
head->2 (tail)
2
head->3->2 (tail)
3
head->1->3->2 (tail)
1
head->2->1->3 (tail)
2
head->4->2->1->3 (tail)
4
head->1->4->2->3 (tail)
1
head->4->1->2->3 (tail)
4
head->5->4->1->2 (tail)
5
head->6->5->4->1 (tail)
6
Note: Java provides a LinkedHashMap class, which implements a hash map on top of a doubly linked list. It is essentially what we have done here, except that LinkedHashMap has no capacity limit by default. So, in the real world, if you need an unbounded LRU cache, don't reinvent the wheel!
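In fact, a capacity-bounded LRU cache can also be built directly on LinkedHashMap: constructing it with accessOrder=true keeps entries in access order, and overriding its removeEldestEntry hook evicts the least recently used entry automatically. A minimal sketch under those assumptions (class names are illustrative, not from the article):

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class LinkedHashLru {

    /** An LRU-evicting map with a fixed maximum capacity (illustrative). */
    static class LruMap<K, V> extends LinkedHashMap<K, V> {
        private final int maxCapacity;

        LruMap(final int maxCapacity) {
            // accessOrder=true: iteration order follows access recency,
            // least recently used entry first.
            super(16, 0.75f, true);
            this.maxCapacity = maxCapacity;
        }

        @Override
        protected boolean removeEldestEntry(final Map.Entry<K, V> eldest) {
            // Evict the least recently used entry once we exceed capacity.
            return size() > maxCapacity;
        }
    }

    public static void main(String[] args) {
        final LruMap<Integer, Integer> cache = new LruMap<>(4);
        // Same access sequence as the test above; put() counts as an access.
        for (int page : new int[] {2, 3, 1, 2, 4, 1, 4, 5, 6}) {
            cache.put(page, page);
        }
        System.out.println(cache.keySet()); // prints [1, 4, 5, 6]
    }
}
```

The least recently used page appears first in the iteration order, so the result [1, 4, 5, 6] matches the tail-to-head order of the hand-rolled list's final state, head->6->5->4->1 (tail).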
Shouldn't it be setCapacity instead of "setMaxCapacity" on the 8th line? It's just a small error. The approach is very simple and good :)
ReplyDelete

LinkedHashMap provides the removeEldestEntry(Map.Entry) method, which may be overridden to impose a policy for removing stale mappings automatically when new mappings are added to the map. This can be used to impose the capacity limit. Refer to http://docs.oracle.com/javase/7/docs/api/java/util/LinkedHashMap.html#removeEldestEntry%28java.util.Map.Entry%29
ReplyDelete

private DListNode removeTail() {
    map.remove(tail.val);
    DListNode last = tail;
    tail = tail.pre;
    tail.next = null;
    if (head == last) head = null;
    return last;
}
What if tail becomes null? Then tail.next will throw a NullPointerException...
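The concern above applies to a removeTail that can run on a single-node list: tail.pre would be null, so assigning tail.next would throw a NullPointerException. (In the article's loadPage, the new node is inserted before eviction, so the list holds at least two nodes when removeTail runs, but a guard costs nothing.) A standalone sketch of the guarded version, with illustrative class names:

```java
// Minimal doubly linked list demonstrating a null-safe tail removal.
public class GuardedTailDemo {
    static class Node {
        Node pre, next;
        int val;
        Node(final int v) { val = v; }
    }

    Node head, tail;

    void insertToHead(final Node cur) {
        cur.next = head;
        cur.pre = null;
        if (head != null) head.pre = cur;
        head = cur;
        if (tail == null) tail = cur;
    }

    Node removeTail() {
        final Node last = tail;
        tail = last.pre;
        if (tail != null) {
            tail.next = null; // normal case: list still non-empty
        } else {
            head = null;      // removed the only node; list is now empty
        }
        return last;
    }

    public static void main(String[] args) {
        GuardedTailDemo list = new GuardedTailDemo();
        list.insertToHead(new Node(7));
        Node removed = list.removeTail(); // single-node list: no NPE
        System.out.println(removed.val + " " + (list.head == null)); // prints "7 true"
    }
}
```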