Least Recently Used (LRU) Cache, by LeetCode
I've solved an LRU from DailyCodingProblem before, but finally stumbled on one by LeetCode (#418 in my solved list). Here it is: https://leetcode.com/problems/lru-cache/
Design and implement a data structure for Least Recently Used (LRU) cache. It should support the following operations: get and put.

get(key) - Get the value (will always be positive) of the key if the key exists in the cache, otherwise return -1.
put(key, value) - Set or insert the value if the key is not already present. When the cache reaches its capacity, it should invalidate the least recently used item before inserting a new item.
The cache is initialized with a positive capacity.
Follow up:
Could you do both operations in O(1) time complexity?
Example:
LRUCache cache = new LRUCache( 2 /* capacity */ );

cache.put(1, 1);
cache.put(2, 2);
cache.get(1);       // returns 1
cache.put(3, 3);    // evicts key 2
cache.get(2);       // returns -1 (not found)
cache.put(4, 4);    // evicts key 1
cache.get(1);       // returns -1 (not found)
cache.get(3);       // returns 3
cache.get(4);       // returns 4
Same approach as I used before:
- Two data structures: a LIST of keys to keep track of the temporal aspect ("least recently used"), and a HASHTABLE for quick access ("cache")
- On GET: check for presence in the HASHTABLE. If absent, return -1. If present, move the key to the front of the LIST and return the value from the HASHTABLE
- On PUT: if the key is already in the HASHTABLE, update its value and move the key to the front of the LIST. If not, check the capacity: if at the limit, remove the last key from the LIST and from the HASHTABLE. Then add the new key to the front of the LIST and add the pair to the HASHTABLE
GET is O(1), PUT is O(1). Code is below, cheers, ACC.
public class LRUCache
{
    Hashtable htIndex = null;       // key -> value
    LinkedList<int> cache = null;   // keys, most recently used at the front
    int capacity = 0;

    public LRUCache(int capacity)
    {
        this.capacity = capacity;
        htIndex = new Hashtable();
        cache = new LinkedList<int>();
    }

    public int Get(int key)
    {
        if (!htIndex.ContainsKey(key)) return -1;

        int retVal = (int)htIndex[key];

        // Move the key to the front to mark it as most recently used
        cache.Remove(key);
        cache.AddFirst(key);

        return retVal;
    }

    public void Put(int key, int value)
    {
        if (htIndex.ContainsKey(key))
        {
            cache.Remove(key);
            cache.AddFirst(key);
            htIndex[key] = value;
        }
        else
        {
            if (cache.Count == capacity)
            {
                // Evict the least recently used key (the last one in the list)
                int last = cache.Last.Value;
                htIndex.Remove(last);
                cache.RemoveLast();
            }
            cache.AddFirst(key);
            htIndex.Add(key, value);
        }
        //PrintCache();
    }

    private void PrintCache()
    {
        foreach (int k in cache)
        {
            Console.Write("({0},{1}) => ", k, (int)htIndex[k]);
        }
        Console.WriteLine();
    }
}
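To sanity-check the class you can replay the LeetCode example from above; the small driver below is only an illustrative sketch (the Program/Main wrapper and the using directives are mine, not part of the original solution):

using System;
using System.Collections;
using System.Collections.Generic;

public class Program
{
    public static void Main()
    {
        LRUCache cache = new LRUCache(2 /* capacity */);

        cache.Put(1, 1);
        cache.Put(2, 2);
        Console.WriteLine(cache.Get(1));    // 1
        cache.Put(3, 3);                    // evicts key 2
        Console.WriteLine(cache.Get(2));    // -1 (not found)
        cache.Put(4, 4);                    // evicts key 1
        Console.WriteLine(cache.Get(1));    // -1 (not found)
        Console.WriteLine(cache.Get(3));    // 3
        Console.WriteLine(cache.Get(4));    // 4
    }
}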
Love this problem! Unfortunately your implementation has linear complexity for both put and get because of the cache.Remove(key) statements - a linear scan is required to find the key. In C++ I store pointers to list nodes, so that removal can be done in constant time:
#include <list>
#include <unordered_map>
#include <utility>
using namespace std;

class LRUCache {
private:
    using KV = pair<int, int>;
    list<KV> values;                               // (key, value) pairs, most recently used at the front
    unordered_map<int, list<KV>::iterator> table;  // key -> iterator (node pointer) into values
    int capacity;
public:
    LRUCache(int capacity): capacity(capacity) {}

    int get(int key) {
        auto found = table.find(key);
        if (found == table.end()) return -1;
        int value = found->second->second;
        if (found->second != values.begin()) {
            // Move the entry to the front in O(1) and refresh the stored iterator
            values.erase(found->second);
            values.emplace_front(key, value);
            found->second = values.begin();
        }
        return found->second->second;
    }

    void put(int key, int value) {
        auto found = table.find(key);
        if (found != table.end()) {
            if (found->second->second != value) found->second->second = value;
            if (found->second != values.begin()) {
                values.erase(found->second);
                values.emplace_front(key, value);
                found->second = values.begin();
            }
            return;
        }
        if (table.size() == capacity) {
            // Evict the least recently used entry (the back of the list)
            int key_to_evict = values.back().first;
            table.erase(key_to_evict);
            values.pop_back();
        }
        values.emplace_front(key, value);
        table[key] = values.begin();
    }
};
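The same trick carries over to C#: instead of a LinkedList<int> of keys, keep a LinkedList of (key, value) pairs and store the LinkedListNode references in a Dictionary, so Remove receives the node directly rather than scanning for the key. The sketch below is my translation of that idea (the class name LRUCacheFast is made up; this is not code from the post or the comment):

using System.Collections.Generic;

public class LRUCacheFast
{
    // List of (key, value) pairs; the most recently used entry sits at the front
    private readonly LinkedList<KeyValuePair<int, int>> cache = new LinkedList<KeyValuePair<int, int>>();

    // key -> node in the list, so the node can be unlinked in O(1) instead of via a linear scan
    private readonly Dictionary<int, LinkedListNode<KeyValuePair<int, int>>> htIndex =
        new Dictionary<int, LinkedListNode<KeyValuePair<int, int>>>();

    private readonly int capacity;

    public LRUCacheFast(int capacity)
    {
        this.capacity = capacity;
    }

    public int Get(int key)
    {
        LinkedListNode<KeyValuePair<int, int>> node;
        if (!htIndex.TryGetValue(key, out node)) return -1;

        // O(1): unlink the node and reattach it at the front
        cache.Remove(node);
        cache.AddFirst(node);
        return node.Value.Value;
    }

    public void Put(int key, int value)
    {
        LinkedListNode<KeyValuePair<int, int>> node;
        if (htIndex.TryGetValue(key, out node))
        {
            cache.Remove(node);
            cache.AddFirst(node);
            node.Value = new KeyValuePair<int, int>(key, value);
            return;
        }

        if (cache.Count == capacity)
        {
            // Evict the least recently used entry (the last node in the list)
            htIndex.Remove(cache.Last.Value.Key);
            cache.RemoveLast();
        }

        htIndex[key] = cache.AddFirst(new KeyValuePair<int, int>(key, value));
    }
}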