Memoization

You know when you jot down some phone numbers in the little memo pad near your phone? Well, I'm not sure how many people still do that, but whoever does is doing what computer scientists call Memoization: writing data down in a memo for quick reference in the future.

Memoization is a technique used primarily in a field of Algorithms called "Dynamic Programming", commonly known as DP.

DP is hard to grasp. At least for me. It is a different paradigm from Recursion, and, much like recursion, you need to solve hundreds of problems before getting any good at DP.

I'm not good at DP at all.

But I do want to illustrate a simple use of memoization, using a generalization of the second most famous DP problem (the most famous being Factorial): Fibonacci.

As you may recall, Fibonacci is a simple recursive formula that models the growth of a population of rabbits. It is defined as follows:

Fib(n) = n (for n<=1)
Fib(n) = Fib(n-2) + Fib(n-1) (otherwise)

A generalization of Fibonacci adds a second parameter to the function, indicating the number of immediate predecessor terms to add when calculating Fib(n). We're going to call that second parameter T (for "Terms to be added"). The generalized Fibonacci function then becomes:

FibGeneralized(n,T) = n (for n<T)
FibGeneralized(n,T) = FibGeneralized(n-T,T) + FibGeneralized(n-T+1,T) + ... + FibGeneralized(n-1,T) (otherwise)

Hence we can see that Fib(n) = FibGeneralized(n,2), which means that Fibonacci requires us to sum the two immediate predecessor terms.
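That equivalence is easy to check by hand for small n, or with a quick sanity check like the one below (the method names Fib and FibGeneralized are mine, written straight from the two definitions above):

```csharp
using System;
using System.Numerics;

class FibCheck
{
    // Classic Fibonacci, straight from the definition: Fib(n) = n for n <= 1.
    public static BigInteger Fib(int n) => n <= 1 ? n : Fib(n - 2) + Fib(n - 1);

    // Generalized version: sum the last t predecessor terms (t = 2 gives classic Fibonacci).
    public static BigInteger FibGeneralized(int n, int t)
    {
        if (n < t) return n;
        BigInteger sum = 0;
        for (int i = 1; i <= t; i++) sum += FibGeneralized(n - i, t);
        return sum;
    }

    static void Main()
    {
        // FibGeneralized(n, 2) should agree with Fib(n) for every n.
        for (int n = 0; n <= 15; n++)
            Console.WriteLine($"n={n}: Fib={Fib(n)}, FibGeneralized(n,2)={FibGeneralized(n, 2)}");
    }
}
```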

Now let's suppose that our goal is to find the following (big) number:

FibGeneralized(100, 10) = ?

That is, we want the 100th term of the generalized Fibonacci function where we always add the last ten terms.

If we don't use memoization, we can quickly write code straight from the definition above:

        // Naive recursion straight from the definition: sum the previous
        // n terms, with no caching of intermediate results.
        static BigInteger SumLastN(int index, int n)
        {
            if (index < n) return index; // base case: FibGeneralized(n,T) = n for n < T
            BigInteger sum = 0;
            for (int i = n; i >= 1; i--)
            {
                sum += SumLastN(index - i, n);
            }
            return sum;
        }

And then all we have to do is call SumLastN(100, 10). However, this is an incredibly slow way of doing it, since there are multiple repeated computations: SumLastN(90, 10), for instance, will eventually calculate SumLastN(10, 10). But SumLastN(89, 10) will also eventually pass through SumLastN(10, 10), and so on and so forth. We end up wasting a ton of time re-computing the same values. I actually took a short video of this code running, check it out:


With some patience, it eventually prints out SumLastN(38, 10). But it won't get anywhere close to our goal of SumLastN(100, 10).
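You can see the blowup directly by instrumenting the naive recursion with a call counter (the counter and the CallCounter class name are my additions; the recursion itself is unchanged). The number of calls roughly doubles with each increment of the index:

```csharp
using System;
using System.Numerics;

class CallCounter
{
    public static long Calls; // total number of recursive invocations

    // Same naive recursion as above, instrumented with a call counter.
    public static BigInteger SumLastN(int index, int n)
    {
        Calls++;
        if (index < n) return index;
        BigInteger sum = 0;
        for (int i = n; i >= 1; i--) sum += SumLastN(index - i, n);
        return sum;
    }

    static void Main()
    {
        // Watch the call count grow roughly 2x per extra index.
        foreach (int idx in new[] { 20, 25, 30 })
        {
            Calls = 0;
            SumLastN(idx, 10);
            Console.WriteLine($"SumLastN({idx}, 10): {Calls} calls");
        }
    }
}
```

At that rate of growth, SumLastN(100, 10) would require on the order of 2^90 calls, which is why the naive version stalls in the thirties.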

Memoization comes in handy here. The insight is the following: any time you want to compute SumLastN(p, T), do this:

1) First, check whether you already have that value in your memo - in our case the memo will be a simple hash table
2) If it is not there, sure, calculate it as before, but right after calculating it, add it to your memo (hash table)

That should do it. Take a look at the modified code below:

        static BigInteger SumLastNMemoization(int index, int n, Hashtable htMemo)
        {
            if (index < n) return index;
            if (htMemo.ContainsKey(index)) return (BigInteger)htMemo[index]; // memo hit

            BigInteger sum = 0;
            for (int i = n; i >= 1; i--)
            {
                // -1 is a sentinel meaning "not computed yet, go compute it"
                // (safe here because every real value is non-negative)
                sum += GetAndSetMemo(index - i, n, htMemo, -1);
            }

            // record the freshly computed value in the memo and return it
            return GetAndSetMemo(index, n, htMemo, sum);
        }

        static BigInteger GetAndSetMemo(int index, int n, Hashtable htMemo, BigInteger value)
        {
            if (!htMemo.ContainsKey(index))
            {
                if (value != -1)
                {
                    // the caller already computed the value: just record it
                    htMemo.Add(index, value);
                }
                else
                {
                    // not in the memo yet: compute it recursively, then record it
                    BigInteger result = SumLastNMemoization(index, n, htMemo);
                    if (!htMemo.ContainsKey(index))
                    {
                        htMemo.Add(index, result);
                    }
                }
            }
            return (BigInteger)htMemo[index];
        }
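The same two-step recipe can also be folded into a single method. This is just a compact sketch of the same idea, not the code above: a typed Dictionary instead of a Hashtable, and no sentinel value, since the lookup and the store happen in one place:

```csharp
using System;
using System.Collections.Generic;
using System.Numerics;

class MemoFib
{
    // Memoized recursion in one method: check the memo first,
    // compute only on a miss, and record the result before returning.
    public static BigInteger SumLastN(int index, int n, Dictionary<int, BigInteger> memo)
    {
        if (index < n) return index;                 // base case, never memoized
        if (memo.TryGetValue(index, out var cached)) // step 1: already in the memo?
            return cached;

        BigInteger sum = 0;
        for (int i = 1; i <= n; i++)
            sum += SumLastN(index - i, n, memo);

        memo[index] = sum;                           // step 2: record it for next time
        return sum;
    }

    static void Main()
    {
        // Each distinct index is now computed exactly once, so this is instant.
        Console.WriteLine(SumLastN(100, 10, new Dictionary<int, BigInteger>()));
    }
}
```

With every index computed at most once, the work drops from roughly 2^90 calls to about a thousand.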

Here is the video showing that one in action (sorry for the background noise and bad video quality): 


And there you have the answer we were looking for, in the blink of an eye:

FibGeneralized(100, 10) = 52383321763167120645301410147

Cheers,
Marcelo

Comments

  1. Thanks for your explanation. It really helped me understand the concept better.

  2. Thanks for the write-up Marcelo! Since my brain is too small for DP, I almost always go from recurrent relation to recursive solution, then add @lru_cache decorator (if it's python) or explicit dictionary or array (if cache is dense and has limited key range) to cache the results and, only if it's not enough, use the heavy artillery - bottom up solution. Sometimes implementation using memoization is actually faster than bottom-up alternative, since a lot of subproblems don't have to be solved at all, but bottom-up algorithm will have to calculate them anyways. At the same time if algorithm is such that most or all subproblems have to be solved, then bottom-up algorithm is faster due to lower overhead, which is especially noticeable on the micro-level, since there are less wrong CPU branch predictions.

