In the Gut: Reaching a Better Understanding of Recursion

halented

Hal Friday

Posted on February 9, 2024


Oftentimes people (me) get swept into the habit of solving recursion-style algorithms in the neat little scenarios you find on popular sites like LeetCode and HackerRank. That can come in handy if you run into such a problem during a live coding interview, because those scenarios share a lot of qualities, and you can carry the specific skillset you learned on those sandbox sites over into whatever IDE is being used for the interview.

That has limited usefulness, though. I'm not interested in learning a cleansed version of the skill that mostly makes sense in the vacuum of interview tricks; I want to learn recursion "in the gut," to the point where it occurs to me naturally when building solutions in a real-world setting, on a production-level job.

With that goal in mind, I set out to revisit recursion and develop an understanding of it that is more versatile and less pattern-based. I wanted to record some of my experience exploring that here in this blog post.

It makes sense to start where you are; in my case, that means a review of popular problems I've solved with recursion. I'd like to go over each of the following three and capture some conceptual, overarching takeaways from each before moving forward into a more unknown, creative space and applying those concepts.

toThePowerOf

Spoiler alert! Each section will show code solutions. If you'd like to try toThePowerOf yourself before reading, check it out here.

One of the simplest recursion problems, and one that can be used to ground your understanding of the whole thing, is the toThePowerOf function. Your function receives a base number (b) and a power (p). It must return an integer: the result of taking b to the power of p.

For me, the important concept to carry forward from this problem into our creative space is that recursion is easier to think about in an upside-down way. By that I mean the first thing you need to know about your recursive problem is: what does the end look like? At what point does your algorithm know to stop recurring? That's called the "base case," which I honestly think is kind of a confusing name, because it's the case at the very tippy top of the recursion stack. In my opinion it should be called the peak case. Doesn't quite have the same ring to it; maybe crest case is better.

Anyway.

How can we determine the crest case for an algorithm which returns b to the power of p? The function should continue to recur until the power variable (p) is down to 1. At that point, we can start climbing back down the stack that has been built, multiplying each result by the base variable (b). It's difficult to understand this without looking at it, so take a peek at the code:



const toThePowerOf = (b, p) => {
    // crest case: any number to the power of 1 is itself
    if (p === 1) {
        return b;
    }
    // otherwise, multiply b by the result of the next call up the stack
    return b * toThePowerOf(b, p - 1);
};




Taking it chunk by chunk, you have the crest case, when the power becomes one. Any number to the power of one is itself, so we just return that. After that, we start heading back down the stack — the return value of the most recent function call is multiplied by b, and so on. Once the stack completes, it will spit out b to the power of p.

To make sure the mental model is complete, let's take a look at this visualization of the function doing its work when given the parameters 2 and 3 as b and p, respectively.

(Image: call stack and flow of calls for the toThePowerOf function)

(You can look at this on Google Drawings directly if you would like, from this link.)

As you can see, call #3 is at the top of the call stack, and it ends the recursion and completes first. It is the shortest running function in the bunch. Call #1 is at the bottom of the call stack, and when all is said and done, call #1 will be the one to return the correct answer.
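If it helps, here's the same flow written out as a quick check (this reuses the toThePowerOf definition above, with b = 2 and p = 3):

console.log(toThePowerOf(2, 3)); // 8
// call #1: toThePowerOf(2, 3) waits on 2 * toThePowerOf(2, 2)
// call #2: toThePowerOf(2, 2) waits on 2 * toThePowerOf(2, 1)
// call #3: toThePowerOf(2, 1) hits the crest case and returns 2
// unwinding: 2 -> 2 * 2 = 4 -> 2 * 4 = 8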

If that's not quite hitting, I recommend popping the function into this visualizer and stepping through the calls as they run. One warning about that site: it isn't perfect, and asking it to visualize denser recursion problems has produced bugs and/or incorrect results for me. But it works well for this powers example.

This type of recursion problem is what I'd like to call linear. It steps up the call stack; it steps back down the call stack. This isn't the common use-case for recursion as seen in the real world, because such problems can also be solved in an iterative way without sacrificing much speed or space. You could just use a while-loop.
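For comparison, here is a minimal iterative sketch of the same function (toThePowerOfIterative is just my own name for it, and it assumes p is at least 1):

// An iterative take on the same problem, assuming p >= 1
const toThePowerOfIterative = (b, p) => {
    let result = b;
    while (p > 1) {
        result *= b;
        p -= 1;
    }
    return result;
};

console.log(toThePowerOfIterative(2, 3)); // 8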

A case where recursion is a little more useful might be...

nthFibonacciNumber

Check out the problem here if you'd like to attempt it before seeing any solution.

When visualized, the recursive solution for the nthFibonacciNumber algorithm does not have one single line of calls, so it takes a more branched shape. The call flow steps all the way up the call stack, then on its way back down, it must make separate branches off of the main stack which it completes along the way. Calling nthFibonacciNumber(5) has a result that looks something like this:

(Image: visualized call tree for the nthFibonacciNumber algorithm given the parameter 5)

(Image generated on Recursion Tree Visualizer)

While studying the algorithm this time around, the concept I focused on retaining revolved around the use of returns.

Recursive algorithms all have at least 2 options for returning:

  1. The return that happens during the crest case; the return that ceases any further recursion, which should never change, and
  2. The return that receives the bulk of the work and is sent on down the line, which changes with each completed call

Using these carefully and understanding what is needed from each before beginning to write out a solution will make the actual programming part a lot easier.
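To make those two returns concrete outside of the Fib problem, here is a tiny made-up example (sumToN is my own hypothetical helper, not part of any of the problems above) with both returns labeled:

// A hypothetical helper that adds the integers from 1 through n
const sumToN = (n) => {
    if (n === 1) {
        // return #1: the crest case -- stops the recursion and never changes
        return 1;
    }
    // return #2: carries the bulk of the work and changes with each completed call
    return n + sumToN(n - 1);
};

console.log(sumToN(4)); // 1 + 2 + 3 + 4 = 10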

So, applying the first concept to the nthFibo problem, our crest case (and first return value) should be the point at which we stop adding recursions to the stack. Since the algorithm's argument is the number of times we need to perform the fibo operation, we count backward from that to the end of the line, and our crest case should be 1 and/or 0 -- it just needs to stop before anything goes negative.

As for the second return value, it needs to be the result of the operations to actually find a Fib number: the last two Fib numbers added together.

Here's what the whole thing looks like.



const fib = (n) => {
    // crest case: fib(0) is 0 and fib(1) is 1, so return n directly
    if (n <= 1) {
        return n;
    }
    // otherwise, add together the two previous Fib numbers
    return fib(n - 1) + fib(n - 2);
};



Structurally, you can really see the similarities between this one and the toThePowerOf algorithm. The big difference is that fib calls itself twice at the end instead of just once. That creates the branching pattern you saw in the visualizer. Aside from the crest case, each call must resolve two other calls before it can complete and produce an actual value. As difficult as it may be to follow the function calls and their resolutions in your head, the final line of the algorithm winds up looking pretty intuitive once it's on the page.
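As a quick sanity check against the tree above (reusing the fib definition):

console.log(fib(5)); // 5
// fib(5) = fib(4) + fib(3) = 3 + 2
// the branching means 15 separate calls are made just to produce that one number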

With that we'll move on to an algorithm which has a lot more branching.

mergeSort

Check out the problem here on GeeksForGeeks.

With this algorithm, the factors are less simple and the application is a little more involved. But the two concepts mentioned previously, understanding the crest case and conceptualizing the possible return values, are still a super useful place to start.

The third concept I'd like to anchor myself to, and use in this solution and others, is also a little more involved. It's the thoughtful headspace one should be in when considering recursion: what types of patterns signify that recursion could be a useful solution? When is it powerful, and how can you look at a set of circumstances and transform it into something that fits the bill for a recursive function?

With all three problems so far, recursion can be used because one or two operations need to take place over and over again. But for the first two, an argument can be made that in terms of space and time complexity, a recursive solution doesn't really give you anything over an iterative one (except bragging rights); the naive recursive Fibonacci actually repeats a lot of work that an iterative version wouldn't.
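To make that concrete, here is a minimal iterative sketch of the same Fibonacci result (fibIterative is my own name for it), which runs in linear time and constant space:

// An iterative take on the nth Fibonacci number
const fibIterative = (n) => {
    let prev = 0;
    let curr = 1;
    for (let i = 0; i < n; i++) {
        [prev, curr] = [curr, prev + curr];
    }
    return prev;
};

console.log(fibIterative(5)); // 5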

With mergeSort, a recursive solution makes a bit more sense. The algorithm has a lot of branches to resolve, and it's important that those branches be resolved in order. This type of data movement would be complicated and annoying to program iteratively, but it works pretty gracefully as a recursive approach. Take a look at the code and the tree of calls:



const mergeSort = (arr) => {
    // crest case: an array with 0 or 1 elements is already sorted
    if (arr.length < 2) {
        return arr;
    }
    // split the array in half and recursively sort each half
    const mid = Math.floor(arr.length / 2);
    const leftArr = arr.slice(0, mid);
    const rightArr = arr.slice(mid);

    return merge(mergeSort(leftArr), mergeSort(rightArr));
}

const merge = (left, right) => {
    const sorted = [];

    // take the smaller front element from either half until one half runs out
    while (left.length && right.length) {
        if (left[0] <= right[0]) {
            sorted.push(left.shift());
        }
        else {
            sorted.push(right.shift());
        }
    }
    // whatever is left over is already sorted, so tack it onto the end
    return [...sorted, ...left, ...right];
}




(Image: call tree visualization for mergeSort)
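As a quick check of the code above on a small array:

console.log(mergeSort([5, 2, 4, 1, 3])); // [1, 2, 3, 4, 5]
// each half is sorted recursively, then merge zips the halves back together in order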

Using mergeSort recursively like this still has limited viability in real-world programming, since this particular solution isn't ideal for the large data sets you might see in a production-level database. But with smaller data sets, there are plenty of circumstances where it can be applied and actually satisfy the requirements in the most elegant way.

To the Real World

With these three items in mind, one can start getting creative in the application space. I recently spoke with a mentor of mine who has more than a decade of experience in the field, and he told me that recursion is definitely a tool he uses in his regular engineering life, albeit somewhat rarely. An example he gave fit the profile perfectly for the type of use case I had been imagining: he had recently built a solution to dynamically construct URLs by recursing into an Angular object and collecting relevant breadcrumbs along the way. This is the ideal situation: the dataset is reasonably small, and the flow of the data is relatively complex and follows a branching pattern.
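I don't have his code, but here is a toy sketch of the general idea in plain JavaScript (the route object shape, the segment and children field names, and buildUrl are all hypothetical, not his actual Angular implementation):

// Recurse into a nested object, collecting breadcrumbs until the target is found
const buildUrl = (node, target, path = []) => {
    const nextPath = [...path, node.segment];
    if (node.segment === target) {
        // crest case: found the target, so assemble the collected breadcrumbs
        return '/' + nextPath.join('/');
    }
    for (const child of node.children ?? []) {
        const result = buildUrl(child, target, nextPath);
        if (result) return result; // a branch below this one found the target
    }
    return null; // this branch doesn't contain the target
};

const routes = {
    segment: 'app',
    children: [
        { segment: 'settings', children: [{ segment: 'profile' }] },
        { segment: 'dashboard' },
    ],
};

console.log(buildUrl(routes, 'profile')); // "/app/settings/profile"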

By framing the circumstances and visualizing them before writing any code, you can start to see recursion as a satisfying solution for situations like building a dynamic URL, and I'm sure many others. This post is mostly a journal entry for my own adventures in becoming more comfortable with recursion, but I hope it helps anyone else who wants to "level up" in this particular category. Please feel free to leave a comment with any questions or thoughts; I'd love to chat about others' experiences!
