You've probably been told that you can convert any recursive function into an iterative loop just by using an explicit stack.

Tail-recursive functions

Whenever you see an example, it's usually something trivial, because the function was tail-recursive, so you don't even need a stack:

    def fact(n, value=1):
        if n < 2:
            return value
        return fact(n-1, value*n)
That maps directly to:
    def fact(n):
        value = 1
        while True:
            if n < 2:
                return value
            n, value = n-1, value*n
You can merge the while and if, at which point you realize you have a for in disguise. And then, multiplication being commutative and associative and all that, you might as well turn the loop around, so you get:
    def fact(n):
        value = 1
        for i in range(2, n+1):
            value *= i
        return value
(You might also notice that this is just functools.reduce(operator.mul, range(2, n+1), 1), but if you're the kind of person who notices that and finds it more readable, you probably also rewrote the tail-recursive version into a recursive fold/reduce function, and all you had to do was find an iterative reduce function to replace it with.)
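Spelled out as a runnable function, the reduce version from that parenthetical looks like this:

```python
import functools
import operator

def fact(n):
    # Fold * over 2..n, starting from 1, so fact(0) == fact(1) == 1.
    return functools.reduce(operator.mul, range(2, n + 1), 1)
```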

Continuation stacks

Your real program isn't tail-recursive. Either you didn't bother making that transformation because your language doesn't do tail call elimination (Python doesn't), or the whole reason you're switching from recursive to iterative in the first place is that you couldn't figure out a clean way to write your code tail-recursively.

So, now you need a stack. But what goes on the stack?

The most general answer is that you want continuations on the stack: what the rest of the function does with the result of each recursive call. That may sound scary, and in general it is… but in most practical cases, it's not.

Let's say you have this:
    def fact(n):
        if n < 2:
            return 1
        return n * fact(n-1)
What's the continuation? It's "return n * _", where that _ is the return value of the recursive call. You can write a function with one argument that does that. (What about the base case? Well, a function of 1 argument can always ignore its argument). So, instead of storing continuations, you can just store functions:
    def fact(n):
        stack = []
        while True:
            if n < 2:
                stack.append(lambda _: 1)
                break
            stack.append(lambda _, n=n: _ * n)
            n -= 1
        value = None
        for frame in reversed(stack):
            value = frame(value)
        return value
(Notice the n=n in the second lambda. See the Python FAQ for an explanation, but basically it's to make sure we're building a function that uses the current value of n, instead of one that closes over the variable n.)
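The pitfall that n=n avoids is easy to demonstrate on its own (a minimal illustration, separate from the fact example):

```python
# Each of these lambdas closes over the loop variable n itself, so by the
# time any of them is called, n has reached its final value, 2.
late = [lambda: n for n in range(3)]
print([f() for f in late])    # [2, 2, 2]

# A default argument is evaluated when the lambda is defined, capturing
# the value n had at that moment.
early = [lambda n=n: n for n in range(3)]
print([f() for f in early])   # [0, 1, 2]
```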

This is undeniably kind of ugly, but we can start simplifying it. If only the base case and the recursive call had the same form, we could factor out the whole function, right? Well, if we start with 1 instead of None, the base case can return _ * 1. And then, yes, we can factor out the whole function, and just store each n value on the stack:
    def fact(n):
        stack = []
        while True:
            if n < 2:
                stack.append(1)
                break
            stack.append(n)
            n -= 1
        value = 1
        for frame in reversed(stack):
            value = value * frame
        return value
But once we're doing this, why even store the 1? And, once you take that out, the while loop is obviously a for loop over a range in disguise:
    def fact(n):
        stack = []
        for i in range(n, 1, -1):
            stack.append(i)
        value = 1
        for frame in reversed(stack):
            value *= frame
        return value
Now stack is obviously just list(range(n, 1, -1)), so we can skip the loop entirely:
    def fact(n):
        stack = list(range(n, 1, -1))
        value = 1
        for frame in reversed(stack):
            value *= frame
        return value
Now, we don't really care that it's a list, as long as it's something we can pass to reversed. In fact, why even call reversed on a backward range when we can just write a forward range directly?
    def fact(n):
        value = 1
        for frame in range(2, n+1):
            value *= frame
        return value
Not surprisingly, we ended up with the same function we got from the tail recursive starting point.
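The payoff of all this rewriting isn't fact itself, of course; it's that the iterative version is immune to Python's recursion limit. A quick sanity check (assuming CPython's default limit of 1000):

```python
def fact_rec(n):
    if n < 2:
        return 1
    return n * fact_rec(n - 1)

def fact_iter(n):
    value = 1
    for i in range(2, n + 1):
        value *= i
    return value

# The recursive version blows past CPython's default recursion limit long
# before reaching 100,000 frames; the iterative one just keeps looping.
try:
    fact_rec(100_000)
except RecursionError:
    print("recursive version hit the recursion limit")

# fact_iter(100_000) completes fine (the result has roughly 456,000 digits).
assert fact_iter(100_000) > 0
```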

Interpreter stacks

Is there a way to do this in general without stacking up continuations? Of course there is. After all, an interpreter doesn't have to call itself recursively just to execute your recursive call (even if CPython does, Stackless doesn't…), and your CPU certainly isn't calling itself recursively to execute compiled recursive code.

Here's what a function call does: the caller pushes the "program counter" and the arguments onto the stack, then jumps to the callee. The callee pops the arguments, computes the result, pushes the result, and jumps to the popped counter. The only issue is that the callee can have locals that shadow the caller's; you can handle that by pushing all of your locals as well (not the post-transformation locals, which include the stack itself, just the set used by the recursive function).

This sounds like it might be hard to write without a goto, but you can always simulate goto with a loop around a state machine. So:

    State = enum.Enum('State', 'start cont done')

    def fact(n):
        state = State.start
        stack = [(State.done, n)]
        while True:
            if state == State.start:
                pc, n = stack.pop()
                if n < 2:
                    # return 1
                    stack.append(1)
                    state = pc
                    continue
                # stash locals
                stack.append((pc, n))
                # call recursively
                stack.append((State.cont, n-1))
                state = State.start
                continue
            elif state == State.cont:
                # get return value
                retval = stack.pop()
                # restore locals
                pc, n = stack.pop()
                # return n * fact(n-1)
                stack.append(n * retval)
                state = pc
                continue
            elif state == State.done:
                retval = stack.pop()
                return retval
Beautiful, right? Well, we can find ways to simplify this. Let's start by using one of the tricks native-code compilers use: in addition to the stack, you've also got registers. As long as you've got enough registers, you can pass arguments in registers instead of on the stack, and you can return values in registers too. And we can just use local variables for the registers. So:
    def fact(n):
        state = State.start
        pc = State.done
        stack = []
        while True:
            if state == State.start:
                if n < 2:
                    # return 1
                    retval = 1
                    state = pc
                    continue
                stack.append((pc, n))
                pc, n, state = State.cont, n-1, State.start
            elif state == State.cont:
                state, n = stack.pop()
                retval = n * retval
            elif state == State.done:
                return retval
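The same machinery extends to functions with more than one recursive call: you need one continuation state per call site, and each continuation stashes whatever it still needs before making the next call. Here's a sketch for naive Fibonacci (my example, not from the post above; the names FibState, cont1, and cont2 are made up):

```python
import enum

# One continuation state per call site: cont1 resumes after fib(n-1),
# cont2 resumes after fib(n-2).
FibState = enum.Enum('FibState', 'start cont1 cont2 done')

def fib(n):
    state = FibState.start
    pc = FibState.done
    stack = []
    retval = None
    while True:
        if state == FibState.start:
            if n < 2:
                retval = n                # base case: return n
                state = pc
            else:
                stack.append((pc, n))     # stash locals
                pc, n, state = FibState.cont1, n - 1, FibState.start  # fib(n-1)
        elif state == FibState.cont1:
            pc, n = stack.pop()           # restore locals
            stack.append((pc, retval))    # stash the first result
            pc, n, state = FibState.cont2, n - 2, FibState.start      # fib(n-2)
        elif state == FibState.cont2:
            state, first = stack.pop()
            retval = first + retval       # return fib(n-1) + fib(n-2)
        elif state == FibState.done:
            return retval
```

Tracing fib(3) by hand is a good way to convince yourself that the stack discipline matches what the recursive version's call stack would be doing.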

It's been more than a decade since Typical Programmer Greg Jorgensen taught the world about Abject-Oriented Programming.

Much of what he said still applies, but other things have changed. Languages in the Abject-Oriented space have been borrowing ideas from another paradigm entirely—and then everyone realized that languages like Python, Ruby, and JavaScript had been doing it for years and just hadn't noticed (because these languages do not require you to declare what you're doing, or even to know what you're doing). Meanwhile, new hybrid languages borrow freely from both paradigms.

This other paradigm—which is actually older, but was largely constrained to university basements until recent years—is called Functional Addiction.

A Functional Addict is someone who regularly gets higher-order—sometimes they may even exhibit dependent types—but still manages to retain a job.

Retaining a job is of course the goal of all programming. This is why some of these new hybrid languages, like Rust, check all borrowing, from both paradigms, so extensively that you can make regular progress for months without ever successfully compiling your code, and your managers will appreciate that progress. After all, once it does compile, it will definitely work.

Closures

It's long been known that Closures are dual to Encapsulation.

As Abject-Oriented Programming explained, Encapsulation involves making all of your variables public, and ideally global, to let the rest of the code decide what should and shouldn't be private.

Closures, by contrast, are a way of referring to variables from outer scopes. And there is no scope more outer than global.

Immutability

One of the reasons Functional Addiction has become popular in recent years is that to truly take advantage of multi-core systems, you need immutable data, sometimes also called persistent data.

Instead of mutating a function to fix a bug, you should always make a new copy of that function. For example:

function getCustName(custID)
{
    custRec = readFromDB("customer", custID);
    fullname = custRec[1] + ' ' + custRec[2];
    return fullname;
}

When you discover that you actually wanted fields 2 and 3 rather than 1 and 2, it might be tempting to mutate the state of this function. But doing so is dangerous. The right answer is to make a copy, and then try to remember to use the copy instead of the original:

function getCustName(custID)
{
    custRec = readFromDB("customer", custID);
    fullname = custRec[1] + ' ' + custRec[2];
    return fullname;
}

function getCustName2(custID)
{
    custRec = readFromDB("customer", custID);
    fullname = custRec[2] + ' ' + custRec[3];
    return fullname;
}

This means anyone still using the original function can continue to reference the old code, but as soon as it's no longer needed, it will be automatically garbage collected. (Automatic garbage collection isn't free, but it can be outsourced cheaply.)

Higher-Order Functions

In traditional Abject-Oriented Programming, you are required to give each function a name. But over time, the name of the function may drift away from what it actually does, making it as misleading as comments. Experience has shown that people will only keep one copy of their information up to date, and the CHANGES.TXT file is the right place for that.

Higher-Order Functions can solve this problem:

function []Functions = [
    lambda(custID) {
        custRec = readFromDB("customer", custID);
        fullname = custRec[1] + ' ' + custRec[2];
        return fullname;
    },
    lambda(custID) {
        custRec = readFromDB("customer", custID);
        fullname = custRec[2] + ' ' + custRec[3];
        return fullname;
    },
]

Now you can refer to these functions by order, so there's no need for names.

Parametric Polymorphism

Traditional languages offer Abject-Oriented Polymorphism and Ad-Hoc Polymorphism (also known as Overloading), but better languages also offer Parametric Polymorphism.

The key to Parametric Polymorphism is that the type of the output can be determined from the type of the inputs via Algebra. For example:

function getCustData(custId, x)
{
    if (x == int(x)) {
        custRec = readFromDB("customer", custId);
        fullname = custRec[1] + ' ' + custRec[2];
        return int(fullname);
    } else if (x.real == 0) {
        custRec = readFromDB("customer", custId);
        fullname = custRec[1] + ' ' + custRec[2];
        return double(fullname);
    } else {
        custRec = readFromDB("customer", custId);
        fullname = custRec[1] + ' ' + custRec[2];
        return complex(fullname);
    }
}

Notice that we've called the variable x. This is how you know you're using Algebraic Data Types. The names y, z, and sometimes w are also Algebraic.

Type Inference

Languages that enable Functional Addiction often feature Type Inference. This means that the compiler can infer your typing without you having to be explicit:


function getCustName(custID)
{
    // WARNING: Make sure the DB is locked here or
    custRec = readFromDB("customer", custID);
    fullname = custRec[1] + ' ' + custRec[2];
    return fullname;
}

We didn't specify what will happen if the DB is not locked. And that's fine, because the compiler will figure it out and insert code that corrupts the data, without us needing to tell it to!

By contrast, most Abject-Oriented languages are either nominally typed—meaning that you give names to all of your types instead of meanings—or dynamically typed—meaning that your variables are all unique individuals that can accomplish anything if they try.

Memoization

Memoization means caching the results of a function call:

function getCustName(custID)
{
    if (custID == 3) { return "John Smith"; }
    custRec = readFromDB("customer", custID);
    fullname = custRec[1] + ' ' + custRec[2];
    return fullname;
}

Non-Strictness

Non-Strictness is often confused with Laziness, but in fact Laziness is just one kind of Non-Strictness. Here's an example that compares two different forms of Non-Strictness:

/****************************************
*
* TO DO:
*
* get tax rate for the customer state
* eventually from some table
*
****************************************/
// function lazyTaxRate(custId) {}

function callByNameTaxRate(custId)
{
    /****************************************
    *
    * TO DO:
    *
    * get tax rate for the customer state
    * eventually from some table
    *
    ****************************************/
}

Both are Non-Strict, but the second one forces the compiler to actually compile the function just so we can Call it By Name. This causes code bloat. The Lazy version will be smaller and faster. Plus, Lazy programming allows us to create infinite recursion without making the program hang:

/****************************************
*
* TO DO:
*
* get tax rate for the customer state
* eventually from some table
*
****************************************/
// function lazyTaxRateRecursive(custId) { lazyTaxRateRecursive(custId); }

Laziness is often combined with Memoization:

function getCustName(custID)
{
    // if (custID == 3) { return "John Smith"; }
    custRec = readFromDB("customer", custID);
    fullname = custRec[1] + ' ' + custRec[2];
    return fullname;
}

Outside the world of Functional Addicts, this same technique is often called Test-Driven Development. If enough tests can be embedded in the code to achieve 100% coverage, or at least a decent amount, your code is guaranteed to be safe. But because the tests are not compiled and executed in the normal run, or indeed ever, they don't affect performance or correctness.

Conclusion

Many people claim that the days of Abject-Oriented Programming are over. But this is pure hype. Functional Addiction and Abject Orientation are not actually at odds with each other, but instead complement each other.