What's the difference between a list comprehension and calling list on a generator expression? (By the way, everything below applies to set comprehensions and, with trivial tweaks, dict comprehensions, but I'm only going to talk about lists for simplicity.)

To be concrete, what's the difference between:

    [x for x in it]
    list(x for x in it)

As it turns out, they behave exactly the same way except for two differences. (In Python 3.0-3.4; there were more differences in 2.x.)

StopIteration

First, if you raise StopIteration anywhere inside (in the main loop clause, any additional clauses, or the expression), the former will pass the exception straight through, while the latter will eat it and end early. So, for example:

    >>> def stop(): raise StopIteration
    >>> a = [x for x in (1, 0) if x or stop()]
    StopIteration
    >>> a
    NameError: name 'a' is not defined
    >>> a = list(x for x in (1, 0) if x or stop())
    >>> a
    [1]

It would be nice to be able to make them behave _exactly_ the same way. That would simplify the language--no more need to define two very similar concepts independently; you can just define comprehensions as if calling list on the equivalent generator expression.
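
For example, under that definition these two spellings would be equivalent by construction (a quick sanity check, not the actual spec wording):

    matrix = [[1, 2], [3, 4]]
    flat_comp = [n * 10 for row in matrix if row for n in row]
    flat_gen = list(n * 10 for row in matrix if row for n in row)
    assert flat_comp == flat_gen == [10, 20, 30, 40]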

Performance

The list(genexpr) version is up to 40% slower than the comprehension. That isn't acceptable. The benefit of simplifying the language (and, as a minor side benefit, being able to end a listcomp early with StopIteration) isn't worth that cost.
So, is there a way to optimize that?
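
For reference, here's a rough way to measure the gap with timeit; the exact numbers vary by machine, 3.x version, and iterable size:

    import timeit

    setup = "it = list(range(1000))"
    # Both build the same list; the difference is pure interpreter overhead.
    print(min(timeit.repeat("[x for x in it]", setup, number=10000)))
    print(min(timeit.repeat("list(x for x in it)", setup, number=10000)))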

Implementation

Before we try to optimize the bytecode, we have to know what it looks like.
Let's take a trivial comprehension, [i for i in x] (where x is some name already bound to an iterable). This is obviously a silly thing to write, but not having anything extra to get in the way will make the bytecode easier to read.
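
You can follow along with the dis module; the listings below are hand-simplified (argument indices and offsets stripped), but this shows the real thing:

    import dis

    # The outer code: build the magic function, then call it on iter(x).
    outer = compile("[i for i in x]", "<demo>", "eval")
    dis.dis(outer)

    # The magic function itself is a code object in the constants table.
    inner = next(c for c in outer.co_consts if hasattr(c, "co_code"))
    dis.dis(inner)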

The comprehension looks like this:

    LOAD_CONST <code object <listcomp>>
    LOAD_CONST '<listcomp>'
    MAKE_FUNCTION 0
    LOAD_NAME x
    GET_ITER
    CALL_FUNCTION 1

And an equivalent genexpr is pretty much identical:

    LOAD_CONST <code object <genexpr>>
    LOAD_CONST '<genexpr>'
    MAKE_FUNCTION 0
    LOAD_NAME x
    GET_ITER
    CALL_FUNCTION 1

That's not very interesting--it just loads a magic code object from the constants, makes a function out of it, calls it with iter(x), and the call's result is the value of the whole expression! What does that magic function look like? For the comprehension:

    BUILD_LIST
    LOAD_FAST .0
    :loop
    FOR_ITER :endloop
    LIST_APPEND 1
    JUMP_ABSOLUTE :loop
    :endloop
    RETURN_VALUE

(Actually, a real listcomp will STORE_FAST i and then LOAD_FAST i before the LIST_APPEND, because Python has no way of knowing that the expression being appended is always just i itself, but I stripped that out for simplicity.)

And for the genexpr:

    LOAD_FAST .0
    :loop
    FOR_ITER :endloop
    YIELD_VALUE
    POP_TOP
    JUMP_ABSOLUTE :loop
    :endloop
    LOAD_CONST None
    RETURN_VALUE

So, the only differences are that there's no BUILD_LIST, it YIELDs and POPs each value instead of LIST_APPENDing it, and it returns None instead of the list.

As you can guess, calling list on the genexpr looks like this:

    LOAD_NAME list
    LOAD_CONST <code object <genexpr>>
    LOAD_CONST '<genexpr>'
    MAKE_FUNCTION 0
    LOAD_NAME x
    GET_ITER
    CALL_FUNCTION 1
    CALL_FUNCTION 1

In other words, it's just list(genexpr-function(iter(x))).
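
In pure Python, with made-up names standing in for the compiler-generated pieces, that's roughly:

    def _genexpr(dot_zero):       # stands in for the anonymous <genexpr> code object
        for x in dot_zero:        # dot_zero plays the role of the .0 argument
            yield x

    it = [1, 0, 2]
    assert list(_genexpr(iter(it))) == [1, 0, 2]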

Handling StopIteration in listcomp-code

If the only difference between [listcomp] and list(genexpr) is that the latter handles StopIteration, there's a pretty obvious way to make them act the same without the 40% performance hit: just make listcomp handle StopIteration.

In pseudo-Python, the current listcomp-code looks like this:

    a = []
    for i in x:
        a.append(i)
    return a

And we want this:

    a = []
    try:
        for i in x:
            a.append(i)
    except StopIteration:
        pass
    return a
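
As a pure-Python model of that behavior (not how the compiler would actually do it), the stop() example from earlier would then work inside a comprehension too:

    def stop():
        raise StopIteration

    def model_listcomp(it):       # hypothetical: models the proposed [x for x in it if x or stop()]
        a = []
        try:
            for x in it:
                if x or stop():
                    a.append(x)
        except StopIteration:
            pass
        return a

    assert model_listcomp((1, 0)) == [1]   # same as list(x for x in (1, 0) if x or stop())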

Let's translate that into bytecode:

    BUILD_LIST
    SETUP_EXCEPT :except
    LOAD_FAST .0
    :loop
    FOR_ITER :endloop
    LIST_APPEND 1
    JUMP_ABSOLUTE :loop
    :except
    DUP_TOP
    LOAD_GLOBAL StopIteration
    COMPARE_OP exception_match
    POP_JUMP_IF_FALSE :raise
    POP_TOP
    POP_TOP
    POP_TOP
    POP_EXCEPT
    JUMP_FORWARD :endloop
    :raise
    END_FINALLY
    :endloop
    RETURN_VALUE

I cheated a bit by merging endloop and endexcept into one. Normally, Python would compile this so the FOR_ITER jumped to a JUMP_FORWARD that jumped to the actual ending, but when we're hand-coding (or writing new special-case compiler code) there's no reason to do that.

Of course in real life you wouldn't want to LOAD_GLOBAL StopIteration, but this gets the idea across.

So, how much slower is this?

Well, there's no per-iteration cost, because the code inside the loop is the same as ever.

There is a tiny bit of constant overhead from the SETUP_EXCEPT. It's around 12ns on a machine where a simple listcomp takes around 500ns + 100ns/iteration. So, we're talking under 1% overhead for most cases. There's also probably some cost from loading a larger function and jumping a bit farther, although I haven't been able to measure it.
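
One rough way to separate the constant cost from the per-iteration cost, if you want numbers like those for your own machine, is to time an empty input against a large one:

    import timeit

    n = 100000
    empty = min(timeit.repeat("[x for x in it]", "it = []", number=n))
    big = min(timeit.repeat("[x for x in it]", "it = list(range(1000))", number=n))
    print("per-call overhead (ns):", empty / n * 1e9)
    print("per-iteration cost (ns):", (big - empty) / n / 1000 * 1e9)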

If you actually raise StopIteration, of course, that slows things down by maybe 250ns, but since that didn't work before, you can't complain that it's slower.

If you raise anything else, it also adds a similar amount of time (a bit harder to measure), but I don't think anyone cares about the performance of list comprehensions that fail by raising.

Meanwhile, the required changes to CPython are all in one function in compile.c, and not very complicated.

If we were willing to add a new opcode, we could add a SETUP_STOP_ITERATION, which jumps to its target on StopIteration and lets any other exception propagate. Then we only need one new line in the code:

    BUILD_LIST
    LOAD_FAST .0
    SETUP_STOP_ITERATION :endloop
    :loop
    FOR_ITER :endloop
    LIST_APPEND 1
    JUMP_ABSOLUTE :loop
    :endloop
    RETURN_VALUE

This obviously makes the compiler simpler, but it does so at the cost of a new bytecode, which really just moves the complexity somewhere else--and somewhere less desirable. (Adding a new bytecode is a bigger change than just changing the compiler to compile different bytecode.) And it wouldn't be any faster for the typical fast path (no exceptions). It might be a little faster when exceptions are raised, and it does save a few bytes in the compiled bytecode, but I don't think that's worth it.

Optimizing list(genexpr)

If we're going to simplify the language, wouldn't it be nice to also simplify the implementation? Can't we get rid of the code to build magic listcomp functions, and maybe even the special BUILD_LIST and LIST_APPEND opcodes?

We need to know why it's 40% slower before we can fix it. It's not because of the call to list. In fact, we can inline the list building:

    BUILD_LIST
    LOAD_CONST <code object <genexpr>>
    LOAD_CONST '<genexpr>'
    MAKE_FUNCTION 0
    LOAD_NAME x
    GET_ITER
    CALL_FUNCTION 1
    :loop
    FOR_ITER :endloop
    LIST_APPEND 2
    JUMP_ABSOLUTE :loop
    :endloop
This shaves off a few nanoseconds of constant cost and a few nanoseconds per iteration, but doesn't make much of a dent in the 40%.

The real cost here is that we have to go back and forth between the FOR_ITER and the inner function's YIELD once per iteration. In other words, we're doing a generator suspend and resume for each iteration.
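
You can see that suspend/resume cost in isolation by comparing a plain loop over a list with the same loop routed through a do-nothing generator (a rough sketch; passthrough is just an illustrative name):

    import timeit

    def passthrough(it):
        for x in it:
            yield x

    data = list(range(1000))
    setup = "from __main__ import passthrough, data"
    # The only difference between these two loops is one generator
    # suspend/resume per element.
    print(min(timeit.repeat("for x in data: pass", setup, number=5000)))
    print(min(timeit.repeat("for x in passthrough(data): pass", setup, number=5000)))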

So, what we need are new FAST_FOR_ITER and FAST_YIELD opcodes that can hand control back and forth within a single function, without a generator suspend and resume. And we'll also need a FAST_RETURN, of course.

So, FAST_FOR_ITER has to jump to the inlined generator. It has nowhere to put an extra operand, but that's fine; since it doesn't ever directly run its own outer body, we can just put the inlined generator right after it. Next, FAST_YIELD has to jump to the outer loop body. Then, the outer loop body has to jump to the line after FAST_YIELD, instead of all the way back to the FAST_FOR_ITER. The inner loop has to jump to the outer FAST_FOR_ITER instead of its own FOR_ITER:

    BUILD_LIST
    LOAD_FAST .0
    :outerloop
    FAST_FOR_ITER :outerendloop
    FOR_ITER :innerendloop
    FAST_YIELD :outerbody
    :innercontinue
    POP_TOP
    JUMP_ABSOLUTE :outerloop
    :innerendloop
    LOAD_CONST None
    FAST_RETURN :outerloop
    :outerbody
    LIST_APPEND 1
    JUMP_ABSOLUTE :innercontinue
    :outerendloop
    RETURN_VALUE

What exactly is FAST_FOR_ITER doing here? It's not really iterating anything; it just jumps to :outerendloop if you've raised StopIteration or called FAST_RETURN, and falls through to the next line otherwise.

I'm not sure how it can even know whether it's gotten here as a result of a FAST_RETURN, or a FAST_YIELD that's been processed inline... but as it turns out, we can just optimize out the FAST_RETURN, because all we're ever going to do is ignore what we got and jump to :outerendloop. So it doesn't really matter how we'd implement it; let's just replace it with a JUMP_FORWARD to :outerendloop.

    BUILD_LIST
    LOAD_FAST .0
    :outerloop
    FAST_FOR_ITER :outerendloop
    FOR_ITER :innerendloop
    FAST_YIELD :outerbody
    :innercontinue
    POP_TOP
    JUMP_ABSOLUTE :outerloop
    :innerendloop
    JUMP_FORWARD :outerendloop
    :outerbody
    LIST_APPEND 1
    JUMP_ABSOLUTE :innercontinue
    :outerendloop
    RETURN_VALUE

But now, what exactly is FAST_YIELD doing? Basically it's just doing a DUP_TOP and a relative jump (a JUMP_FORWARD). And we don't need the DUP_TOP, because we don't actually need the value after we jump back here--all we do is POP_TOP it. So:

    BUILD_LIST
    LOAD_FAST .0
    :outerloop
    FAST_FOR_ITER :outerendloop
    FOR_ITER :innerendloop
    JUMP_FORWARD :outerbody
    :innercontinue
    JUMP_ABSOLUTE :outerloop
    :innerendloop
    JUMP_FORWARD :outerendloop
    :outerbody
    LIST_APPEND 1
    JUMP_ABSOLUTE :innercontinue
    :outerendloop
    RETURN_VALUE

Now we've got all these lines that just jump to other jumps, so we can optimize them all out:

    BUILD_LIST
    LOAD_FAST .0
    :loop
    FAST_FOR_ITER :endloop
    FOR_ITER :endloop
    LIST_APPEND 1
    JUMP_ABSOLUTE :loop
    :endloop
    RETURN_VALUE

And now, this is identical to the original listcomp code, except for that outer FAST_FOR_ITER opcode.

And what exactly is it doing? Basically, if you've raised StopIteration, it jumps to :endloop. But there's no need to ever jump back to it for it to serve that purpose; it can work just like SETUP_EXCEPT. In fact, it's exactly the same as the SETUP_STOP_ITERATION above. So, let's replace it and move it out of the loop:

    BUILD_LIST
    LOAD_FAST .0
    SETUP_STOP_ITERATION :endloop
    :loop
    FOR_ITER :endloop
    LIST_APPEND 1
    JUMP_ABSOLUTE :loop
    :endloop
    RETURN_VALUE

And that's exactly the same code we had for adding StopIteration handling to [listcomp].
So, yes, you can inline and optimize list(genexpr), but the result is exactly the same as adding StopIteration handling to [listcomp].

It's been more than a decade since Typical Programmer Greg Jorgensen taught the world about Abject-Oriented Programming.

Much of what he said still applies, but other things have changed. Languages in the Abject-Oriented space have been borrowing ideas from another paradigm entirely—and then everyone realized that languages like Python, Ruby, and JavaScript had been doing it for years and just hadn't noticed (because these languages do not require you to declare what you're doing, or even to know what you're doing). Meanwhile, new hybrid languages borrow freely from both paradigms.

This other paradigm—which is actually older, but was largely constrained to university basements until recent years—is called Functional Addiction.

A Functional Addict is someone who regularly gets higher-order—sometimes they may even exhibit dependent types—but still manages to retain a job.

Retaining a job is of course the goal of all programming. This is why some of these new hybrid languages, like Rust, check all borrowing, from both paradigms, so extensively that you can make regular progress for months without ever successfully compiling your code, and your managers will appreciate that progress. After all, once it does compile, it will definitely work.

Closures

It's long been known that Closures are dual to Encapsulation.

As Abject-Oriented Programming explained, Encapsulation involves making all of your variables public, and ideally global, to let the rest of the code decide what should and shouldn't be private.

Closures, by contrast, are a way of referring to variables from outer scopes. And there is no scope more outer than global.

Immutability

One of the reasons Functional Addiction has become popular in recent years is that to truly take advantage of multi-core systems, you need immutable data, sometimes also called persistent data.

Instead of mutating a function to fix a bug, you should always make a new copy of that function. For example:

function getCustName(custID)
{
    custRec = readFromDB("customer", custID);
    fullname = custRec[1] + ' ' + custRec[2];
    return fullname;
}

When you discover that you actually wanted fields 2 and 3 rather than 1 and 2, it might be tempting to mutate the state of this function. But doing so is dangerous. The right answer is to make a copy, and then try to remember to use the copy instead of the original:

function getCustName(custID)
{
    custRec = readFromDB("customer", custID);
    fullname = custRec[1] + ' ' + custRec[2];
    return fullname;
}

function getCustName2(custID)
{
    custRec = readFromDB("customer", custID);
    fullname = custRec[2] + ' ' + custRec[3];
    return fullname;
}

This means anyone still using the original function can continue to reference the old code, but as soon as it's no longer needed, it will be automatically garbage collected. (Automatic garbage collection isn't free, but it can be outsourced cheaply.)

Higher-Order Functions

In traditional Abject-Oriented Programming, you are required to give each function a name. But over time, the name of the function may drift away from what it actually does, making it as misleading as comments. Experience has shown that people will only keep one copy of their information up to date, and the CHANGES.TXT file is the right place for that.

Higher-Order Functions can solve this problem:

function []Functions = [
    lambda(custID) {
        custRec = readFromDB("customer", custID);
        fullname = custRec[1] + ' ' + custRec[2];
        return fullname;
    },
    lambda(custID) {
        custRec = readFromDB("customer", custID);
        fullname = custRec[2] + ' ' + custRec[3];
        return fullname;
    },
]

Now you can refer to these functions by order, so there's no need for names.

Parametric Polymorphism

Traditional languages offer Abject-Oriented Polymorphism and Ad-Hoc Polymorphism (also known as Overloading), but better languages also offer Parametric Polymorphism.

The key to Parametric Polymorphism is that the type of the output can be determined from the type of the inputs via Algebra. For example:

function getCustData(custId, x)
{
    if (x == int(x)) {
        custRec = readFromDB("customer", custId);
        fullname = custRec[1] + ' ' + custRec[2];
        return int(fullname);
    } else if (x.real == 0) {
        custRec = readFromDB("customer", custId);
        fullname = custRec[1] + ' ' + custRec[2];
        return double(fullname);
    } else {
        custRec = readFromDB("customer", custId);
        fullname = custRec[1] + ' ' + custRec[2];
        return complex(fullname);
    }
}

Notice that we've called the variable x. This is how you know you're using Algebraic Data Types. The names y, z, and sometimes w are also Algebraic.

Type Inference

Languages that enable Functional Addiction often feature Type Inference. This means that the compiler can infer your typing without you having to be explicit:


function getCustName(custID)
{
    // WARNING: Make sure the DB is locked here or
    custRec = readFromDB("customer", custID);
    fullname = custRec[1] + ' ' + custRec[2];
    return fullname;
}

We didn't specify what will happen if the DB is not locked. And that's fine, because the compiler will figure it out and insert code that corrupts the data, without us needing to tell it to!

By contrast, most Abject-Oriented languages are either nominally typed—meaning that you give names to all of your types instead of meanings—or dynamically typed—meaning that your variables are all unique individuals that can accomplish anything if they try.

Memoization

Memoization means caching the results of a function call:

function getCustName(custID)
{
    if (custID == 3) { return "John Smith"; }
    custRec = readFromDB("customer", custID);
    fullname = custRec[1] + ' ' + custRec[2];
    return fullname;
}

Non-Strictness

Non-Strictness is often confused with Laziness, but in fact Laziness is just one kind of Non-Strictness. Here's an example that compares two different forms of Non-Strictness:

/****************************************
*
* TO DO:
*
* get tax rate for the customer state
* eventually from some table
*
****************************************/
// function lazyTaxRate(custId) {}

function callByNameTaxRate(custId)
{
    /****************************************
    *
    * TO DO:
    *
    * get tax rate for the customer state
    * eventually from some table
    *
    ****************************************/
}

Both are Non-Strict, but the second one forces the compiler to actually compile the function just so we can Call it By Name. This causes code bloat. The Lazy version will be smaller and faster. Plus, Lazy programming allows us to create infinite recursion without making the program hang:

/****************************************
*
* TO DO:
*
* get tax rate for the customer state
* eventually from some table
*
****************************************/
// function lazyTaxRateRecursive(custId) { lazyTaxRateRecursive(custId); }

Laziness is often combined with Memoization:

function getCustName(custID)
{
    // if (custID == 3) { return "John Smith"; }
    custRec = readFromDB("customer", custID);
    fullname = custRec[1] + ' ' + custRec[2];
    return fullname;
}

Outside the world of Functional Addicts, this same technique is often called Test-Driven Development. If enough tests can be embedded in the code to achieve 100% coverage, or at least a decent amount, your code is guaranteed to be safe. But because the tests are not compiled and executed in the normal run, or indeed ever, they don't affect performance or correctness.

Conclusion

Many people claim that the days of Abject-Oriented Programming are over. But this is pure hype. Functional Addiction and Abject Orientation are not actually at odds with each other, but instead complement each other.