What's the difference between a list comprehension and calling list on a generator expression? (By the way, everything below applies to set comprehensions and, with trivial tweaks, dict comprehensions, but I'm only going to talk about lists for simplicity.)

To be concrete, what's the difference between:

    [x for x in it]
    list(x for x in it)

As it turns out, they behave exactly the same way except for two differences. (In Python 3.0-3.4; there were more differences in 2.x.)

StopIteration

First, if you raise StopIteration anywhere inside (in the main loop clause, any additional clauses, or the expression), the former will pass the exception straight through, while the latter will eat it and end early. So, for example:

    >>> def stop(): raise StopIteration
    >>> a = [x for x in (1, 0) if x or stop()]
    StopIteration:
    >>> a
    NameError: name 'a' is not defined
    >>> a = list(x for x in (1, 0) if x or stop())
    >>> a
    [1]
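Here's the listcomp half of that session as a plain script you can run anywhere. (The genexpr half is version-dependent--PEP 479 changed it in later Pythons to raise RuntimeError instead of quietly stopping--so I'll only demo the side that behaves the same everywhere.)

```python
def stop():
    raise StopIteration

# The comprehension runs as a plain function under the hood, so the
# StopIteration raised by stop() escapes instead of ending the loop.
try:
    a = [x for x in (1, 0) if x or stop()]
except StopIteration:
    a = None

print(a)  # None -- the listcomp did not swallow StopIteration
```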

It would be nice to be able to make them behave _exactly_ the same way. That would simplify the language--no more need to define two very similar concepts independently; you can just define comprehensions as if calling list on the equivalent generator expression.

Performance

The list(genexpr) version is up to 40% slower than the comprehension. That isn't acceptable. The benefit of simplifying the language (and, as a minor side benefit, being able to StopIteration a listcomp) isn't worth that cost.
So, is there a way to optimize that?
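You can measure the gap yourself with timeit; the absolute numbers, and the exact ratio, vary by machine and Python version, so treat this as a sketch:

```python
import timeit

# Time each form building the same 1000-element list.
setup = "it = list(range(1000))"
t_comp = timeit.timeit("[x for x in it]", setup=setup, number=10000)
t_gen = timeit.timeit("list(x for x in it)", setup=setup, number=10000)
print("listcomp:      {:.3f}s".format(t_comp))
print("list(genexpr): {:.3f}s".format(t_gen))
```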

Implementation

Before we try to optimize the bytecode, we have to know what it looks like.
Let's take a trivial comprehension, [i for i in x] (where x is a local). This is obviously a silly thing to write, but not having anything extra to get in the way will make the bytecode easier to read.
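You can dump the bytecode yourself with the dis module. (The exact listing varies a lot across versions--in particular, 3.12's PEP 709 inlines comprehensions, so the MAKE_FUNCTION dance described below is gone there--but the iteration setup is always visible.)

```python
import dis

# Compile the comprehension in a context where x is a free name,
# then pretty-print its bytecode.
code = compile("[i for i in x]", "<demo>", "eval")
listing = dis.Bytecode(code).dis()
print(listing)
```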

The comprehension looks like this:

    LOAD_CONST <listcomp code>
    LOAD_CONST "<listcomp>"
    MAKE_FUNCTION 0
    LOAD_NAME x
    GET_ITER
    CALL_FUNCTION 1

And an equivalent genexpr is pretty much identical:

    LOAD_CONST <genexpr code>
    LOAD_CONST "<genexpr>"
    MAKE_FUNCTION 0
    LOAD_NAME x
    GET_ITER
    CALL_FUNCTION 1

That's not very interesting--it just gets some magic bytecode from somewhere, makes a function out of it, calls it with iter(x), and returns the value! What does that magic function look like? For the comprehension:

    BUILD_LIST
    LOAD_FAST .0
    :loop
    FOR_ITER :endloop
    LIST_APPEND 1
    JUMP_ABSOLUTE :loop
    :endloop
    RETURN_VALUE

(Actually, a real listcomp will STORE_FAST i and LOAD_FAST i before the LIST_APPEND, because Python has no way of knowing that the value expression happens to always have the same value as i, but I stripped that for simplicity.)

And for the genexpr:

    LOAD_FAST .0
    :loop
    FOR_ITER :endloop
    YIELD_VALUE
    POP_TOP
    JUMP_ABSOLUTE :loop
    :endloop
    LOAD_CONST None
    RETURN_VALUE

So, the only differences are that there's no BUILD_LIST, it YIELDs and POPs each value instead of LIST_APPENDing it, and it returns None instead of the list.
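You can check this against a real generator's code object; opcode names shift a bit across versions, but the FOR_ITER/YIELD_VALUE loop is always there:

```python
import dis

# Every genexpr carries its compiled body on gi_code.
g = (i for i in [1, 2, 3])
listing = dis.Bytecode(g.gi_code).dis()
print(listing)
```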

As you can guess, calling list on the genexpr looks like this:

    LOAD_NAME list
    LOAD_CONST <genexpr code>
    LOAD_CONST "<genexpr>"
    MAKE_FUNCTION 0
    LOAD_NAME x
    GET_ITER
    CALL_FUNCTION 1
    CALL_FUNCTION 1

In other words, it's just list(genexpr-function(iter(x))).
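Spelled out in Python, with genexpr_func standing in for the magic code object (a name I'm making up here; the real function's parameter is spelled ".0", which you can't write in source):

```python
def genexpr_func(dot_zero):
    # Hand-written equivalent of the compiled <genexpr> body: loop
    # over the implicit iterator argument, yielding each value.
    for i in dot_zero:
        yield i

x = [1, 2, 3]
result = list(genexpr_func(iter(x)))
print(result)  # [1, 2, 3]
```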

Handling StopIteration in listcomp-code

If the only difference between [listcomp] and list(genexpr) is that the latter handles StopIteration, there's a pretty obvious way to make them act the same without the 40% performance hit: just make listcomp handle StopIteration.

In pseudo-Python, the current listcomp-code looks like this:

    a = []
    for i in x:
        a.append(i)
    return a

And we want this:

    a = []
    try:
        for i in x:
            a.append(i)
    except StopIteration:
        pass
    return a
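As a runnable sketch of those proposed semantics, with the element expression factored out as a callable (listcomp_like is a made-up helper, not a real API):

```python
def listcomp_like(expr, it):
    # Proposed semantics: a StopIteration raised by the element
    # expression ends the list early instead of propagating.
    a = []
    try:
        for i in it:
            a.append(expr(i))
    except StopIteration:
        pass
    return a

def stop():
    raise StopIteration

# The 0 triggers stop(), so the list ends after the first element.
print(listcomp_like(lambda i: i if i else stop(), [1, 0, 2]))  # [1]
```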

Let's translate that into bytecode:

    BUILD_LIST
    SETUP_EXCEPT :except
    LOAD_FAST .0
    :loop
    FOR_ITER :endloop
    LIST_APPEND 1
    JUMP_ABSOLUTE :loop
    :except
    DUP_TOP
    LOAD_GLOBAL StopIteration
    COMPARE_OP exception_match
    POP_JUMP_IF_FALSE :raise
    POP_TOP
    POP_TOP
    POP_TOP
    POP_EXCEPT
    JUMP_FORWARD :endloop
    :raise
    END_FINALLY
    :endloop
    RETURN_VALUE

I cheated a bit by merging endloop and endexcept into one. Normally, Python would compile this so the FOR_ITER jumped to a JUMP_FORWARD that jumped to the actual ending, but when we're hand-coding (or writing new special-case compiler code) there's no reason to do that.

Of course in real life you wouldn't want to LOAD_GLOBAL StopIteration, but this gets the idea across.

So, how much slower is this?

Well, there's no per-iteration cost, because the code inside the loop is the same as ever.

There is a tiny bit of constant overhead from the SETUP_EXCEPT. It's around 12ns on a machine where a simple listcomp takes around 500ns + 100ns/iteration. So, we're talking under 1% overhead for most cases. There's also probably some cost from loading a larger function and jumping a bit farther, although I haven't been able to measure it.

If you actually raise StopIteration, of course, that slows things down by maybe 250ns, but since that didn't work before, you can't complain that it's slower.

If you raise anything else, it also adds a similar amount of time (a bit harder to measure), but I don't think anyone cares about the performance of list comprehensions that fail by raising.

Meanwhile, the required changes to CPython are all in one function in compile.c, and not very complicated.

If we were willing to add a new opcode, we could add a SETUP_STOP_ITERATION, which jumps on StopIteration and ignores any other exception. Then we only need one new line in the code:

    BUILD_LIST
    LOAD_FAST .0
    SETUP_STOP_ITERATION :endloop
    :loop
    FOR_ITER :endloop
    LIST_APPEND 1
    JUMP_ABSOLUTE :loop
    :endloop
    RETURN_VALUE

This obviously makes the compiler simpler, but it does so at the cost of a new bytecode, which really just moves the complexity somewhere else--and somewhere less desirable. (Adding a new bytecode is a bigger change than just changing the compiler to compile different bytecode.) And it wouldn't be any faster for the typical fast path (no exceptions). It might be a little faster when exceptions are raised, and it does save a few bytes in the compiled bytecode, but I don't think that's worth it.

Optimizing list(genexpr)

If we're going to simplify the language, wouldn't it be nice to also simplify the implementation? Can't we get rid of the code to build magic listcomp functions, and maybe even the special BUILD_LIST and LIST_APPEND opcodes?

We need to know why it's 40% slower before we can fix it. It's not because of the call to list. In fact, we can inline the list building:

    LOAD_CONST <genexpr code>
    LOAD_CONST "<genexpr>"
    MAKE_FUNCTION 0
    LOAD_NAME x
    GET_ITER
    CALL_FUNCTION 1
    BUILD_LIST
    :loop
    FOR_ITER :endloop
    LIST_APPEND 2
    JUMP_ABSOLUTE :loop
    :endloop

This shaves off a few nanoseconds of constant cost and a few nanoseconds per iteration, but doesn't make much of a dent in the 40%.

The real cost here is we have to go back and forth between the FOR_ITER and the inner function's YIELD once per iteration. In other words, we're doing a generator suspend and resume for each iteration.
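You can isolate that suspend/resume cost by timing a bare loop over a list against the same loop run through a do-nothing pass-through generator; the difference is roughly the per-iteration cost of suspending and resuming the generator frame:

```python
import timeit

setup = "data = list(range(1000))"
# Same loop, same data; the only difference is the generator frame
# in the middle.
t_direct = timeit.timeit("for i in data: pass",
                         setup=setup, number=10000)
t_through = timeit.timeit("for i in (x for x in data): pass",
                          setup=setup, number=10000)
print("direct:           {:.3f}s".format(t_direct))
print("via pass-through: {:.3f}s".format(t_through))
```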

So, what we need is some new FAST_FOR_ITER and FAST_YIELD that can trade off within the same function. And we'll also need a FAST_RETURN, of course.

So, FAST_FOR_ITER has to jump to the inlined generator. It has nowhere to put an extra operand, but that's fine; since it doesn't ever directly run its own outer body, we can just put the inlined generator right after it. Next, FAST_YIELD has to jump to the outer loop body. Then, the outer loop body has to jump to the line after FAST_YIELD, instead of all the way back to the FAST_FOR_ITER. The inner loop has to jump to the outer FAST_FOR_ITER instead of its own FOR_ITER:

    BUILD_LIST
    LOAD_FAST .0
    :outerloop
    FAST_FOR_ITER :outerendloop
    FOR_ITER :innerendloop
    FAST_YIELD :outerbody
    :innercontinue
    POP_TOP
    JUMP_ABSOLUTE :outerloop
    :innerendloop
    LOAD_CONST None
    FAST_RETURN :outerloop
    :outerbody
    LIST_APPEND 1
    JUMP_ABSOLUTE :innercontinue
    :outerendloop
    RETURN_VALUE

What exactly is FAST_FOR_ITER doing here? It's not really iterating anything; it just jumps to :outerendloop if you've raised StopIteration or called FAST_RETURN, and falls through to the next line otherwise.

I'm not sure how it can even know whether it's gotten here as a result of a FAST_RETURN, or a FAST_YIELD that's been processed inline... but as it turns out, we can just optimize out the FAST_RETURN, because all we're ever going to do is ignore what we got and jump to :outerendloop. So it doesn't really matter how we'd implement it; let's just replace it with a JUMP_FORWARD to :outerendloop.

    BUILD_LIST
    LOAD_FAST .0
    :outerloop
    FAST_FOR_ITER :outerendloop
    FOR_ITER :innerendloop
    FAST_YIELD :outerbody
    :innercontinue
    POP_TOP
    JUMP_ABSOLUTE :outerloop
    :innerendloop
    JUMP_FORWARD :outerendloop
    :outerbody
    LIST_APPEND 1
    JUMP_ABSOLUTE :innercontinue
    :outerendloop
    RETURN_VALUE

But now, what exactly is FAST_YIELD doing? Basically it's just doing a DUP_TOP and a JUMP_FORWARD. And we don't need the DUP_TOP, because we don't actually need the value after we jump back here--all we do is POP_TOP it. So:

    BUILD_LIST
    LOAD_FAST .0
    :outerloop
    FAST_FOR_ITER :outerendloop
    FOR_ITER :innerendloop
    JUMP_FORWARD :outerbody
    :innercontinue
    JUMP_ABSOLUTE :outerloop
    :innerendloop
    JUMP_FORWARD :outerendloop
    :outerbody
    LIST_APPEND 1
    JUMP_ABSOLUTE :innercontinue
    :outerendloop
    RETURN_VALUE

Now we've got all these lines that just jump to other jumps, so we can optimize them all out:

    BUILD_LIST
    LOAD_FAST .0
    :loop
    FAST_FOR_ITER :endloop
    FOR_ITER :endloop
    LIST_APPEND 1
    JUMP_ABSOLUTE :loop
    :endloop
    RETURN_VALUE

And now, this is identical to the original listcomp code, except for that outer FAST_FOR_ITER opcode.

And what exactly is it doing? Basically, if you've raised StopIteration it jumps to :outerendloop. But there's no need to ever jump back to it for it to serve that purpose; it can work just like SETUP_EXCEPT. In fact, it's exactly the same as the SETUP_STOP_ITERATION above. So, let's replace it, and move the jump:

    BUILD_LIST
    LOAD_FAST .0
    SETUP_STOP_ITERATION :endloop
    :loop
    FOR_ITER :endloop
    LIST_APPEND 1
    JUMP_ABSOLUTE :loop
    :endloop
    RETURN_VALUE

And that's exactly the same code we had for adding StopIteration handling to [listcomp].
So, yes, you can inline and optimize list(genexpr), but the result is exactly the same as adding StopIteration handling to [listcomp].