I was recently trying to optimize some C code using cachegrind, and discovered that branch misprediction in an inner loop was the culprit. That got me wondering whether anything similar could affect Python code. After all, especially in CPython, every opcode goes through hundreds of lines of ceval code, with a giant switch and multiple branches, so how much difference could one branch at the Python level make?

To find out, I turned to the famous Stack Overflow question Why is processing a sorted array faster than an unsorted array? That question has a simple C++ test case that runs ~6x as fast on sorted input on x86 and x86_64 platforms with most compilers, and the effect has been verified to be similar in other compiled languages like Java, C#, and Go.

So, what about Python?

The code

The original question takes a 32K array of bytes, and sums the bytes that are >= 128:
    long long sum = 0;
    /* ... */
    for (unsigned c = 0; c < arraySize; ++c)
    {
        if (data[c] >= 128)
            sum += data[c];
    }
 
In Python terms:
    total = 0
    for i in data:
        if i >= 128:
            total += i
When the C++ version is run with a random array of bytes, it usually takes about 6x as long as when run with the same array pre-sorted. (Your mileage may vary, because there's a way the compiler can optimize away the branch, at least on x86 and x86_64, and some compilers will do that for you, depending on your optimization settings. If you want to know more, read the linked question.)

So, here's a simple test driver:
    import functools
    import random
    import timeit

    def sumbig(data):
        total = 0
        for i in data:
            if i >= 128:
                total += i
        return total

    def test(asize, reps):
        data = bytearray(random.randrange(256) for _ in range(asize))
        t0 = timeit.timeit(functools.partial(sumbig, data), number=reps)
        t1 = timeit.timeit(functools.partial(sumbig, bytearray(sorted(data))), number=reps)
        print(t0, t1)

    if __name__ == '__main__':
        import sys
        asize = int(sys.argv[1]) if len(sys.argv) > 1 else 32768
        reps = int(sys.argv[2]) if len(sys.argv) > 2 else 1000
        test(asize, reps)
I tested this on a few different computers, using Apple pre-installed CPython 2.7.6, Python.org CPython 2.7.8 and 3.4.1, Homebrew CPython 3.4.0, and a custom build off trunk, and it pretty consistently saves about 12% to pre-sort the array. Nothing like the 84% savings in C++, but still, a lot more than you'd expect. After all, we're doing roughly 65x as much work in the CPython ceval loop as we were doing in the C++ loop, so you'd think the difference would be lost in the noise, and each pass through the loop is also much bigger, so you'd think branch prediction wouldn't be helping as much in the first place. But if you watch the BR_MISP counters in Instruments, or do the equivalent with cachegrind, you'll see that a lot more branches are mispredicted in the unsorted case than in the sorted case: 1.9% of the total conditional branches in the ceval loop instead of 0.1%. Presumably, even though that rate is still pretty small, and nothing like what you see in C, the cost of each misprediction is also higher? It's hard to be sure…
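
(If you want to try the cachegrind measurement yourself, something along these lines should work; --branch-sim=yes makes cachegrind simulate branch prediction, and cg_annotate then reports conditional branches as Bc and mispredictions as Bcm. The script name here is just a placeholder.)
    valgrind --tool=cachegrind --branch-sim=yes python sumbig.py
    cg_annotate cachegrind.out.<pid>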

You'd expect a much bigger benefit in the other Python implementations. Jython and IronPython compile to JVM bytecode and .NET IL, which then get the same kind of JIT recompilation as Java and C#; if those JITs aren't smart enough to optimize away the branch with a conditional move for Java and C#, they won't be for Python code either. PyPy JIT-compiles directly from Python, and its JIT is itself driven by tracing, which can be hurt by the equivalent of branch misprediction at a higher level. And in fact I found a difference of 54% in Jython 2.5.1, 68% in PyPy 2.3.1 (both the 2.7 and 3.2 versions), and 72% in PyPy 2.4.0 (2.7 only).

C++ optimizations

There are a number of ways to optimize this in C, but they all amount to the same thing: find some way to do a bit of extra work to avoid the conditional branch—bit-twiddling arithmetic, a lookup table, etc. The lookup table seems like the most likely version to help in Python, so here it is:
    table = bytearray(i if i >= 128 else 0 for i in range(256))
    total = 0
    for i in data:
        total += table[i]
To put this inside a function, you'd want to build the table globally instead of once per call, then copy it to a local inside the function to avoid the slow name lookup inside the inner loop:
    _table = bytearray(i if i >= 128 else 0 for i in range(256))
    def sumbig_t(data):
        table = _table
        total = 0
        for i in data:
            total += table[i]
        return total
The sorted and unsorted arrays are now about the same speed. And for PyPy, that's about 13% faster than with the sorted data in the original version. For CPython, on the other hand, it's 33% slower. I also tried the various bit-twiddling optimizations; they're slightly slower than the lookup table in PyPy, and at least 250% slower in CPython.
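
For reference, one such bit-twiddling variant might look like this (a minimal sketch, relying on Python's bool being an int, so -(i >= 128) is either -1 or 0; the exact variants I timed differed only in details):
    def sumbig_b(data):
        total = 0
        for i in data:
            # -(i >= 128) is -1 (all bits set) when the test is true and 0
            # when it's false, so the & keeps i or zeroes it with no branch
            total += i & -(i >= 128)
        return total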

So, what have we learned? Python, even CPython, can be affected by the same kinds of low-level performance problems as compiled languages. The alternative implementations can also handle those problems the same way. CPython can't… but then if this code were a bottleneck in your problem, you'd almost certainly be switching to PyPy or writing a C extension anyway, right?

Python optimizations

There's an obvious Python-specific optimization here—which I wouldn't even really call an optimization, since it's also a more Pythonic way of writing the code:
    total = sum(i for i in data if i >= 128)
This does in fact speed things up by about 13% in CPython, although it slows things down by 217% in PyPy. It leaves us with the same original difference between random and sorted arrays, and the same basic effects for applying the table or bit twiddling.
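
For instance, the lookup-table version of the comprehension might look like this (a sketch, reusing the _table from above):
    total = sum(_table[i] for i in data)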

You'd think that taking two passes, applying the table to the data and then summing the result, would obviously be slower, right? And normally you'd be right; a quick test shows a 31% slowdown. But if you think about it, when we're dealing with bytes, applying the table can be done with bytes.translate. And of course the sum is now just summing a builtin sequence, not a generator. So we effectively have two C loops instead of one Python loop, which may be a win:
    def sumbig_s(data):
        # _table is already a bytearray, so it can be passed to translate directly
        return sum(data.translate(_table))
For CPython, this saves about 83% of the time vs. the original sumbig's speed on unsorted data. And the difference between sorted and unsorted data is very small, so it's still an 81% savings on sorted data. However, for PyPy, it's 4% slower for unsorted data, and you lose the gain for sorted data.

If you think about it, while we're doing that pass, we might as well just remove the small values instead of mapping them to 0:
    def sumbig_s2(data):
        # delete all bytes < 128 first, then map what's left through the table
        return sum(data.translate(_table, bytearray(range(128))))
That ought to make the first loop a little slower, and affected by branch misprediction, while making the second loop twice as fast, right? And that's exactly what we see in CPython. For unsorted data, it cuts 90% off the original time, and sorting now gives us another 36% improvement. That's near PyPy speeds. On the other hand, in PyPy, it's just as slow for sorted data as the previous fix, and twice as slow for unsorted data, so it's even more of a pessimization.
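
Since the delete argument never changes either, you could hoist it out the same way as _table (an untested sketch, using the same globals-to-locals trick as before; the name _small is mine):
    _small = bytearray(range(128))
    def sumbig_s3(data):
        return sum(data.translate(_table, _small))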

What about using numpy? The obvious implementation is:
    import numpy as np

    def sumbig_n(data):
        data = np.array(data)  # a bytearray becomes a uint8 array
        return data[data >= 128].sum()
In CPython, this cuts 90% of the time off the original code for unsorted data, and for sorted data it cuts off another 48%. Either way, it's the fastest solution so far, even faster than using PyPy. But we can do the equivalent of translating the if to arithmetic or bit-twiddling too. I tried tricks like ~((data-128)>>31)&data, but the fastest turned out to be the simplest, just multiplying by the comparison mask:
    def sumbig_n2(data):
        data = np.array(data)
        # the mask is 0 or 1 per element, so multiplying zeroes out the
        # small values without any branching
        return (data * (data >= 128)).sum()
Now just as fast for unsorted data as for sorted, cutting 94% of the time off the original.
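
To compare all of these side by side, you could extend the earlier test driver along these lines (a sketch, assuming all the functions above are defined in one module):
    def compare(asize=32768, reps=1000):
        data = bytearray(random.randrange(256) for _ in range(asize))
        sdata = bytearray(sorted(data))
        for fn in (sumbig, sumbig_t, sumbig_s, sumbig_s2, sumbig_n, sumbig_n2):
            t_rand = timeit.timeit(functools.partial(fn, data), number=reps)
            t_sort = timeit.timeit(functools.partial(fn, sdata), number=reps)
            print(fn.__name__, t_rand, t_sort)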

Conclusions

So, what does this mean for you?

Well, most of the time, nothing. If your code is too slow, and it's doing arithmetic inside a loop, and the algorithm itself can't be improved, the first step should almost always be converting to numpy, PyPy, or C, and often that'll be the last step as well.

But it's good to know that the same issues that apply to low-level code still affect Python, and in some cases (especially if you're already using numpy or PyPy) the same solutions may help.