I was recently trying to optimize some C code with cachegrind and discovered that branch misprediction in an inner loop was the culprit. That got me wondering how much something similar could affect Python code. After all, in CPython especially, every opcode goes through hundreds of lines of ceval code, with a giant switch, multiple branches, and so on, so how much difference could a branch at the Python level make?

To find out, I turned to the famous Stack Overflow question Why is processing a sorted array faster than an unsorted array? That question has a simple C++ test case where the sorted version runs ~6x as fast as the unsorted one on x86 and x86_64 platforms with most compilers, and the effect has been verified to be similar in other compiled languages like Java, C#, and Go.

So, what about Python?

The code

The original question takes a 32K array of bytes, and sums the bytes that are >= 128:
    long long sum = 0;
    /* ... */
    for (unsigned c = 0; c < arraySize; ++c)
    {
        if (data[c] >= 128)
            sum += data[c];
    }
 
In Python terms:
    total = 0
    for i in data:
        if i >= 128:
            total += i
When the C++ code is run with a random array of bytes, it usually takes about 6x as long as when run with the same array pre-sorted. (Your mileage may vary, because there's a way the compiler can optimize away the branch for you, at least on x86 and x86_64, and some compilers will figure that out for you, depending on your optimization settings. If you want to know more, read the linked question.)

So, here's a simple test driver:
    import functools
    import random
    import timeit

    def sumbig(data):
        total = 0
        for i in data:
            if i >= 128:
                total += i
        return total

    def test(asize, reps):
        data = bytearray(random.randrange(256) for _ in range(asize))
        t0 = timeit.timeit(functools.partial(sumbig, data), number=reps)
        t1 = timeit.timeit(functools.partial(sumbig, bytearray(sorted(data))), number=reps)
        print(t0, t1)

    if __name__ == '__main__':
        import sys
        asize = int(sys.argv[1]) if len(sys.argv) > 1 else 32768
        reps = int(sys.argv[2]) if len(sys.argv) > 2 else 1000
        test(asize, reps)
I tested this on a few different computers, using Apple's pre-installed CPython 2.7.6, Python.org CPython 2.7.8 and 3.4.1, Homebrew CPython 3.4.0, and a custom build off trunk, and pre-sorting the array pretty consistently saves about 12%. Nothing like the 84% savings in C++, but still a lot more than you'd expect. After all, we're doing roughly 65x as much work in the CPython ceval loop as we were doing in the C++ loop, so you'd think the difference would be lost in the noise, and we're also doing much larger loops, so you'd think branch prediction wouldn't be helping as much in the first place. But if you watch the BR_MISP counters in Instruments, or do the equivalent with cachegrind, you'll see that the unsorted case mispredicts a lot more branches than the sorted case: 1.9% of the total conditional branches in the ceval loop instead of 0.1%. Presumably, even though this is still pretty small, and nothing like what you see in C, the cost of each misprediction is also higher? It's hard to be sure…

You'd expect a much bigger benefit in the other Python implementations. Jython and IronPython compile to JVM bytecode and .NET IL respectively, which then get the same kind of JIT recompilation as Java and C#, so if those JITs aren't smart enough to optimize away the branch with a conditional move for Java and C#, they won't be for Python code either. PyPy, meanwhile, is JIT-compiling directly from Python, and its JIT is itself driven by tracing, which can be hurt by the equivalent of branch misprediction at a higher level. And in fact I found a difference of 54% in Jython 2.5.1, 68% in PyPy 2.3.1 (both the 2.7 and 3.2 versions), and 72% in PyPy 2.4.0 (2.7 only).

C++ optimizations

There are a number of ways to optimize this in C, but they all amount to the same thing: find some way to do a bit of extra work to avoid the conditional branch—bit-twiddling arithmetic, a lookup table, etc. The lookup table seems like the most likely version to help in Python, so here it is:
    table = bytearray(i if i >= 128 else 0 for i in range(256))
    total = 0
    for i in data:
        total += table[i]
To put this inside a function, you'd want to build the table once at module level instead of once per call, then bind it to a local inside the function to avoid the slower global name lookup in the inner loop:
    _table = bytearray(i if i >= 128 else 0 for i in range(256))
    def sumbig_t(data):
        table = _table
        total = 0
        for i in data:
            total += table[i]
        return total
The sorted and unsorted arrays are now about the same speed. And for PyPy, that's about 13% faster than with the sorted data in the original version. For CPython, on the other hand, it's 33% slower. I also tried the various bit-twiddling optimizations; they're slightly slower than the lookup table in PyPy, and at least 250% slower in CPython.
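
For reference, one possible way to write the bit-twiddling version in Python (just a sketch, since the post doesn't say exactly which variant was timed, and the name sumbig_b is mine) relies on the fact that -(i >= 128) is -1 when the test is true and 0 when it's false:
    def sumbig_b(data):
        total = 0
        for i in data:
            # -(i >= 128) is -1 (all bits set) when i >= 128 and 0 otherwise,
            # so the & adds either i or 0 with no Python-level if
            total += i & -(i >= 128)
        return total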

So, what have we learned? Python, even CPython, can be affected by the same kinds of low-level performance problems as compiled languages. The alternative implementations can also handle those problems the same way. CPython can't… but then if this code were a bottleneck in your program, you'd almost certainly be switching to PyPy or writing a C extension anyway, right?

Python optimizations

There's an obvious Python-specific optimization here—which I wouldn't even really call an optimization, since it's also a more Pythonic way of writing the code:
    total = sum(i for i in data if i >= 128)
This does in fact speed things up by about 13% in CPython, although it slows things down by 217% in PyPy. It leaves us with the same original difference between random and sorted arrays, and the same basic effects for applying the table or bit twiddling.
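
For concreteness, applying the lookup table to this Pythonic form would look something like the following (a sketch; the post doesn't show this combination, and the name sumbig_gt is mine):
    def sumbig_gt(data):
        # reuses the module-level _table defined for sumbig_t above
        table = _table
        return sum(table[i] for i in data)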

You'd think that taking two passes, applying the table to the data and then summing the result, would obviously be slower, right? And normally you'd be right; a quick test shows a 31% slowdown. But if you think about it, when we're dealing with bytes, applying the table can be done with bytearray.translate. And of course the sum is now just summing a builtin sequence, not a generator. So we effectively have two C loops instead of one Python loop, which may be a win:
    def sumbig_s(data):
        return sum(data.translate(bytearray(_table)))
For CPython, this saves about 83% of the time vs. the original sumbig's speed on unsorted data. And the difference between sorted and unsorted data is very small, so it's still an 81% savings on sorted data. However, for PyPy, it's 4% slower for unsorted data, and you lose the gain for sorted data.

If you think about it, while we're doing that pass, we might as well just remove the small values instead of mapping them to 0:
    def sumbig_s2(data):
        return sum(data.translate(bytearray(_table), bytearray(range(128))))
That ought to make the first loop a little slower, and affected by branch misprediction, while making the second loop twice as fast, right? And that's exactly what we see, in CPython. For unsorted data, it cuts 90% off the original code, and now sorting it gives us another 36% improvement. That's near PyPy speeds. On the other hand, in PyPy, it's just as slow for sorted data as the previous fix, and twice as slow for unsorted data, so it's even more of a pessimization.

What about using numpy? The obvious implementation is:
    import numpy as np

    def sumbig_n(data):
        data = np.array(data)
        return data[data >= 128].sum()
In CPython, this cuts 90% of the time off the original code for unsorted data, and for sorted data it cuts off another 48%. Either way, it's the fastest solution so far, even faster than using PyPy. But we can do the equivalent of translating the if into arithmetic or bit-twiddling too. I tried tricks like ~((data-128)>>31)&data, but the fastest turned out to be the simplest:
    def sumbig_n2(data):
        data = np.array(data)
        # multiplying by the boolean mask zeroes out the values below 128,
        # with no data-dependent branch
        return (data * (data >= 128)).sum()
Now it's just as fast for unsorted data as for sorted, cutting 94% of the time off the original.
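
If you want to try all of these yourself, they slot straight into the earlier test driver; something like this (a sketch, with a hypothetical test_all that assumes the functions above are defined in the same module and that numpy is installed):
    def test_all(asize, reps):
        data = bytearray(random.randrange(256) for _ in range(asize))
        sorted_data = bytearray(sorted(data))
        for f in (sumbig, sumbig_t, sumbig_s, sumbig_s2, sumbig_n, sumbig_n2):
            t0 = timeit.timeit(functools.partial(f, data), number=reps)
            t1 = timeit.timeit(functools.partial(f, sorted_data), number=reps)
            print(f.__name__, t0, t1)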

Conclusions

So, what does this mean for you?

Well, most of the time, nothing. If your code is too slow, and it's doing arithmetic inside a loop, and the algorithm itself can't be improved, the first step should almost always be converting to numpy, PyPy, or C, and often that'll be the last step as well.

But it's good to know that the same issues that apply to low-level code still affect Python, and in some cases (especially if you're already using numpy or PyPy) the same solutions may help.

It's been more than a decade since Typical Programmer Greg Jorgensen taught the world about Abject-Oriented Programming.

Much of what he said still applies, but other things have changed. Languages in the Abject-Oriented space have been borrowing ideas from another paradigm entirely—and then everyone realized that languages like Python, Ruby, and JavaScript had been doing it for years and just hadn't noticed (because these languages do not require you to declare what you're doing, or even to know what you're doing). Meanwhile, new hybrid languages borrow freely from both paradigms.

This other paradigm—which is actually older, but was largely constrained to university basements until recent years—is called Functional Addiction.

A Functional Addict is someone who regularly gets higher-order—sometimes they may even exhibit dependent types—but still manages to retain a job.

Retaining a job is of course the goal of all programming. This is why some of these new hybrid languages, like Rust, check all borrowing, from both paradigms, so extensively that you can make regular progress for months without ever successfully compiling your code, and your managers will appreciate that progress. After all, once it does compile, it will definitely work.

Closures

It's long been known that Closures are dual to Encapsulation.

As Abject-Oriented Programming explained, Encapsulation involves making all of your variables public, and ideally global, to let the rest of the code decide what should and shouldn't be private.

Closures, by contrast, are a way of referring to variables from outer scopes. And there is no scope more outer than global.

Immutability

One of the reasons Functional Addiction has become popular in recent years is that to truly take advantage of multi-core systems, you need immutable data, sometimes also called persistent data.

Instead of mutating a function to fix a bug, you should always make a new copy of that function. For example:

function getCustName(custID)
{
    custRec = readFromDB("customer", custID);
    fullname = custRec[1] + ' ' + custRec[2];
    return fullname;
}

When you discover that you actually wanted fields 2 and 3 rather than 1 and 2, it might be tempting to mutate the state of this function. But doing so is dangerous. The right answer is to make a copy, and then try to remember to use the copy instead of the original:

function getCustName(custID)
{
    custRec = readFromDB("customer", custID);
    fullname = custRec[1] + ' ' + custRec[2];
    return fullname;
}

function getCustName2(custID)
{
    custRec = readFromDB("customer", custID);
    fullname = custRec[2] + ' ' + custRec[3];
    return fullname;
}

This means anyone still using the original function can continue to reference the old code, but as soon as it's no longer needed, it will be automatically garbage collected. (Automatic garbage collection isn't free, but it can be outsourced cheaply.)

Higher-Order Functions

In traditional Abject-Oriented Programming, you are required to give each function a name. But over time, the name of the function may drift away from what it actually does, making it as misleading as comments. Experience has shown that people will only keep one copy of their information up to date, and the CHANGES.TXT file is the right place for that.

Higher-Order Functions can solve this problem:

function []Functions = [
    lambda(custID) {
        custRec = readFromDB("customer", custID);
        fullname = custRec[1] + ' ' + custRec[2];
        return fullname;
    },
    lambda(custID) {
        custRec = readFromDB("customer", custID);
        fullname = custRec[2] + ' ' + custRec[3];
        return fullname;
    },
]

Now you can refer to these functions by order, so there's no need for names.

Parametric Polymorphism

Traditional languages offer Abject-Oriented Polymorphism and Ad-Hoc Polymorphism (also known as Overloading), but better languages also offer Parametric Polymorphism.

The key to Parametric Polymorphism is that the type of the output can be determined from the type of the inputs via Algebra. For example:

function getCustData(custId, x)
{
    if (x == int(x)) {
        custRec = readFromDB("customer", custId);
        fullname = custRec[1] + ' ' + custRec[2];
        return int(fullname);
    } else if (x.real == 0) {
        custRec = readFromDB("customer", custId);
        fullname = custRec[1] + ' ' + custRec[2];
        return double(fullname);
    } else {
        custRec = readFromDB("customer", custId);
        fullname = custRec[1] + ' ' + custRec[2];
        return complex(fullname);
    }
}

Notice that we've called the variable x. This is how you know you're using Algebraic Data Types. The names y, z, and sometimes w are also Algebraic.

Type Inference

Languages that enable Functional Addiction often feature Type Inference. This means that the compiler can infer your typing without you having to be explicit:


function getCustName(custID)
{
    // WARNING: Make sure the DB is locked here or
    custRec = readFromDB("customer", custID);
    fullname = custRec[1] + ' ' + custRec[2];
    return fullname;
}

We didn't specify what will happen if the DB is not locked. And that's fine, because the compiler will figure it out and insert code that corrupts the data, without us needing to tell it to!

By contrast, most Abject-Oriented languages are either nominally typed—meaning that you give names to all of your types instead of meanings—or dynamically typed—meaning that your variables are all unique individuals that can accomplish anything if they try.

Memoization

Memoization means caching the results of a function call:

function getCustName(custID)
{
    if (custID == 3) { return "John Smith"; }
    custRec = readFromDB("customer", custID);
    fullname = custRec[1] + ' ' + custRec[2];
    return fullname;
}

Non-Strictness

Non-Strictness is often confused with Laziness, but in fact Laziness is just one kind of Non-Strictness. Here's an example that compares two different forms of Non-Strictness:

/****************************************
*
* TO DO:
*
* get tax rate for the customer state
* eventually from some table
*
****************************************/
// function lazyTaxRate(custId) {}

function callByNameTaxRate(custId)
{
    /****************************************
    *
    * TO DO:
    *
    * get tax rate for the customer state
    * eventually from some table
    *
    ****************************************/
}

Both are Non-Strict, but the second one forces the compiler to actually compile the function just so we can Call it By Name. This causes code bloat. The Lazy version will be smaller and faster. Plus, Lazy programming allows us to create infinite recursion without making the program hang:

/****************************************
*
* TO DO:
*
* get tax rate for the customer state
* eventually from some table
*
****************************************/
// function lazyTaxRateRecursive(custId) { lazyTaxRateRecursive(custId); }

Laziness is often combined with Memoization:

function getCustName(custID)
{
    // if (custID == 3) { return "John Smith"; }
    custRec = readFromDB("customer", custID);
    fullname = custRec[1] + ' ' + custRec[2];
    return fullname;
}

Outside the world of Functional Addicts, this same technique is often called Test-Driven Development. If enough tests can be embedded in the code to achieve 100% coverage, or at least a decent amount, your code is guaranteed to be safe. But because the tests are not compiled and executed in the normal run, or indeed ever, they don't affect performance or correctness.

Conclusion

Many people claim that the days of Abject-Oriented Programming are over. But this is pure hype. Functional Addiction and Abject Orientation are not actually at odds with each other, but instead complement each other.
