Many people, when they first discover the heapq module, have two questions:

  • Why does it define a bunch of functions instead of a container type?
  • Why don't those functions take a key or reverse parameter, like all the other sorting-related stuff in Python?

Why not a type?

At the abstract level, it's often easier to think of heaps as an algorithm rather than a data structure.

For example, if you want to build something like nlargest, it's usually easier to understand that larger algorithm in terms of calling heappush and heappop on a list, than in terms of using a heap object. (And certainly, things like nlargest and merge make no sense as methods of a heap object—the fact that they use one internally is irrelevant to the caller.)

And the same goes for building data types that use heaps: you might want a timer queue as a class, but that class's implementation is going to be more readable using heappush and heappop than going through the extra abstraction of a heap class.
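Here's a minimal sketch of what such a class might look like (the name TimerQueue and its methods are made up for illustration, not anything from the standard library):
import heapq
import itertools
import time

class TimerQueue:
    def __init__(self):
        self._heap = []
        self._counter = itertools.count()  # tie-breaker, so callbacks never get compared

    def schedule(self, when, callback):
        # Decorate with the due time; the heap keeps the soonest entry on top.
        heapq.heappush(self._heap, (when, next(self._counter), callback))

    def run_due(self, now=None):
        now = time.monotonic() if now is None else now
        # Pop and run every callback whose due time has passed.
        while self._heap and self._heap[0][0] <= now:
            _, _, callback = heapq.heappop(self._heap)
            callback()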

Also, even when you think of a heap as a data type, it doesn't really fit in with Python's notion of collection types, or most other notions. Sure, you can treat it as a sequence—but if you do, its values are in arbitrary order, which defeats the purpose of using a heap. You can only access it in sorted order by doing so destructively. Which makes it, in practice, a one-shot sorted iterable—which is great for building iterators on top of (like merge), but kind of useless for storing as a collection in some other object. Meanwhile, it's mutable, but doesn't provide any of the mutation methods you'd expect from a mutable sequence. It's a little closer to a mutable set, because at least it has an equivalent to add--but that doesn't fit either, because you can't conceptually (or efficiently, even if you do want to break the abstraction) remove arbitrary values from a heap.

But maybe the best reason not to use a Heap type is the answer to the next question.

Why no keys?

Almost everything sorting-related in Python follows list.sort in taking two optional parameters: a key that can be used to transform each value before comparing them, and a reverse flag that reverses the sort order. But heapq doesn't.

(Well, actually, the higher-level functions in heapq do--you can use a key with merge or nlargest. You just can't use them with heappush and friends.)

So, why not?

Well, consider writing nlargest yourself, or a heapsort, or a TimerQueue class. Part of the point of a key function is that it only gets called once on each value. But you're going to call heappush and heappop N times, and each time it's going to have to look at about log N values, so if you were applying the key in heappush and heappop, you'd be applying it about log N times to each value, instead of just once.

So, the right place to put the key function is in whatever code wraps up the heap in some larger algorithm or data structure, so it can decorate the values as they go into the heap, and undecorate them as they come back out. Which means the heap itself doesn't have to understand anything about decoration.

Examples

The heapq docs link to the source code for the module, which has great comments explaining how everything works. But, because the code is also meant to be as optimized and as general as possible, it's not as simple as possible. So, let's look at some simplified algorithms using heaps.

Sort

You can easily sort objects using a heap, just by either heapifying the list and popping, or pushing the elements one by one and popping. Both have the same log-linear algorithmic complexity as most other decent sorts (quicksort, timsort, plain mergesort, etc.), but generally with a larger constant (and obviously the one-by-one has a larger constant than heapifying).

import heapq

def heapsort(iterable):
    heap = list(iterable)
    heapq.heapify(heap)
    while heap:
        yield heapq.heappop(heap)
Now, adding a key is simple:
def heapsort(iterable, key):
    heap = [(key(x), x) for x in iterable]
    heapq.heapify(heap)
    while heap:
        yield heapq.heappop(heap)[1]
Try calling list(heapsort(range(100), str)) and you'll see the familiar [0, 1, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 2, 20, 21, ...] that you usually only get when you don't want it.

If the values aren't comparable, or if you need to guarantee a stable sort, you can use [(key(x), i, x) for i, x in enumerate(iterable)]. That way, two values that have the same key will be compared based on their original index, rather than based on their value. (Alternatively, you could build a namedtuple around (key(x), x) then override its comparison to ignore the x, which saves the space for storing those indices, but takes more code, probably runs slower, and doesn't provide a stable sort.) The same is true for the examples below, but I generally won't bother doing it, because the point here is to keep things simple.
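For example, here's what that index-decorated version of heapsort might look like (I'll call it stable_heapsort just to distinguish it; heapq is assumed to be imported as above):
def stable_heapsort(iterable, key):
    # (key, index, value): ties on the key fall back to the original index,
    # so the values themselves never get compared.
    heap = [(key(x), i, x) for i, x in enumerate(iterable)]
    heapq.heapify(heap)
    while heap:
        yield heapq.heappop(heap)[-1]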

nlargest

To get the largest N values from any iterable, all you need to do is keep track of the largest N so far, and whenever you find a bigger one, drop the smallest of those N.
def nlargest(iterable, n):
    heap = []
    for value in iterable:
        heapq.heappush(heap, value)
        if len(heap) > n:
            heapq.heappop(heap)
    return heap
To add a key:
def nlargest(iterable, n, key):
    heap = []
    for value in iterable:
        heapq.heappush(heap, (key(value), value))
        if len(heap) > n:
            heapq.heappop(heap)
    return [kv[1] for kv in heap]
This isn't stable, and gives you the top N in arbitrary rather than sorted order, and there's lots of scope for optimization here (again, the heapq.py source code is very well commented, so go check it out), but this is the basic idea.
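If you do want them in sorted order, just sort the result afterward; sorting at most N values is cheap compared to the heap pass. A quick usage example:
values = [17, 3, 42, 8, 23, 5]
top3 = nlargest(values, 3, key=lambda v: v)  # identity key; some arbitrary ordering of 42, 23, 17
print(sorted(top3, reverse=True))            # [42, 23, 17]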

One thing you might notice here is that, while collections.deque has a nice maxlen attribute that lets you just push things on the end without having to check the length and pop off the back, heapq doesn't. In this case, it's not because it's useless or complicated or potentially inefficient, but because it's so trivial to add yourself:
def heappushmax(heap, value, maxlen):
    if len(heap) >= maxlen:
        heapq.heappushpop(heap, value)
    else:
        heapq.heappush(heap, value)
And then:
def nlargest(iterable, n):
    heap = []
    for value in iterable:
        heappushmax(heap, value, n)
    return heap

merge

To merge (pre-sorted) iterables together, it's basically just a matter of sticking their iterators in a heap, with their next value as a key, and each time we pop one off, we put it back on keyed by the next value:
def merge(*iterables):
    iterators = map(iter, iterables)
    heap = [(next(it), i, it) 
            for i, it in enumerate(iterators)]
    heapq.heapify(heap)
    while heap:
        nextval, i, it = heapq.heappop(heap)
        yield nextval
        try:
            nextval = next(it)
        except StopIteration:
            pass
        else:
            heapq.heappush(heap, (nextval, i, it))
Here, I did include the index, because iterators generally aren't comparable at all, so it's a more serious issue if two of them have the same key (next element): without the index, a tie would fall through to comparing the iterators themselves and raise a TypeError.

(Note that this implementation won't work if some of the iterables can be empty, but if you want that, it should be obvious how to do the same thing we do inside the loop.)
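Spelled out, the setup for possibly-empty iterables looks something like this, using the same try/except dance during the initial fill (the rest of the loop is unchanged):
def merge(*iterables):
    heap = []
    for i, it in enumerate(map(iter, iterables)):
        try:
            nextval = next(it)
        except StopIteration:
            pass  # empty iterable: just leave it out of the heap
        else:
            heap.append((nextval, i, it))
    heapq.heapify(heap)
    while heap:
        nextval, i, it = heapq.heappop(heap)
        yield nextval
        try:
            nextval = next(it)
        except StopIteration:
            pass
        else:
            heapq.heappush(heap, (nextval, i, it))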

What if we want to attach a key to the values as well? The only tricky bit is that we want to transform each value of each iterable, which is one of the few good cases for a nested comprehension: use the trivial comprehension (key(v), v) for v in iterable in place of the iter function.
def merge(*iterables, key):
    iterators = (((key(v), v) for v in iterable) for iterable in iterables)
    heap = [(next(it), i, it) for i, it in enumerate(iterators)]
    heapq.heapify(heap)
    while heap:
        nextval, i, it = heapq.heappop(heap)
        yield nextval[-1]
        try:
            nextval = next(it)
        except StopIteration:
            pass
        else:
            heapq.heappush(heap, (nextval, i, it))
Again, there are edge cases to handle and optimizations to be had, which can be found in the module's source, but this is the basic idea.

Summary

Hopefully all of these examples show why the right place to insert a key function into a heap-based algorithm is not at the level of heappush, but at the level of the higher-level algorithm.