Many people, when they first discover the heapq module, have two questions:

  • Why does it define a bunch of functions instead of a container type?
  • Why don't those functions take a key or reverse parameter, like all the other sorting-related stuff in Python?

Why not a type?

At the abstract level, it's often easier to think of heaps as an algorithm rather than a data structure.

For example, if you want to build something like nlargest, it's usually easier to understand that larger algorithm in terms of calling heappush and heappop on a list, than in terms of using a heap object. (And certainly, things like nlargest and merge make no sense as methods of a heap object—the fact that they use one internally is irrelevant to the caller.)

And the same goes for building data types that use heaps: you might want a timer queue as a class, but that class's implementation is going to be more readable using heappush and heappop than going through the extra abstraction of a heap class.
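
For instance, a bare-bones timer queue might look something like this (just a sketch; TimerQueue, schedule, and run_due are made-up names for illustration):
import heapq
import itertools
import time

class TimerQueue:
    def __init__(self):
        self._heap = []
        # A tie-breaking counter, so two callbacks scheduled for the same
        # time never get compared to each other.
        self._counter = itertools.count()
    def schedule(self, when, callback):
        heapq.heappush(self._heap, (when, next(self._counter), callback))
    def run_due(self, now=None):
        # Pop and run every callback whose scheduled time has arrived.
        if now is None:
            now = time.monotonic()
        while self._heap and self._heap[0][0] <= now:
            _, _, callback = heapq.heappop(self._heap)
            callback()
The whole implementation reads as "push (time, callback), pop whatever's due"; routing that through a Heap class wouldn't make it any clearer.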

Also, even when you think of a heap as a data type, it doesn't really fit in with Python's notion of collection types, or most other notions. Sure, you can treat it as a sequence—but if you do, its values are in arbitrary order, which defeats the purpose of using a heap. You can only access it in sorted order by doing so destructively. Which makes it, in practice, a one-shot sorted iterable—which is great for building iterators on top of (like merge), but kind of useless for storing as a collection in some other object. Meanwhile, it's mutable, but doesn't provide any of the mutation methods you'd expect from a mutable sequence. It's a little closer to a mutable set, because at least it has an equivalent to add, but that doesn't fit either, because you can't conceptually (or efficiently, even if you do want to break the abstraction) remove arbitrary values from a heap.
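
To make that concrete:
import heapq

heap = []
for value in [5, 3, 8, 1]:
    heapq.heappush(heap, value)
print(heap)    # [1, 3, 8, 5] with CPython's heapq: a valid heap, but not sorted
print([heapq.heappop(heap) for _ in range(len(heap))])    # [1, 3, 5, 8]
print(heap)    # []: getting the sorted order consumed the heap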

But maybe the best reason not to use a Heap type is the answer to the next question.

Why no keys?

Almost everything sorting-related in Python follows list.sort in taking two optional parameters: a key that can be used to transform each value before comparing them, and a reverse flag that reverses the sort order. But heapq doesn't.

(Well, actually, the higher-level functions in heapq do: you can use a key with merge or nlargest. You just can't use them with heappush and friends.)

So, why not?

Well, consider writing nlargest yourself, or a heapsort, or a TimerQueue class. Part of the point of a key function is that it only gets called once on each value. But you're going to call heappush and heappop N times, and each time it's going to have to look at about log N values, so if you were applying the key in heappush and heappop, you'd be applying it about log N times to each value, instead of just once.

So, the right place to put the key function is in whatever code wraps up the heap in some larger algorithm or data structure, so it can decorate the values as they go into the heap, and undecorate them as they come back out. Which means the heap itself doesn't have to understand anything about decoration.
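
If you want to see that cost concretely, here's a rough experiment; the counting wrapper and the Keyed class are hypothetical helpers written just for this comparison, not anything from heapq:
import heapq

def counting(key):
    # Wrap a key function so we can count how often it gets called.
    def wrapper(value):
        wrapper.calls += 1
        return key(value)
    wrapper.calls = 0
    return wrapper

class Keyed:
    # Apply the key inside every comparison, which is roughly what a
    # key= parameter on heappush and heappop would have to do.
    def __init__(self, value, key):
        self.value, self.key = value, key
    def __lt__(self, other):
        return self.key(self.value) < other.key(other.value)

values = list(range(1000))

# Decorate on the way in: exactly one key call per value.
key1 = counting(str)
heap1 = []
for v in values:
    heapq.heappush(heap1, (key1(v), v))
while heap1:
    heapq.heappop(heap1)
print(key1.calls)    # 1000

# Apply the key at comparison time: on the order of N log N calls.
key2 = counting(str)
heap2 = []
for v in values:
    heapq.heappush(heap2, Keyed(v, key2))
while heap2:
    heapq.heappop(heap2)
print(key2.calls)    # tens of thousands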

Examples

The heapq docs link to the source code for the module, which has great comments explaining how everything works. But, because the code is also meant to be as optimized and as general as possible, it's not as simple as possible. So, let's look at some simplified algorithms using heaps.

Sort

You can easily sort objects using a heap, just by either heapifying the list and popping, or pushing the elements one by one and popping. Both have the same log-linear algorithmic complexity as most other decent sorts (quicksort, timsort, plain mergesort, etc.), but generally with a larger constant (and obviously the one-by-one has a larger constant than heapifying).

import heapq

def heapsort(iterable):
    heap = list(iterable)
    heapq.heapify(heap)
    while heap:
        yield heapq.heappop(heap)
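For comparison, the push-one-by-one version is just as short (heapsort_push is just a made-up name to distinguish the two):
import heapq

def heapsort_push(iterable):
    heap = []
    for value in iterable:
        heapq.heappush(heap, value)
    while heap:
        yield heapq.heappop(heap)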
Now, adding a key is simple:
def heapsort(iterable, key):
    heap = [(key(x), x) for x in iterable]
    heapq.heapify(heap)
    while heap:
        yield heapq.heappop(heap)[1]
Try calling list(heapsort(range(100), str)) and you'll see the familiar [0, 1, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 2, 20, 21, ...] that you usually only get when you don't want it.

If the values aren't comparable, or if you need to guarantee a stable sort, you can use [(key(x), i, x) for i, x in enumerate(iterable)]. That way, two values that have the same key will be compared based on their original index, rather than based on their value. (Alternatively, you could build a namedtuple around (key(x), x) then override its comparison to ignore the x, which saves the space for storing those indices, but takes more code, probably runs slower, and doesn't provide a stable sort.) The same is true for the examples below, but I generally won't bother doing it, because the point here is to keep things simple.
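
For reference, here's what that index-decorated version looks like for the sort (heapsort_stable is just a name to distinguish it from the version above):
import heapq

def heapsort_stable(iterable, key):
    # Decorate with (key, original index, value): ties on the key fall back
    # to the index, so the values never get compared and equal keys come out
    # in their original order.
    heap = [(key(x), i, x) for i, x in enumerate(iterable)]
    heapq.heapify(heap)
    while heap:
        yield heapq.heappop(heap)[-1]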

nlargest

To get the largest N values from any iterable, all you need to do is keep track of the N largest seen so far: push each value onto a heap, and as soon as the heap holds more than N values, drop the smallest one.
def nlargest(iterable, n):
    heap = []
    for value in iterable:
        heapq.heappush(heap, value)
        if len(heap) > n:
            heapq.heappop(heap)
    return heap
To add a key:
def nlargest(iterable, n, key):
    heap = []
    for value in iterable:
        heapq.heappush(heap, (key(value), value))
        if len(heap) > n:
            heapq.heappop(heap)
    return [kv[1] for kv in heap]
This isn't stable, and gives you the top N in arbitrary rather than sorted order, and there's lots of scope for optimization here (again, the heapq.py source code is very well commented, so go check it out), but this is the basic idea.
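
For example, if you want the results largest-first rather than in arbitrary heap order, the n surviving pairs are cheap to sort on the way out (a sketch, with the same caveats about ties):
import heapq

def nlargest_sorted(iterable, n, key):
    heap = []
    for value in iterable:
        heapq.heappush(heap, (key(value), value))
        if len(heap) > n:
            heapq.heappop(heap)
    # Only n decorated pairs survive, so sorting them is cheap.
    return [v for k, v in sorted(heap, reverse=True)]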

One thing you might notice here is that, while collections.deque has a nice maxlen attribute that lets you just push things on the end without having to check the length and pop off the back, heapq doesn't. In this case, it's not because it's useless or complicated or potentially inefficient, but because it's so trivial to add yourself:
def heappushmax(heap, value, maxlen):
    if len(heap) >= maxlen:
        heapq.heappushpop(heap, value)
    else:
        heapq.heappush(heap, value)
And then:
def nlargest(iterable, n):
    heap = []
    for value in iterable:
        heappushmax(heap, value, n)
    return heap

merge

To merge (pre-sorted) iterables together, it's basically just a matter of sticking their iterators in a heap, keyed by their next value; each time we pop one off and yield that value, we push the iterator back on, keyed by its new next value:
def merge(*iterables):
    iterators = map(iter, iterables)
    heap = [(next(it), i, it) 
            for i, it in enumerate(iterators)]
    heapq.heapify(heap)
    while heap:
        nextval, i, it = heapq.heappop(heap)
        yield nextval
        try:
            nextval = next(it)
        except StopIteration:
            pass
        else:
            heapq.heappush(heap, (nextval, i, it))
Here, I did include the index, because the third element of each tuple is an iterator, and iterators generally can't be compared at all, so it's a more serious issue if two of them have the same key (next element).

(Note that this implementation won't work if some of the iterables can be empty, but if you want that, it should be obvious how to do the same thing we do inside the loop.)
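
For instance, a version that tolerates empty iterables can build the initial heap with the same try/except we already use inside the loop (a sketch):
import heapq

def merge(*iterables):
    heap = []
    for i, it in enumerate(map(iter, iterables)):
        try:
            first = next(it)
        except StopIteration:
            pass        # an empty iterable just never makes it into the heap
        else:
            heap.append((first, i, it))
    heapq.heapify(heap)
    while heap:
        nextval, i, it = heapq.heappop(heap)
        yield nextval
        try:
            nextval = next(it)
        except StopIteration:
            pass
        else:
            heapq.heappush(heap, (nextval, i, it))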

What if we want to attach a key to the values as well? The only tricky bit is that we want to transform each value of each iterable, which is one of the few good cases for a nested comprehension: use the trivial comprehension (key(v), v) for v in iterable in place of the iter function.
def merge(*iterables, key):
    iterators = (((key(v), v) for v in iterable) for iterable in iterables)
    heap = [(next(it), i, it) for i, it in enumerate(iterators)]
    heapq.heapify(heap)
    while heap:
        nextval, i, it = heapq.heappop(heap)
        yield nextval[-1]
        try:
            nextval = next(it)
        except StopIteration:
            pass
        else:
            heapq.heappush(heap, (nextval, i, it))
Again, there are edge cases to handle and optimizations to be had, which can be found in the module's source, but this is the basic idea.

Summary

Hopefully all of these examples show why the right place to insert a key function into a heap-based algorithm is not at the level of heappush, but at the level of the higher-level algorithm.