How grouper works

A very common question on StackOverflow is: "How do I split a sequence into evenly-sized chunks?"

If it's actually a sequence, rather than an arbitrary iterable, you can do this with slicing. But the itertools documentation has a nifty way to do it completely generally, with a recipe called grouper.

As soon as someone sees the recipe, they know how to use it, and it solves their problem. But, even though it's very short and simple, most people don't understand how it works. And it's really worth taking a look at it, because if you can muddle your way through grouper, you can understand a lot of more complicated iterator-based programming.

Pairs

Let's start with a simpler function that groups an even-length iterable into an iterator over pairs of objects:
    def pairs(iterable):
        it = iter(iterable)
        return zip(it, it)
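For example, a quick check with the pairs function defined above:
    >>> list(pairs(range(6)))
    [(0, 1), (2, 3), (4, 5)]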
How does this work?

First, we make an iterator over the iterable. An iterator is an iterable that keeps track of its current position. The most familiar iterators are the things returned by generator functions, generator expressions, map, filter, zip, the functions in itertools, etc. But you can create an iterator for any iterable with the iter function. For example:
    >>> a = range(5) # not an iterator
    >>> list(a)
    [0, 1, 2, 3, 4]
    >>> list(a)
    [0, 1, 2, 3, 4]
    >>> i = iter(a) # is an iterator
    >>> list(i)
    [0, 1, 2, 3, 4]
    >>> list(i)
    []
Since we've already consumed i in the first list call, there's nothing left in it for the second call. This might be a little easier to see with a function like islice or takewhile that only consumes part of the iterator:
    >>> from itertools import islice
    >>> i = iter(a)
    >>> list(islice(i, 3))
    [0, 1, 2]
    >>> list(islice(i, 3))
    [3, 4]
You may wonder what happens if a was already an iterator. That's perfectly fine: in that case, iter just returns a itself.
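You can verify that for yourself:
    >>> i = iter(range(5))
    >>> iter(i) is i
    True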

Anyway, if we have two references to the same iterator, and we advance one reference, the other one has of course also been advanced (since it's the same iterator). Having two separate iterators over the same iterable doesn't do that. For example:
    >>> a = range(10)
    >>> i1, i2 = iter(a), iter(a) # two separate iterators
    >>> list(i1)
    [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
    >>> list(i2)
    [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
    >>> i1 = i2 = iter(a) # two references to the same iterator
    >>> list(i1)
    [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
    >>> list(i2)
    []
(Of course in this case, if a had already been an iterator, calling iter(a) twice would have given us back the same iterator (a itself) twice, so the first example would be the same as the second.)

So, what happens if you zip the two references to the same iterator together? Each one gets every other value:
    >>> i1 = i2 = iter(a)
    >>> list(zip(i1, i2))
    [(0, 1), (2, 3), (4, 5), (6, 7), (8, 9)]
If it isn't obvious why, work through what zip does—or this simplified pure-Python zip:
    def fake_zip(i1, i2):
        while True:
            try:
                v1 = next(i1)
                v2 = next(i2)
            except StopIteration:
                return  # stop cleanly when either iterator runs out
            yield v1, v2
If i1 and i2 are the same iterator, after v1 = next(i1), i1 and i2 will be pointing to the next value after v1, so v2 = next(i2) will get that value.
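You can confirm that the simplified version pairs things up the same way (using the fake_zip sketch above):
    >>> i1 = i2 = iter(range(6))
    >>> list(fake_zip(i1, i2))
    [(0, 1), (2, 3), (4, 5)]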

And that's all there is to the pairs function.

Arbitrary-sized chunks

So, how do we make n references to the same iterator? There are a few ways to do it, but the simplest is:
    args = [iter(iterable)] * n
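A quick check that list repetition really does give you n references to a single iterator, not n separate iterators:
    >>> args = [iter(range(10))] * 3
    >>> args[0] is args[1] is args[2]
    True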
And now, how do we zip them together? Since zip takes any number of arguments, not just two, you just use argument list unpacking:
    zip(*args)
And now we can almost write grouper:
    def grouper(iterable, n):
        args = [iter(iterable)] * n
        return zip(*args)
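For example, with an input whose length is a multiple of n:
    >>> list(grouper(range(9), 3))
    [(0, 1, 2), (3, 4, 5), (6, 7, 8)]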

Uneven chunks

Finally, what if the number of items isn't evenly divisible by the chunk size? For example, what if you want to group range(10) into groups of 3? There are a few possible answers:

  1. [(0, 1, 2), (3, 4, 5), (6, 7, 8), (9, None, None)]
  2. [(0, 1, 2), (3, 4, 5), (6, 7, 8), (9, 0, 0)]
  3. [(0, 1, 2), (3, 4, 5), (6, 7, 8), (9,)]
  4. [(0, 1, 2), (3, 4, 5), (6, 7, 8)]
  5. ValueError

By using zip, we get the fourth one: an incomplete group just doesn't appear at all. Sometimes that's exactly what you want. But most of the time, it's probably one of the first two that you actually want.

For that purpose, itertools has a function called zip_longest. It fills in the missing values with None, or with the fillvalue argument if you pass one in. So:
    >>> from itertools import zip_longest
    >>> args = [iter(range(10))] * 3
    >>> list(zip_longest(*args, fillvalue=0))
    [(0, 1, 2), (3, 4, 5), (6, 7, 8), (9, 0, 0)]
And now we've got everything we need to write, and understand, grouper:
    def grouper(iterable, n, fillvalue=None):
        "Collect data into fixed-length chunks or blocks"
        # grouper('ABCDEFG', 3, 'x') --> ABC DEF Gxx
        args = [iter(iterable)] * n
        return zip_longest(*args, fillvalue=fillvalue)
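A quick check against the docstring example:
    >>> list(grouper('ABCDEFG', 3, 'x'))
    [('A', 'B', 'C'), ('D', 'E', 'F'), ('G', 'x', 'x')]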
And if you want to, e.g., use zip instead of zip_longest, you know how to do it.


It's been more than a decade since Typical Programmer Greg Jorgensen taught the world about Abject-Oriented Programming.

Much of what he said still applies, but other things have changed. Languages in the Abject-Oriented space have been borrowing ideas from another paradigm entirely—and then everyone realized that languages like Python, Ruby, and JavaScript had been doing it for years and just hadn't noticed (because these languages do not require you to declare what you're doing, or even to know what you're doing). Meanwhile, new hybrid languages borrow freely from both paradigms.

This other paradigm—which is actually older, but was largely constrained to university basements until recent years—is called Functional Addiction.

A Functional Addict is someone who regularly gets higher-order—sometimes they may even exhibit dependent types—but still manages to retain a job.

Retaining a job is of course the goal of all programming. This is why some of these new hybrid languages, like Rust, check all borrowing, from both paradigms, so extensively that you can make regular progress for months without ever successfully compiling your code, and your managers will appreciate that progress. After all, once it does compile, it will definitely work.

Closures

It's long been known that Closures are dual to Encapsulation.

As Abject-Oriented Programming explained, Encapsulation involves making all of your variables public, and ideally global, to let the rest of the code decide what should and shouldn't be private.

Closures, by contrast, are a way of referring to variables from outer scopes. And there is no scope more outer than global.

Immutability

One of the reasons Functional Addiction has become popular in recent years is that to truly take advantage of multi-core systems, you need immutable data, sometimes also called persistent data.

Instead of mutating a function to fix a bug, you should always make a new copy of that function. For example:

function getCustName(custID)
{
    custRec = readFromDB("customer", custID);
    fullname = custRec[1] + ' ' + custRec[2];
    return fullname;
}

When you discover that you actually wanted fields 2 and 3 rather than 1 and 2, it might be tempting to mutate the state of this function. But doing so is dangerous. The right answer is to make a copy, and then try to remember to use the copy instead of the original:

function getCustName(custID)
{
    custRec = readFromDB("customer", custID);
    fullname = custRec[1] + ' ' + custRec[2];
    return fullname;
}

function getCustName2(custID)
{
    custRec = readFromDB("customer", custID);
    fullname = custRec[2] + ' ' + custRec[3];
    return fullname;
}

This means anyone still using the original function can continue to reference the old code, but as soon as it's no longer needed, it will be automatically garbage collected. (Automatic garbage collection isn't free, but it can be outsourced cheaply.)

Higher-Order Functions

In traditional Abject-Oriented Programming, you are required to give each function a name. But over time, the name of the function may drift away from what it actually does, making it as misleading as comments. Experience has shown that people will only keep one copy of their information up to date, and the CHANGES.TXT file is the right place for that.

Higher-Order Functions can solve this problem:

function []Functions = [
    lambda(custID) {
        custRec = readFromDB("customer", custID);
        fullname = custRec[1] + ' ' + custRec[2];
        return fullname;
    },
    lambda(custID) {
        custRec = readFromDB("customer", custID);
        fullname = custRec[2] + ' ' + custRec[3];
        return fullname;
    },
]

Now you can refer to these functions by order, so there's no need for names.

Parametric Polymorphism

Traditional languages offer Abject-Oriented Polymorphism and Ad-Hoc Polymorphism (also known as Overloading), but better languages also offer Parametric Polymorphism.

The key to Parametric Polymorphism is that the type of the output can be determined from the type of the inputs via Algebra. For example:

function getCustData(custId, x)
{
    if (x == int(x)) {
        custRec = readFromDB("customer", custId);
        fullname = custRec[1] + ' ' + custRec[2];
        return int(fullname);
    } else if (x.real == 0) {
        custRec = readFromDB("customer", custId);
        fullname = custRec[1] + ' ' + custRec[2];
        return double(fullname);
    } else {
        custRec = readFromDB("customer", custId);
        fullname = custRec[1] + ' ' + custRec[2];
        return complex(fullname);
    }
}

Notice that we've called the variable x. This is how you know you're using Algebraic Data Types. The names y, z, and sometimes w are also Algebraic.

Type Inference

Languages that enable Functional Addiction often feature Type Inference. This means that the compiler can infer your typing without you having to be explicit:


function getCustName(custID)
{
    // WARNING: Make sure the DB is locked here or
    custRec = readFromDB("customer", custID);
    fullname = custRec[1] + ' ' + custRec[2];
    return fullname;
}

We didn't specify what will happen if the DB is not locked. And that's fine, because the compiler will figure it out and insert code that corrupts the data, without us needing to tell it to!

By contrast, most Abject-Oriented languages are either nominally typed—meaning that you give names to all of your types instead of meanings—or dynamically typed—meaning that your variables are all unique individuals that can accomplish anything if they try.

Memoization

Memoization means caching the results of a function call:

function getCustName(custID)
{
    if (custID == 3) { return "John Smith"; }
    custRec = readFromDB("customer", custID);
    fullname = custRec[1] + ' ' + custRec[2];
    return fullname;
}

Non-Strictness

Non-Strictness is often confused with Laziness, but in fact Laziness is just one kind of Non-Strictness. Here's an example that compares two different forms of Non-Strictness:

/****************************************
*
* TO DO:
*
* get tax rate for the customer state
* eventually from some table
*
****************************************/
// function lazyTaxRate(custId) {}

function callByNameTaxRate(custId)
{
    /****************************************
    *
    * TO DO:
    *
    * get tax rate for the customer state
    * eventually from some table
    *
    ****************************************/
}

Both are Non-Strict, but the second one forces the compiler to actually compile the function just so we can Call it By Name. This causes code bloat. The Lazy version will be smaller and faster. Plus, Lazy programming allows us to create infinite recursion without making the program hang:

/****************************************
*
* TO DO:
*
* get tax rate for the customer state
* eventually from some table
*
****************************************/
// function lazyTaxRateRecursive(custId) { lazyTaxRateRecursive(custId); }

Laziness is often combined with Memoization:

function getCustName(custID)
{
    // if (custID == 3) { return "John Smith"; }
    custRec = readFromDB("customer", custID);
    fullname = custRec[1] + ' ' + custRec[2];
    return fullname;
}

Outside the world of Functional Addicts, this same technique is often called Test-Driven Development. If enough tests can be embedded in the code to achieve 100% coverage, or at least a decent amount, your code is guaranteed to be safe. But because the tests are not compiled and executed in the normal run, or indeed ever, they don't affect performance or correctness.

Conclusion

Many people claim that the days of Abject-Oriented Programming are over. But this is pure hype. Functional Addiction and Abject Orientation are not actually at odds with each other, but instead complement each other.
