The itertools module in the standard library comes with a nifty groupby function to group runs of equal values together.
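For anyone who hasn't used it, here's `groupby` doing its usual job, grouping runs of equal values (everything here is straight from the standard library):

```python
import itertools

# groupby yields (key, group-iterator) pairs, one per run of equal keys
letters = 'AAAABBBCCD'
groups = [(k, ''.join(g)) for k, g in itertools.groupby(letters)]
print(groups)  # [('A', 'AAAA'), ('B', 'BBB'), ('C', 'CC'), ('D', 'D')]
```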

If you wanted to group runs of adjacent values instead, that should be easy, right?

Let's give a concrete example:
    >>> runs([0, 1, 2, 3, 5, 6, 7, 10, 11, 13, 16])
    [(0, 3), (5, 7), (10, 11), (13, 13), (16, 16)]
If we could make groupby give us (0, 1, 2, 3), then (5, 6, 7), etc.—this would be trivial.

Unfortunately, that's easier said than done.

What's the key?

To customize groupby, you give it the same kind of key function as all of the sorting-related functions: a function that takes each value and produces a key that can be compared to another value's key. So, if you want to group 1 and 2 into the same run… what key does that?

It may not be immediately obvious how to build such a key. But the functools module comes with a helper called cmp_to_key that does it automatically. You give it an "old-style comparison function"—that is, a function on two arguments that returns -1, 0, or 1 depending on whether the left argument is less than, equal to, or greater than the right—and it gives you back a key function. And for our problem, that comparison function is obvious.
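As a refresher, here's cmp_to_key doing its normal job, turning an old-style comparison into a key for sorted (a minimal sketch using only the standard library):

```python
import functools

# An old-style comparison function: negative, zero, or positive,
# like Python 2's cmp(). This one sorts in descending order.
def reverse_cmp(a, b):
    return (a < b) - (a > b)

result = sorted([3, 1, 2], key=functools.cmp_to_key(reverse_cmp))
print(result)  # [3, 2, 1]
```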

But if you try it, this doesn't actually work:
    >>> def adjacent_cmp(x, y):
    ...     if x+1 < y: return -1
    ...     elif x > y: return 1
    ...     else: return 0
    >>> adjacent_key = functools.cmp_to_key(adjacent_cmp)
    >>> a = [0, 1, 2, 3, 5, 6, 7, 10, 11, 13, 16]
    >>> [list(g) for k, g in itertools.groupby(a, adjacent_key)]
    [[0, 1], [2, 3], [5, 6], [7], [10, 11], [13], [16]]
It groups 0 and 1 together, but 2 isn't part of the group. Why?

Because groupby is remembering the first key in the group, and testing each new value against it, not remembering the most recent key in the group. Since the whole point of the key function is that the first key and the most recent key are equal, it's perfectly within its rights to do this—and, since it makes the code a little simpler and a little more efficient, it makes sense that it would.
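You can see the asymmetry directly, using the same adjacent_cmp from above: each key compares equal to its immediate successor, but not to anything two or more away—so once groupby has latched onto the first key of the group, the run dies two steps later:

```python
import functools

def adjacent_cmp(x, y):
    if x + 1 < y: return -1
    elif x > y: return 1
    else: return 0

adjacent_key = functools.cmp_to_key(adjacent_cmp)

# groupby tests each new key against the *first* key of the group:
eq_01 = adjacent_key(0) == adjacent_key(1)  # True: 1 is adjacent to 0
eq_02 = adjacent_key(0) == adjacent_key(2)  # False: 2 is not adjacent to 0,
                                            # so 2 starts a new group
print(eq_01, eq_02)
```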

There are two ways to fix this: We could change groupby (after all, the documentation comes with a pure Python equivalent) to remember the most recent key in the group instead of the first one, or we could create a stateful key callable that caches the last value that compared equal instead of its initial value. The second one seems hackier and harder, so let's start with the first.

Customizing groupby

First, let's just take the code out of the docs and run it:
    >>> [list(g) for k, g in groupby(a, adjacent_key)]
    TypeError: other argument must be K instance
It's not actually equivalent after all! What's the difference?

The Python implementation has lines like this:
    while self.currkey == self.tgtkey:
That seems reasonable, but these values start off as an object() sentinel, and the key returned by a cmp_to_key-made key function doesn't know how to compare itself to anything but another such key. Its K.__eq__ insists that the other argument be a K instance, and when it isn't—as with our sentinel—it raises the TypeError we're ultimately seeing.
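You can reproduce this outside groupby. (Note that the exact error message comes from CPython's C-accelerated cmp_to_key; on an implementation that falls back to the pure Python version, the failure mode may differ.)

```python
import functools

key = functools.cmp_to_key(lambda a, b: 0)
sentinel = object()

# Comparing a cmp_to_key key against a plain object blows up:
try:
    key(0) == sentinel
    raised = None
except TypeError as e:
    raised = e
print(raised)
```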

So, the first fix we have to make is to change this comparison, and a similar one a few lines down, to deal with the sentinel explicitly (which also means we have to store the sentinel so we can compare to it). I won't show the code for this or the next change, because you don't really need to see the broken code; the last version will have all of the changes.

So let's try it again:
    >>> [list(g) for k, g in groupby(a, functools.cmp_to_key(adjacent_cmp))]
    [[0], [1], [2], [3], [5], [6], [7], [10], [11], [13], [16]]
It's still not equivalent. What's different now?

If you look at the comparisons, they look like this:
    while self.currkey == self.tgtkey:
That's clearly backward. Obviously, when you're doing an ==, it shouldn't matter which value is on which side… except that we've deliberately distorted the meaning of == so that being 1 less counts as the same, while being 1 more does not. We could, and probably should, change our cmp function to be symmetric... but if we're already changing the "equivalent" pure Python groupby to actually be equivalent, we might as well change this too.

So, with that change, let's try again:
    >>> [list(g) for k, g in groupby(a, adjacent_key)]
    [[0, 1], [2, 3], [5, 6], [7], [10, 11], [13], [16]]
OK, now we're at parity with the real groupby. The change we actually wanted was to remember the last key in the group instead of the first, which means changing this line:
    self.currkey = self.keyfunc(self.currvalue)
And let's try it out:
    >>> [list(g) for k, g in groupby(a, functools.cmp_to_key(adjacent_cmp))]
    [[0, 1, 2, 3], [5, 6, 7], [10, 11], [13], [16]]
But wait, there's another problem. If the last key is part of a group, but doesn't compare equal to the start of that group, it gets repeated on its own. For example:
    >>> a = [1, 2, 3]
    >>> [list(g) for k, g in groupby(a, functools.cmp_to_key(adjacent_cmp))]
    [[1, 2, 3], [3]]
Why does this happen? Well, we're updating tgtkey within _grouper, but that doesn't affect self.tgtkey. As far as I can tell, the only reason _grouper takes tgtkey as an argument instead of using the attribute is a micro-optimization (local variable lookup is faster than attribute lookup), one that's rarely going to make any difference in your program's performance (especially considering all the other attribute lookups we're doing in the same loop). So, the easy fix is to not do that.

Putting it all together:
    class groupby:
        # [k for k, g in groupby('AAAABBBCCDAABBB')] --> A B C D A B
        # [list(g) for k, g in groupby('AAAABBBCCD')] --> AAAA BBB CC D
        def __init__(self, iterable, key=None):
            if key is None:
                key = lambda x: x
            self.keyfunc = key
            self.it = iter(iterable)
            self.sentinel = self.tgtkey = self.currkey = self.currvalue = object()
        def __iter__(self):
            return self
        def __next__(self):
            while (self.currkey is self.sentinel
                   or self.tgtkey is not self.sentinel
                   and self.tgtkey == self.currkey):
                self.currvalue = next(self.it)    # Exit on StopIteration
                self.currkey = self.keyfunc(self.currvalue)
            self.tgtkey = self.currkey
            return (self.currkey, self._grouper())
        def _grouper(self):
            while self.tgtkey is self.sentinel or self.tgtkey == self.currkey:
                yield self.currvalue
                self.currvalue = next(self.it)    # Exit on StopIteration
                self.tgtkey, self.currkey = self.currkey, self.keyfunc(self.currvalue)
At any rate, this has turned into a much more complicated solution than it originally appeared. Is there a better way? Stateful keys sounded a lot worse, but let's see if they really are.

Stateful keys

So, what would a stateful key look like? Well, let's look at what our existing key looks like. cmp_to_key is actually implemented in pure Python (with an optional C accelerator), and it looks like this:
    def cmp_to_key(mycmp):
        """Convert a cmp= function into a key= function"""
        class K(object):
            __slots__ = ['obj']
            def __init__(self, obj):
                self.obj = obj
            def __lt__(self, other):
                return mycmp(self.obj, other.obj) < 0
            def __gt__(self, other):
                return mycmp(self.obj, other.obj) > 0
            def __eq__(self, other):
                return mycmp(self.obj, other.obj) == 0
            def __le__(self, other):
                return mycmp(self.obj, other.obj) <= 0
            def __ge__(self, other):
                return mycmp(self.obj, other.obj) >= 0
            def __ne__(self, other):
                return mycmp(self.obj, other.obj) != 0
            __hash__ = None
        return K
So, we want to update self.obj = other whenever they're equal, in all six comparison functions.

But we already know from the groupby source that it never calls anything but ==. And really, we don't want this key function to work with anything that calls, say, <; the idea of "sorting by adjacency" doesn't sound as compelling (or even as coherent) as grouping by adjacency. So, let's just implement that one operator.

While we're at it, let's fix the problem mentioned above, that if you compare our keys backward you get the wrong answer. If we can make a==b and b==a always the same, that makes things even less hacky.
    class AdjacentKey(object):
        __slots__ = ['obj']
        def __init__(self, obj):
            self.obj = obj
        def __eq__(self, other):
            ret = self.obj - 1 <= other.obj <= self.obj + 1
            if ret:
                self.obj = other.obj
            return ret
Does it work?
    >>> a = [0, 1, 2, 3, 5, 6, 7, 10, 11, 13, 16]
    >>> [list(g) for k, g in itertools.groupby(a, AdjacentKey)]
    [[0, 1, 2, 3], [5, 6, 7], [10, 11], [13], [16]]
Yes, and on the first try. As it turns out, that was actually easier, not harder. Sometimes it's worth looking under the covers of the standard library.

From groups to runs

Now we can easily write the function we wanted in the first place:
    def first_and_last(iterable):
        start = end = next(iterable)
        for end in iterable: pass
        return start, end

    def runs(iterable):
        for k, g in itertools.groupby(iterable, AdjacentKey):
            yield first_and_last(g)
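Restating the pieces from above so the snippet runs on its own, this reproduces the output we wanted at the top:

```python
import itertools

class AdjacentKey(object):
    __slots__ = ['obj']
    def __init__(self, obj):
        self.obj = obj
    def __eq__(self, other):
        ret = self.obj - 1 <= other.obj <= self.obj + 1
        if ret:
            self.obj = other.obj  # remember the most recent matching value
        return ret

def first_and_last(iterable):
    start = end = next(iterable)
    for end in iterable: pass
    return start, end

def runs(iterable):
    for k, g in itertools.groupby(iterable, AdjacentKey):
        yield first_and_last(g)

print(list(runs([0, 1, 2, 3, 5, 6, 7, 10, 11, 13, 16])))
# [(0, 3), (5, 7), (10, 11), (13, 13), (16, 16)]
```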

Beyond 1

We've baked the notion of adjacency into our key: within +/- 1. But you might want a larger or smaller cutoff. Or, for that matter, a different type of cutoff—e.g., timedelta(hours=24). Or you might want to apply a key function to the values before comparing them, which would make it easy to implement a "natural sorting" rule (like the Mac Finder's), so "file001.jpg" and "file002.jpg" are adjacent. Or you might want a different comparison predicate altogether—e.g., within +/- 1% instead of +/- 1.

All of these are easy to add:
    def adjacent_key(cutoff=1, key=None, predicate=None):
        if key is None:
            key = lambda v: v
        if predicate is None:
            def predicate(lhs, rhs):
                return lhs - cutoff <= rhs <= lhs + cutoff
        class K(object):
            __slots__ = ['obj']
            def __init__(self, obj):
                self.obj = obj
            def __eq__(self, other):
                ret = predicate(key(self.obj), key(other.obj))
                if ret:
                    self.obj = other.obj
                return ret
        return K
Let's test it out. (I'm going to from … import a bunch of things here to make it a bit more concise.)
    >>> [list(g) for k, g in groupby(a, adjacent_key(2))]
    [[0, 1, 2, 3, 5, 6, 7], [10, 11, 13], [16]]
    >>> b = [date(2001, 1, 1), date(2001, 2, 28), date(2001, 3, 1), 
             date(2001, 3, 2), date(2001, 6, 4)]
    >>> [list(g) for k, g in groupby(b, adjacent_key(timedelta(days=1)))]
    [[datetime.date(2001, 1, 1)], 
     [datetime.date(2001, 2, 28), datetime.date(2001, 3, 1), datetime.date(2001, 3, 2)],
     [datetime.date(2001, 6, 4)]]
    >>> def extract_number(s): return int(search(r'\d+', s).group(0))
    >>> c = ['file001.jpg', 'file002.jpg', 'file010.jpg']
    >>> [list(g) for k, g in groupby(c, adjacent_key(key=extract_number))]
    [['file001.jpg', 'file002.jpg'], ['file010.jpg']]
    >>> d = [1, 1.01, 1.02, 10, 99, 100]
    >>> [list(g) for k, g in groupby(d, adjacent_key(predicate=lambda x, y: 1/1.1 < x/y < 1.1))]
    [[1, 1.01, 1.02], [10], [99, 100]]
And let's make runs more flexible, while we're at it:
    def runs(iterable, *args, **kwargs):
        for k, g in itertools.groupby(iterable, adjacent_key(*args, **kwargs)):
            yield first_and_last(g)
A quick test:
    >>> list(runs(c, key=extract_number))
    [('file001.jpg', 'file002.jpg'), ('file010.jpg', 'file010.jpg')]
And we're done.

It's been more than a decade since Typical Programmer Greg Jorgensen taught the world about Abject-Oriented Programming.

Much of what he said still applies, but other things have changed. Languages in the Abject-Oriented space have been borrowing ideas from another paradigm entirely—and then everyone realized that languages like Python, Ruby, and JavaScript had been doing it for years and just hadn't noticed (because these languages do not require you to declare what you're doing, or even to know what you're doing). Meanwhile, new hybrid languages borrow freely from both paradigms.

This other paradigm—which is actually older, but was largely constrained to university basements until recent years—is called Functional Addiction.

A Functional Addict is someone who regularly gets higher-order—sometimes they may even exhibit dependent types—but still manages to retain a job.

Retaining a job is of course the goal of all programming. This is why some of these new hybrid languages, like Rust, check all borrowing, from both paradigms, so extensively that you can make regular progress for months without ever successfully compiling your code, and your managers will appreciate that progress. After all, once it does compile, it will definitely work.

Closures

It's long been known that Closures are dual to Encapsulation.

As Abject-Oriented Programming explained, Encapsulation involves making all of your variables public, and ideally global, to let the rest of the code decide what should and shouldn't be private.

Closures, by contrast, are a way of referring to variables from outer scopes. And there is no scope more outer than global.

Immutability

One of the reasons Functional Addiction has become popular in recent years is that to truly take advantage of multi-core systems, you need immutable data, sometimes also called persistent data.

Instead of mutating a function to fix a bug, you should always make a new copy of that function. For example:

function getCustName(custID)
{
    custRec = readFromDB("customer", custID);
    fullname = custRec[1] + ' ' + custRec[2];
    return fullname;
}

When you discover that you actually wanted fields 2 and 3 rather than 1 and 2, it might be tempting to mutate the state of this function. But doing so is dangerous. The right answer is to make a copy, and then try to remember to use the copy instead of the original:

function getCustName(custID)
{
    custRec = readFromDB("customer", custID);
    fullname = custRec[1] + ' ' + custRec[2];
    return fullname;
}

function getCustName2(custID)
{
    custRec = readFromDB("customer", custID);
    fullname = custRec[2] + ' ' + custRec[3];
    return fullname;
}

This means anyone still using the original function can continue to reference the old code, but as soon as it's no longer needed, it will be automatically garbage collected. (Automatic garbage collection isn't free, but it can be outsourced cheaply.)

Higher-Order Functions

In traditional Abject-Oriented Programming, you are required to give each function a name. But over time, the name of the function may drift away from what it actually does, making it as misleading as comments. Experience has shown that people will only keep one copy of their information up to date, and the CHANGES.TXT file is the right place for that.

Higher-Order Functions can solve this problem:

function []Functions = [
    lambda(custID) {
        custRec = readFromDB("customer", custID);
        fullname = custRec[1] + ' ' + custRec[2];
        return fullname;
    },
    lambda(custID) {
        custRec = readFromDB("customer", custID);
        fullname = custRec[2] + ' ' + custRec[3];
        return fullname;
    },
]

Now you can refer to these functions by order, so there's no need for names.

Parametric Polymorphism

Traditional languages offer Abject-Oriented Polymorphism and Ad-Hoc Polymorphism (also known as Overloading), but better languages also offer Parametric Polymorphism.

The key to Parametric Polymorphism is that the type of the output can be determined from the type of the inputs via Algebra. For example:

function getCustData(custId, x)
{
    if (x == int(x)) {
        custRec = readFromDB("customer", custId);
        fullname = custRec[1] + ' ' + custRec[2];
        return int(fullname);
    } else if (x.real == 0) {
        custRec = readFromDB("customer", custId);
        fullname = custRec[1] + ' ' + custRec[2];
        return double(fullname);
    } else {
        custRec = readFromDB("customer", custId);
        fullname = custRec[1] + ' ' + custRec[2];
        return complex(fullname);
    }
}

Notice that we've called the variable x. This is how you know you're using Algebraic Data Types. The names y, z, and sometimes w are also Algebraic.

Type Inference

Languages that enable Functional Addiction often feature Type Inference. This means that the compiler can infer your typing without you having to be explicit:


function getCustName(custID)
{
    // WARNING: Make sure the DB is locked here or
    custRec = readFromDB("customer", custID);
    fullname = custRec[1] + ' ' + custRec[2];
    return fullname;
}

We didn't specify what will happen if the DB is not locked. And that's fine, because the compiler will figure it out and insert code that corrupts the data, without us needing to tell it to!

By contrast, most Abject-Oriented languages are either nominally typed—meaning that you give names to all of your types instead of meanings—or dynamically typed—meaning that your variables are all unique individuals that can accomplish anything if they try.

Memoization

Memoization means caching the results of a function call:

function getCustName(custID)
{
    if (custID == 3) { return "John Smith"; }
    custRec = readFromDB("customer", custID);
    fullname = custRec[1] + ' ' + custRec[2];
    return fullname;
}

Non-Strictness

Non-Strictness is often confused with Laziness, but in fact Laziness is just one kind of Non-Strictness. Here's an example that compares two different forms of Non-Strictness:

/****************************************
*
* TO DO:
*
* get tax rate for the customer state
* eventually from some table
*
****************************************/
// function lazyTaxRate(custId) {}

function callByNameTaxRate(custId)
{
    /****************************************
    *
    * TO DO:
    *
    * get tax rate for the customer state
    * eventually from some table
    *
    ****************************************/
}

Both are Non-Strict, but the second one forces the compiler to actually compile the function just so we can Call it By Name. This causes code bloat. The Lazy version will be smaller and faster. Plus, Lazy programming allows us to create infinite recursion without making the program hang:

/****************************************
*
* TO DO:
*
* get tax rate for the customer state
* eventually from some table
*
****************************************/
// function lazyTaxRateRecursive(custId) { lazyTaxRateRecursive(custId); }

Laziness is often combined with Memoization:

function getCustName(custID)
{
    // if (custID == 3) { return "John Smith"; }
    custRec = readFromDB("customer", custID);
    fullname = custRec[1] + ' ' + custRec[2];
    return fullname;
}

Outside the world of Functional Addicts, this same technique is often called Test-Driven Development. If enough tests can be embedded in the code to achieve 100% coverage, or at least a decent amount, your code is guaranteed to be safe. But because the tests are not compiled and executed in the normal run, or indeed ever, they don't affect performance or correctness.

Conclusion

Many people claim that the days of Abject-Oriented Programming are over. But this is pure hype. Functional Addiction and Abject Orientation are not actually at odds with each other, but instead complement each other.
