Lazy tuple unpacking

In a recent discussion on python-ideas, Paul Tagliamonte suggested that tuple unpacking could be lazy, using iterators. But it was immediately pointed out that this would break lots of code:

    a, b, *c, d, e = range(10000000000)

If c were an iterator, you'd have to exhaust c to get to d and e.

One way to solve this would be to try to slice the iterable:

    try:
        a, b, c, d, e = i[0], i[1], i[2:-2], i[-2], i[-1]
    except TypeError:
        # current behavior

Then c ends up as a range object, which is better than either an iterator or a list.
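In fact, you can see that laziness for yourself; slicing a range just computes a new range, in constant time and space:

    >>> i = range(10000000000)
    >>> i[2:-2]
    range(2, 9999999998)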

With most types, of course, you'd get a list, because most types don't have lazy slices, but that's no worse than today. And range is by no means the only type that has them; lazy slices are critical to NumPy.

So, why doesn't Python do this?

That's actually a good question. I'm sure the answer would be more obvious if it weren't 4am, and I'll be sure and edit this section to make myself look less stupid when I think of it. :)

Smarter lazy tuple unpacking through sequence views

In Swift-style map and filter views, I looked at Swift's sequence types (Swift calls them collections, but let's stick to Python terms to avoid confusion), and pointed out that they can do things that Python's can't.

And if you take things a bit farther, what you get is exactly what Swift has. In Swift, some functions that would return iterators in Python (although not at all consistently, unfortunately) instead return lazy sequences that are views on the original sequence(s) if possible, falling back to iterators if not.

For example, map acts as if written like this:

    import collections.abc

    class MapView(collections.abc.Sequence):
        def __init__(self, func, sequence):
            self.func, self.sequence = func, sequence
        def __len__(self):
            return len(self.sequence)
        def __getitem__(self, index):
            if isinstance(index, slice):
                # Slicing returns another lazy view over the sliced sequence.
                return MapView(self.func, self.sequence[index])
            return self.func(self.sequence[index])

    def map(func, sequence):
        if isinstance(sequence, collections.abc.Sequence):
            return MapView(func, sequence)
        return (func(item) for item in sequence)

And now, you can do this:

    a, b, *c, d, e = map(lambda i: i*2, range(10000000000))

… and c is a MapView object over a range object.
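With the sketch above, nothing is computed until you ask for it; indexing c applies the function to a single element of the underlying range (a hypothetical session, assuming the unpacking slices as described):

    >>> c[0]
    4
    >>> len(c)
    9999999996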

And Swift takes this even further by adding bidirectional sequences—sequences that can't be indexed by integers, but can be indexed by a special kind of index that knows how to get the next or previous value.

BidirectionalSequence

Imagine these two new types added to collections.abc, with the existing Sequence rebased on top of them:

    # Imagined as living in collections.abc, where Iterable, Sized,
    # and Container are already defined.
    import abc

    class BidirectionalIndex(abc.ABC):
        @abc.abstractmethod
        def __succ__(self): pass
        @abc.abstractmethod
        def __pred__(self): pass

    class BidirectionalSequence(Iterable):
        @abc.abstractmethod
        def __begin__(self): pass
        @abc.abstractmethod
        def __end__(self): pass
        @abc.abstractmethod
        def __getitem__(self, index): pass

    class Sequence(Sized, BidirectionalSequence, Container):
        # existing stuff
        ...

A BidirectionalSequence is expected to return a BidirectionalIndex from __begin__ and __end__, to accept one (or a slice of them) in __getitem__, and to raise TypeError if you try to pass an int. A Sequence has the same methods, but indexed with integers, as usual. (This means that int, and maybe the ABCs in numbers as well, would have to implement __succ__ and __pred__.)
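To make that concrete, here's a minimal sketch with a hypothetical IntIndex subclass standing in for int growing the two methods:

    class IntIndex(int):
        # Hypothetical stand-in for int implementing __succ__/__pred__.
        def __succ__(self):
            return IntIndex(self + 1)
        def __pred__(self):
            return IntIndex(self - 1)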

Now we could implement a, b, *c, d, e = i like this:

    try:
        a, b, c, d, e = i[0], i[1], i[2:-2], i[-2], i[-1]
    except TypeError:
        try:
            begin, end = i.__begin__(), i.__end__()
        except AttributeError:
            # current behavior
        else:
            a = i[begin]
            begin = begin.__succ__()
            b = i[begin]
            begin = begin.__succ__()
            end = end.__pred__()
            e = i[end]
            end = end.__pred__()
            d = i[end]
            c = i[begin:end]

So, why would you want this? Well, just as map can return a lazy Sequence if given a Sequence, filter can return a lazy BidirectionalSequence if given a BidirectionalSequence. (Also, map can return a lazy BidirectionalSequence if given a BidirectionalSequence.) So you can do this:

    a, b, *c, d, e = filter(lambda i: i % 2, range(10000000000))

And this means generator expressions (or some new type of comprehension?) could similarly return the best possible type: a Sequence if given a Sequence and there are no if clauses, otherwise a BidirectionalSequence if given a BidirectionalSequence, otherwise an Iterator.

While we're at it, we could also provide a ForwardSequence, which is effectively usable in exactly the same cases as an Iterable that's not an Iterator, but provides an API consistent with BidirectionalSequence.

Reversible iterables

But that last point brings up a different possible way to get the same thing: reversible iterators.

At first glance, it seems like all you need is a new Iterable subclass (that Sequence inherits) that adds a __reviter__ method. But then you also need some way to compare iterators, to see if they're pointing at the same thing. (It's worth noting that C++ iterators and Swift indexes can do that… but they're not quite the same thing as Python iterators; Swift generators, which are exactly the same thing as Python iterators, cannot.) And the code to actually use these things would be pretty complicated. For example, the unpacking would work like this:

    try:
        a, b, c, d, e = i[0], i[1], i[2:-2], i[-2], i[-1]
    except TypeError:
        try:
            rit = reviter(i)
        except TypeError:
            # current behavior
        else:
            it = iter(i)
            if it == rit: raise ValueError('Not enough values')
            a = next(it)
            if it == rit: raise ValueError('Not enough values')
            e = next(rit)
            if it == rit: raise ValueError('Not enough values')
            b = next(it)
            if it == rit: raise ValueError('Not enough values')
            d = next(rit)
            c = make_iterator_between(it, rit)

It should be pretty obvious how that make_iterator_between is implemented.
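For what it's worth, here's a minimal sketch, assuming iterators that support that position-equality comparison (which, again, Python's don't today):

    def make_iterator_between(it, rit):
        # Yield from the forward iterator until it meets the reverse one.
        while it != rit:
            yield next(it)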

Double-ended iterators

But once you think about how that make_iterator_between works, you could just make it a double-ended iterator, with __next__ and __last__ methods. Since you've always got just a single object, you don't need that iterator equality comparison. And it's a lot easier to use. The unpacking would look like this:

    try:
        a, b, c, d, e = i[0], i[1], i[2:-2], i[-2], i[-1]
    except TypeError:
        it = iter(i)
        a = next(it)
        b = next(it)
        try:
            e = last(it)
        except TypeError:
            # existing behavior
        else:
            d = last(it)
            c = it
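To make that a bit more concrete, here's a minimal sketch of a double-ended iterator over a sequence, along with a hypothetical last builtin that parallels next (none of this exists today):

    class DoubleEndedIterator:
        def __init__(self, sequence):
            self.seq = sequence
            self.front, self.back = 0, len(sequence)
        def __iter__(self):
            return self
        def __next__(self):
            # Consume one value from the front end.
            if self.front >= self.back:
                raise StopIteration
            value = self.seq[self.front]
            self.front += 1
            return value
        def __last__(self):
            # Consume one value from the back end.
            if self.front >= self.back:
                raise StopIteration
            self.back -= 1
            return self.seq[self.back]

    def last(it):
        return it.__last__()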

Summary

So, is any version of this a reasonable suggestion for Python?

Maybe the first one, but the later ones, not at all.

Python hopped on the iterator train years ago, and 3.0's conversion of map and friends to iterators completed the transition. Making another transition to different kinds of lazy sequences at this point would be insane, unless the benefit was absolutely gigantic.

Reversible iterators are a much less radical change, but they're also a big pain to use, and a bit of a pain to implement.

Double-ended iterators are an even less radical change, and simpler both to use and to implement… but they're not a very coherent concept to explain. An iterator today is pretty simple to understand: It keeps track of a current location within some (possibly virtual) iterable. An iterator that can also go backward, fine. But an iterator that keeps track of a pair of locations and can only move inward? That's an odd thing.

In a new language, it might be another story. In fact, if you got lazy sequences right, iterators would be something that users rarely have to, or want to, look at. (Which makes the interface simpler, too—map doesn't have to worry about fallback behavior when called on an iterator, just don't let it be called on anything but a sequence.) Which raises the question of why Swift added iterables in the first place. (It's not because they couldn't think of how else generators and comprehensions could work, because Swift has neither… nor does it have extended unpacking, for that matter.)

It's been more than a decade since Typical Programmer Greg Jorgensen taught the world about Abject-Oriented Programming.

Much of what he said still applies, but other things have changed. Languages in the Abject-Oriented space have been borrowing ideas from another paradigm entirely—and then everyone realized that languages like Python, Ruby, and JavaScript had been doing it for years and just hadn't noticed (because these languages do not require you to declare what you're doing, or even to know what you're doing). Meanwhile, new hybrid languages borrow freely from both paradigms.

This other paradigm—which is actually older, but was largely constrained to university basements until recent years—is called Functional Addiction.

A Functional Addict is someone who regularly gets higher-order—sometimes they may even exhibit dependent types—but still manages to retain a job.

Retaining a job is of course the goal of all programming. This is why some of these new hybrid languages, like Rust, check all borrowing, from both paradigms, so extensively that you can make regular progress for months without ever successfully compiling your code, and your managers will appreciate that progress. After all, once it does compile, it will definitely work.

Closures

It's long been known that Closures are dual to Encapsulation.

As Abject-Oriented Programming explained, Encapsulation involves making all of your variables public, and ideally global, to let the rest of the code decide what should and shouldn't be private.

Closures, by contrast, are a way of referring to variables from outer scopes. And there is no scope more outer than global.

Immutability

One of the reasons Functional Addiction has become popular in recent years is that to truly take advantage of multi-core systems, you need immutable data, sometimes also called persistent data.

Instead of mutating a function to fix a bug, you should always make a new copy of that function. For example:

function getCustName(custID)
{
    custRec = readFromDB("customer", custID);
    fullname = custRec[1] + ' ' + custRec[2];
    return fullname;
}

When you discover that you actually wanted fields 2 and 3 rather than 1 and 2, it might be tempting to mutate the state of this function. But doing so is dangerous. The right answer is to make a copy, and then try to remember to use the copy instead of the original:

function getCustName(custID)
{
    custRec = readFromDB("customer", custID);
    fullname = custRec[1] + ' ' + custRec[2];
    return fullname;
}

function getCustName2(custID)
{
    custRec = readFromDB("customer", custID);
    fullname = custRec[2] + ' ' + custRec[3];
    return fullname;
}

This means anyone still using the original function can continue to reference the old code, but as soon as it's no longer needed, it will be automatically garbage collected. (Automatic garbage collection isn't free, but it can be outsourced cheaply.)

Higher-Order Functions

In traditional Abject-Oriented Programming, you are required to give each function a name. But over time, the name of the function may drift away from what it actually does, making it as misleading as comments. Experience has shown that people will only keep one copy of their information up to date, and the CHANGES.TXT file is the right place for that.

Higher-Order Functions can solve this problem:

function []Functions = [
    lambda(custID) {
        custRec = readFromDB("customer", custID);
        fullname = custRec[1] + ' ' + custRec[2];
        return fullname;
    },
    lambda(custID) {
        custRec = readFromDB("customer", custID);
        fullname = custRec[2] + ' ' + custRec[3];
        return fullname;
    },
]

Now you can refer to these functions by order, so there's no need for names.

Parametric Polymorphism

Traditional languages offer Abject-Oriented Polymorphism and Ad-Hoc Polymorphism (also known as Overloading), but better languages also offer Parametric Polymorphism.

The key to Parametric Polymorphism is that the type of the output can be determined from the type of the inputs via Algebra. For example:

function getCustData(custId, x)
{
    if (x == int(x)) {
        custRec = readFromDB("customer", custId);
        fullname = custRec[1] + ' ' + custRec[2];
        return int(fullname);
    } else if (x.real == 0) {
        custRec = readFromDB("customer", custId);
        fullname = custRec[1] + ' ' + custRec[2];
        return double(fullname);
    } else {
        custRec = readFromDB("customer", custId);
        fullname = custRec[1] + ' ' + custRec[2];
        return complex(fullname);
    }
}

Notice that we've called the variable x. This is how you know you're using Algebraic Data Types. The names y, z, and sometimes w are also Algebraic.

Type Inference

Languages that enable Functional Addiction often feature Type Inference. This means that the compiler can infer your typing without you having to be explicit:


function getCustName(custID)
{
    // WARNING: Make sure the DB is locked here or
    custRec = readFromDB("customer", custID);
    fullname = custRec[1] + ' ' + custRec[2];
    return fullname;
}

We didn't specify what will happen if the DB is not locked. And that's fine, because the compiler will figure it out and insert code that corrupts the data, without us needing to tell it to!

By contrast, most Abject-Oriented languages are either nominally typed—meaning that you give names to all of your types instead of meanings—or dynamically typed—meaning that your variables are all unique individuals that can accomplish anything if they try.

Memoization

Memoization means caching the results of a function call:

function getCustName(custID)
{
    if (custID == 3) { return "John Smith"; }
    custRec = readFromDB("customer", custID);
    fullname = custRec[1] + ' ' + custRec[2];
    return fullname;
}

Non-Strictness

Non-Strictness is often confused with Laziness, but in fact Laziness is just one kind of Non-Strictness. Here's an example that compares two different forms of Non-Strictness:

/****************************************
*
* TO DO:
*
* get tax rate for the customer state
* eventually from some table
*
****************************************/
// function lazyTaxRate(custId) {}

function callByNameTaxRate(custId)
{
    /****************************************
    *
    * TO DO:
    *
    * get tax rate for the customer state
    * eventually from some table
    *
    ****************************************/
}

Both are Non-Strict, but the second one forces the compiler to actually compile the function just so we can Call it By Name. This causes code bloat. The Lazy version will be smaller and faster. Plus, Lazy programming allows us to create infinite recursion without making the program hang:

/****************************************
*
* TO DO:
*
* get tax rate for the customer state
* eventually from some table
*
****************************************/
// function lazyTaxRateRecursive(custId) { lazyTaxRateRecursive(custId); }

Laziness is often combined with Memoization:

function getCustName(custID)
{
    // if (custID == 3) { return "John Smith"; }
    custRec = readFromDB("customer", custID);
    fullname = custRec[1] + ' ' + custRec[2];
    return fullname;
}

Outside the world of Functional Addicts, this same technique is often called Test-Driven Development. If enough tests can be embedded in the code to achieve 100% coverage, or at least a decent amount, your code is guaranteed to be safe. But because the tests are not compiled and executed in the normal run, or indeed ever, they don't affect performance or correctness.

Conclusion

Many people claim that the days of Abject-Oriented Programming are over. But this is pure hype. Functional Addiction and Abject Orientation are not actually at odds with each other, but instead complement each other.
