There's a pair of intertwined threads in python-ideas about coming up with a way to break out of list comprehensions and generator expressions early.

Oscar Benjamin gives a good example:

    isprime = all(n % p for p in takewhile(lambda p: p**2 < n, primes_seen))

That's ugly. What you really want is to be able to write something like:

    isprime = all(n % p for p in primes_seen while p**2 < n)

This is exactly the same as replacing filter calls with if clauses, so it ought to be solvable, and worth solving, right?

But there are serious problems with a while clause that an if clause doesn't have. And, as the thread demonstrates, nobody's been able to come up with a better syntax that doesn't have at least as many problems. Also, as Nick Coghlan pointed out earlier in the thread, there's probably a reason that filter is a builtin and takewhile is not. So, the cost is higher, and, while the benefit is equivalent, it's for a much less common case.

I think it's worth backing up to see why we have this problem.

Ultimately, the problem—if there is one—is that Python got comprehensions "wrong". But I'm not sure it's actually wrong at all.

Incomprehensibility

Python borrowed comprehensions directly from Haskell. A comprehension is a sequence of for and if clauses, which are interpreted as the equivalent nested statements. In other words, this:

    (line for file in files for line in file if not line.startswith('#'))

… means:

    def lines(files):
        for file in files:
            for line in file:
                if not line.startswith('#'):
                    yield line
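
As a quick sanity check that the two spellings really do agree, here's a self-contained demo with hypothetical in-memory "files"—just lists of lines:

    # hypothetical stand-in data: each "file" is just a list of lines
    files = [["# comment\n", "line 1\n"], ["line 2\n", "# another\n"]]

    genexp = (line for file in files
              for line in file if not line.startswith('#'))
    assert list(genexp) == list(lines(files)) == ["line 1\n", "line 2\n"]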

And this is exactly why we can't add a while clause: you can't map it to a nested while statement. For example, imagine you want to parse the RFC822 headers from a list of files:

    (line for file in files for line in file while line.strip())

    def lines(files):
        for file in files:
            for line in file:
                while line.strip():
                    yield line

While that's legal code, it doesn't mean what you want—it means an infinite sequence of the first line in the first file.

And the problem is the same for every other syntax anyone's come up with. In particular, anything based on break isn't going to work, because break doesn't nest.

You could invent an until statement that can be nested:

    (line for file in files for line in file until not line.strip())

    def lines(files):
        for file in files:
            for line in file:
                until not line.strip():
                    yield line

… but that's a statement nobody's ever going to use.

You can read the thread for all of the different alternatives proposed, but they all have the same problem.

But really, you don't care about nesting if statements. After all, there's no reason to ever write the first statement below given that it's always equivalent to, and less readable than, the second:

    if foo:
        if bar:

    if foo and bar:

And the same is going to be true for while/break/until/whatever. In fact, you explicitly don't want it nested.

And that leads to a different way to design comprehensions: A sequence of nested for clauses, each of which can have exactly one optional if condition attached—defined as, in effect, applying a filter. To make things clearer, we might even want to add punctuation. After all, when you translate a non-trivial comprehension into English, you definitely use punctuation. For example:

    (line for file in files, for line in file if not line.startswith('#'))

That design makes it easy to attach a new kind of condition onto each nested for loop: After the optional if condition, there's an optional while condition, defined as applying a takewhile. So:

    (line for file in files, for line in file while line.strip())
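
That reading also pins down exactly what the new comprehension would mean, because it desugars to something we can already write today. Here's a sketch using itertools.takewhile:

    from itertools import takewhile

    def lines(files):
        for file in files:
            # the while condition becomes a takewhile on the innermost
            # iterable, rather than a nested while statement
            for line in takewhile(lambda line: line.strip(), file):
                yield line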

Most Lisp-family languages that have borrowed comprehensions do something similar to this (e.g., Clojure, Racket), and most of them have while conditions. But I can't think of a single language that does Haskell-style comprehensions that has a while condition. And I don't think that's an accident.

Lambda the Ultimate Pain in the Neck

But if we borrowed comprehensions straight out of Haskell, how could we have gotten them wrong? Haskell doesn't have this problem, does it?

Well, in Haskell, the right answer is to use takewhile—but the reason that works is that it's not nearly as painful as in Python.

The problem with takewhile is that you need to turn your expression into a function. For a trivial expression, wrapping it in a lambda (or an out-of-line def) means there's more boilerplate noise than actual functionality.

Going back to Oscar Benjamin's example:

    lambda p: p**2 < n

Besides being almost twice as long, it also requires you to create a new p argument just to pass the existing p value to—and it's confusing if you name them both p, and even more confusing if you don't.

But Haskell's lambdas are no better than Python's:

    \p -> p**2 < n

Sure, they're a bit briefer, but that doesn't make them more readable.

The reason this works in Haskell is that you usually don't need a lambda at all. For example, these are equivalent functions:

    \p -> 2**p
    (**) 2

And so are these:

    \i -> n < i
    (<) n

The Python equivalents are:

    partial(pow, 2)
    partial(lt, n)

Partly this is because Haskell lets you access any infix operator as a prefix function just by wrapping it in parens—but the real key is that functions are always curried, which means you get automatic partials. It's not the "pow" and "lt" that are a problem there, it's the "partial".
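
To spell out what those Python equivalents compute (a tiny demo; note that partial binds the *first* argument, which matters for the next step):

    from functools import partial
    from operator import lt, pow

    print(partial(pow, 2)(5))   # 2 ** 5 == 32, like Haskell's (**) 2
    print(partial(lt, 10)(12))  # 10 < 12 == True, like (<) n with n = 10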

And combining those two Haskell features means that these are also equivalent:

    \p -> p**2
    (** 2)

    \i -> i < n
    (< n)

There's no way to do that in Python without a function to flip the args around:

    def flip(f):
        def flipped(x, y):
            return f(y, x)
        return flipped

    partial(flip(pow), 2)
    partial(flip(lt), n)
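
And a quick check that the flipped versions compute what the Haskell sections do (reusing the flip defined above, with n = 10 as a stand-in):

    from functools import partial
    from operator import lt

    square = partial(flip(pow), 2)       # like Haskell's (** 2)
    less_than_n = partial(flip(lt), 10)  # like (< n), with n = 10
    print(square(5), less_than_n(5))     # 25 True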

Meanwhile, Haskell makes it trivial to compose functions. There are even better alternatives, but here's the basic one that you know by the end of the first chapter of any tutorial:

    (< n) . (** 2)

What would that look like in Python?

    def flip(f):
        def flipped(x, y):
            return f(y, x)
        return flipped

    def compose(f, g):
        def composed(x):
            return f(g(x))
        return composed

    compose(partial(flip(lt), n), partial(flip(pow), 2))

Compare that to the lambda we were hoping to eliminate:

    lambda p: p**2 < n

It's pretty obvious that this is not an improvement!

Programming in Haskell is all about building functions out of functions. Programming in Python is not. This certainly doesn't mean Haskell is a better language, it just means that Haskell-style comprehensions are more useful in Haskell than in Python.

Is there another way around the problem?

C++

C++03 had an even worse problem with lambda than Python: it didn't have lambda at all. This meant that many of the powerful tools in the <algorithm> library—equivalents to map, filter, etc.—were rarely used, and especially not in simple cases.

For example, if you want to turn an explicit loop like this:

    template <typename In, typename Out>
    void twice(In first, In last, Out out) {
        for (In it = first; it != last; ++it)
            *out++ = *it + *it;
    }

… into a transform call, it looks something like this:

    template <typename T>
    struct doubler {
        T operator()(T i) const { return i + i; }
    };

    template <typename In, typename Out>
    void twice(In first, In last, Out out) {
        transform(first, last, out,
                  doubler<typename std::iterator_traits<In>::value_type>());
    }

Nobody's going to use that unless the logic is pretty complex. Which means many C++ programmers never even learn it, so they don't use it even when the logic is complex enough to be worth it.

In C++11, you can write it with a lambda (strictly speaking, the auto parameter needs C++14's generic lambdas):

    template <typename In, typename Out>
    void twice(In first, In last, Out out) {
        transform(first, last, out, [](auto i) { return i + i; });
    }

… which gets you to about the same place as Python.

But before C++11, there were a number of attempts to build lambda purely out of libraries. Most of them were expression tree libraries, like Boost.Lambda, which let you write things like this:

    template <typename In, typename Out>
    void twice(In first, In last, Out out) {
        transform(first, last, out, _1 + _1);
    }

Now that's more like it. It's not beautiful—it requires magic variables, and it's not at all clear that "_1 + _1" is actually a function—but who ever expects C++ to be beautiful? More seriously, it ends up being much harder for the compiler to deal with, which means at best your compilation takes 5x longer; at worst your actual code also runs slower (because things didn't get inlined and optimized ideally), and people do expect C++ to be fast…

Anyway, if you had this in Python, you could just write this:

    out = map(_1 + _1, values)

And can we build the same thing in Python? Sure. An expression tree is just a callable object that overrides all of the usual operators to return new callable objects.

Here's a quick hack:

    class Placeholder(object):
        def __init__(self, func=lambda arg: arg):
            self.func = func
        def __call__(self, arg):
            return self.func(arg)
        def __add__(self, addend):
            return Placeholder(lambda arg: self.func(arg) + addend)
        def __radd__(self, addend):
            return Placeholder(lambda arg: addend + self.func(arg))
        # etc.

    _1 = Placeholder()

    numbers = map(_1 + 1, range(5))
    print(list(numbers))

Again, this is a quick hack—it won't work with _1 + _1, or a _2, and it's missing a way to handle method calls (which you can take care of with __getattr__) and normal functions (which you need some kind of wrapper for)… but this is enough to show the idea. In fact, it's enough to solve our initial problem:

    isprime = all(n % p for p in takewhile(_1 ** 2 < n, primes_seen))
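
To make that actually run, here's a slightly fuller (though still toy) sketch, assuming nothing beyond the standard library. It fills in the __pow__ and __lt__ hidden behind the "# etc." above, and evaluates operands that are themselves Placeholders—which also fixes the _1 + _1 case:

    from itertools import takewhile

    class Placeholder:
        def __init__(self, func=lambda arg: arg):
            self.func = func
        def __call__(self, arg):
            return self.func(arg)
        def _value(self, operand, arg):
            # an operand may itself be a Placeholder; if so, evaluate it too
            return operand.func(arg) if isinstance(operand, Placeholder) else operand
        def __add__(self, other):
            return Placeholder(lambda arg: self.func(arg) + self._value(other, arg))
        def __pow__(self, other):
            return Placeholder(lambda arg: self.func(arg) ** self._value(other, arg))
        def __lt__(self, other):
            return Placeholder(lambda arg: self.func(arg) < self._value(other, arg))
        # still missing __radd__ and friends, _2, attribute access, ...

    _1 = Placeholder()

    n, primes_seen = 83, [2, 3, 5, 7, 11]
    print(all(n % p for p in takewhile(_1 ** 2 < n, primes_seen)))  # True: 83 is prime
    print((_1 + _1)(21))  # 42: _1 + _1 works now too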

In fact, with some nasty _getframe hackery, you might be able to make this even nicer. (Note that sys._getframe().f_code.co_varnames[1] is 'p'…)

But is this really the answer we want?

Summary

It's obviously far too late to change Python from Haskell-style comprehensions to Clojure-style comprehensions.

There's no way Python will have all the Haskell features that make takewhile good enough as-is, because they would be bad features for a language like Python.

And expression trees don't seem like the right answer.

So, what is the right answer?

Sometimes, the right answer is to just define the function out-of-line:

    def is_candidate(p):
        return p**2 < n
    isprime = all(n % p for p in takewhile(is_candidate, primes_seen))

And PEP 403 might help here:

    @in isprime = all(n % p for p in takewhile(is_candidate, primes_seen))
    def is_candidate(p):
        return p**2 < n

But more often, I think, you just want to split the expression up:

    candidate_primes = takewhile(lambda p: p**2 < n, primes_seen)
    isprime = all(n % p for p in candidate_primes)

Either way, can you really argue that this is less readable than the alternatives available in other languages?

Many experienced programmers—especially those who came from C-family languages or Python 2.x—instinctively rebel against this, because they think there's a cost to storing that intermediate value. But there isn't. That's the beauty of iterators and generators: you're doing the exact same sequence of operations no matter how many times you split it up. And that means you can mix and match the tricks of imperative-style programming with the tricks of functional-declarative-style programming, and use whatever reads best.
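
If you want to convince yourself, here's a tiny demo, with a noisy stand-in for primes_seen, showing that naming the intermediate takewhile iterator computes nothing until all() starts pulling values:

    from itertools import takewhile

    def primes_seen():  # a noisy stand-in, so we can watch when work happens
        for p in [2, 3, 5, 7]:
            print("producing", p)
            yield p

    n = 29
    candidate_primes = takewhile(lambda p: p**2 < n, primes_seen())
    print("nothing has been produced yet")
    print(all(n % p for p in candidate_primes))  # the producing starts here; True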

If you're feeling jealous of other languages, consider that Haskell requires you to write imperative-style code inside a monad with completely different syntax, while C++ requires you to write functional-style code in a completely different language that runs at compile time, only operates on integers and types, and has possibly the worst syntax anyone has ever come up with for a Turing-complete language without intentionally trying.

It's been more than a decade since Typical Programmer Greg Jorgensen taught the world about Abject-Oriented Programming.

Much of what he said still applies, but other things have changed. Languages in the Abject-Oriented space have been borrowing ideas from another paradigm entirely—and then everyone realized that languages like Python, Ruby, and JavaScript had been doing it for years and just hadn't noticed (because these languages do not require you to declare what you're doing, or even to know what you're doing). Meanwhile, new hybrid languages borrow freely from both paradigms.

This other paradigm—which is actually older, but was largely constrained to university basements until recent years—is called Functional Addiction.

A Functional Addict is someone who regularly gets higher-order—sometimes they may even exhibit dependent types—but still manages to retain a job.

Retaining a job is of course the goal of all programming. This is why some of these new hybrid languages, like Rust, check all borrowing, from both paradigms, so extensively that you can make regular progress for months without ever successfully compiling your code, and your managers will appreciate that progress. After all, once it does compile, it will definitely work.

Closures

It's long been known that Closures are dual to Encapsulation.

As Abject-Oriented Programming explained, Encapsulation involves making all of your variables public, and ideally global, to let the rest of the code decide what should and shouldn't be private.

Closures, by contrast, are a way of referring to variables from outer scopes. And there is no scope more outer than global.

Immutability

One of the reasons Functional Addiction has become popular in recent years is that to truly take advantage of multi-core systems, you need immutable data, sometimes also called persistent data.

Instead of mutating a function to fix a bug, you should always make a new copy of that function. For example:

    function getCustName(custID)
    {
        custRec = readFromDB("customer", custID);
        fullname = custRec[1] + ' ' + custRec[2];
        return fullname;
    }

When you discover that you actually wanted fields 2 and 3 rather than 1 and 2, it might be tempting to mutate the state of this function. But doing so is dangerous. The right answer is to make a copy, and then try to remember to use the copy instead of the original:

    function getCustName(custID)
    {
        custRec = readFromDB("customer", custID);
        fullname = custRec[1] + ' ' + custRec[2];
        return fullname;
    }

    function getCustName2(custID)
    {
        custRec = readFromDB("customer", custID);
        fullname = custRec[2] + ' ' + custRec[3];
        return fullname;
    }

This means anyone still using the original function can continue to reference the old code, but as soon as it's no longer needed, it will be automatically garbage collected. (Automatic garbage collection isn't free, but it can be outsourced cheaply.)

Higher-Order Functions

In traditional Abject-Oriented Programming, you are required to give each function a name. But over time, the name of the function may drift away from what it actually does, making it as misleading as comments. Experience has shown that people will only keep one copy of their information up to date, and the CHANGES.TXT file is the right place for that.

Higher-Order Functions can solve this problem:

    Functions = [
        lambda(custID) {
            custRec = readFromDB("customer", custID);
            fullname = custRec[1] + ' ' + custRec[2];
            return fullname;
        },
        lambda(custID) {
            custRec = readFromDB("customer", custID);
            fullname = custRec[2] + ' ' + custRec[3];
            return fullname;
        },
    ]

Now you can refer to these functions by order, so there's no need for names.

Parametric Polymorphism

Traditional languages offer Abject-Oriented Polymorphism and Ad-Hoc Polymorphism (also known as Overloading), but better languages also offer Parametric Polymorphism.

The key to Parametric Polymorphism is that the type of the output can be determined from the type of the inputs via Algebra. For example:

    function getCustData(custId, x)
    {
        if (x == int(x)) {
            custRec = readFromDB("customer", custId);
            fullname = custRec[1] + ' ' + custRec[2];
            return int(fullname);
        } else if (x.real == 0) {
            custRec = readFromDB("customer", custId);
            fullname = custRec[1] + ' ' + custRec[2];
            return double(fullname);
        } else {
            custRec = readFromDB("customer", custId);
            fullname = custRec[1] + ' ' + custRec[2];
            return complex(fullname);
        }
    }

Notice that we've called the variable x. This is how you know you're using Algebraic Data Types. The names y, z, and sometimes w are also Algebraic.

Type Inference

Languages that enable Functional Addiction often feature Type Inference. This means that the compiler can infer your typing without you having to be explicit:

    function getCustName(custID)
    {
        // WARNING: Make sure the DB is locked here or
        custRec = readFromDB("customer", custID);
        fullname = custRec[1] + ' ' + custRec[2];
        return fullname;
    }

We didn't specify what will happen if the DB is not locked. And that's fine, because the compiler will figure it out and insert code that corrupts the data, without us needing to tell it to!

By contrast, most Abject-Oriented languages are either nominally typed—meaning that you give names to all of your types instead of meanings—or dynamically typed—meaning that your variables are all unique individuals that can accomplish anything if they try.

Memoization

Memoization means caching the results of a function call:

    function getCustName(custID)
    {
        if (custID == 3) { return "John Smith"; }
        custRec = readFromDB("customer", custID);
        fullname = custRec[1] + ' ' + custRec[2];
        return fullname;
    }

Non-Strictness

Non-Strictness is often confused with Laziness, but in fact Laziness is just one kind of Non-Strictness. Here's an example that compares two different forms of Non-Strictness:

    /****************************************
    *
    * TO DO:
    *
    * get tax rate for the customer state
    * eventually from some table
    *
    ****************************************/
    // function lazyTaxRate(custId) {}

    function callByNameTaxRate(custId)
    {
        /****************************************
        *
        * TO DO:
        *
        * get tax rate for the customer state
        * eventually from some table
        *
        ****************************************/
    }

Both are Non-Strict, but the second one forces the compiler to actually compile the function just so we can Call it By Name. This causes code bloat. The Lazy version will be smaller and faster. Plus, Lazy programming allows us to create infinite recursion without making the program hang:

    /****************************************
    *
    * TO DO:
    *
    * get tax rate for the customer state
    * eventually from some table
    *
    ****************************************/
    // function lazyTaxRateRecursive(custId) { lazyTaxRateRecursive(custId); }

Laziness is often combined with Memoization:

    function getCustName(custID)
    {
        // if (custID == 3) { return "John Smith"; }
        custRec = readFromDB("customer", custID);
        fullname = custRec[1] + ' ' + custRec[2];
        return fullname;
    }

Outside the world of Functional Addicts, this same technique is often called Test-Driven Development. If enough tests can be embedded in the code to achieve 100% coverage, or at least a decent amount, your code is guaranteed to be safe. But because the tests are not compiled and executed in the normal run, or indeed ever, they don't affect performance or correctness.

Conclusion

Many people claim that the days of Abject-Oriented Programming are over. But this is pure hype. Functional Addiction and Abject Orientation are not actually at odds with each other, but instead complement each other.