If you look at the major changes in Python 3, other than the Unicode stuff, most of them are about replacing list-based code with iterator-based code. When you run 2to3 on your code, every map(x, seq) or d.items() gets turned into list(map(x, seq)) or list(d.items()), and if you want to write dual-version code you have to use things like six.moves.map(x, seq) or list(six.iteritems(d)).

So, why?

Often, you don't really need a list, you just need something you can iterate over. In the case of d.items(), the Py3 version can just use a view of the dictionary and not have to copy anything. In the case of map(), you do end up with the same amount of copying, but you don't have to build the list all at once, so there's less memory allocation going on—and, if you don't consume the whole list, you don't pay for the whole thing.
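For a quick illustration (the dict and the billion-element range here are made up for the example):

d = {'a': 1, 'b': 2}
items = d.items()         # a live view, not a copy
d['c'] = 3
print(('c', 3) in items)  # True: the view sees the new entry

squares = map(lambda n: n * n, range(10**9))
print(next(squares))      # 0, computed on demand; no giant list is built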

But that's not the point. The point is that if you combine iterator-transforming functions, it's semantically different from combining sequence-transforming functions. Your code ends up interleaved into a pipeline, working element-by-element. Some things that would be hard to express, or even impossible, now become trivial.

When Haskell programmers want to prove that their language is cooler than yours without using the word "monad" (which, they've learned, just gives people headaches and makes them stop listening), they use examples from math, and show how easy they are to transform into Haskell. Let's pick one of their favorites: Computing the first n primes. Here's how to do it:

import itertools

def primes():
    numbers = itertools.count(2)
    while True:
        prime = next(numbers)
        yield prime
        def relatively_prime(n, prime=prime):
            return n % prime
        numbers = filter(relatively_prime, numbers)

first_100_primes = list(itertools.islice(primes(), 100))

That's pretty close to the way you'd describe it mathematically. The problem with coding it this way is that you're starting with an infinite list, and at each step you're filtering it into another infinite list, which takes infinite time. The fact that you're later going to throw away all but the first 100 doesn't help.

But if you're dealing with iterators instead of lists, it _does_ help. You start with an infinite iterator. At each step, you're adding a finite filter to the future values of that iterator. So, when you generate the first 100 primes, you only end up stepping through the first 540 numbers in the infinite list, and applying a finite number of filters to each one.
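You can check that 540 figure by instrumenting the source (counted_integers is just instrumentation added for this post; primes() is the generator from above, pointed at the instrumented source):

import itertools

pulled = [0]  # how many values the pipeline draws from the infinite source

def counted_integers():
    for n in itertools.count(2):
        pulled[0] += 1
        yield n

def primes():
    numbers = counted_integers()
    while True:
        prime = next(numbers)
        yield prime
        def relatively_prime(n, prime=prime):
            return n % prime
        numbers = filter(relatively_prime, numbers)

first_100 = list(itertools.islice(primes(), 100))
print(first_100[-1], pulled[0])  # 541 540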

Think about how you'd have to express that if you didn't have iterator-transforming pipelines. Something like this:

first_100_primes = []
hopefully_big_enough = 1000
hopefully_enough_numbers = list(range(2, hopefully_big_enough))
while len(first_100_primes) < 100:
    prime = hopefully_enough_numbers.pop(0)
    first_100_primes.append(prime)
    for i, n in reversed(list(enumerate(hopefully_enough_numbers))):
        if n % prime == 0:
            del hopefully_enough_numbers[i]
    while not hopefully_enough_numbers:
        hopefully_enough_numbers = list(range(hopefully_big_enough, hopefully_big_enough + 1000))
        hopefully_big_enough += 1000
        for prime in first_100_primes:
            for i, n in reversed(list(enumerate(hopefully_enough_numbers))):
                if n % prime == 0:
                    del hopefully_enough_numbers[i]

But obviously, you wouldn't do it like that; you'd manually simulate the infinite list, and manually pipeline all of the filters:

def relatively_prime(n, primes):
    for prime in primes:
        if n % prime == 0:
            return False
    return True

def get_next_relatively_prime(n, primes):
    while True:
        n += 1
        if relatively_prime(n, primes):
            return n

first_100_primes = []
next_number = 2
while len(first_100_primes) < 100:
    first_100_primes.append(next_number)
    next_number = get_next_relatively_prime(next_number, first_100_primes)

The Haskell-y version is obviously a lot easier to read. Plus, it lets you laugh at the smug Haskell types by proving that your language can do everything theirs can, at which point they have to resort to talking about the rich type system and you win, because of Guido's Law, which is like Godwin's Law, but with monads in place of Hitler. (I know, anyone with dual PhDs in type theory and history knows that Hitler is more like an applicative than a monad, but just as practicality beats purity, a heavy klomp beats a PhD.)

But Haskellites aren't the only smug types out there. Coroutines are cool too, and anyone who's still programming with those old-fashioned subroutines is so last century. Python generators give you all the benefits of coroutines, but with a syntax that lets you ignore that fact whenever it isn't important.

And if you think about your iterator-transforming pipeline, that's exactly what you're doing. At each step, each filter suspends and hands off to the next filter, until you get to the end of the chain and come back to the loop. You create a new coroutine and add it to the chain just by writing "numbers = filter(relatively_prime, numbers)". Who yields to whom never needs to be specified, not because of some deep magic, but because it's obvious.
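To see that, notice that filter() itself could have been written as a plain generator. Here's a hand-rolled equivalent (a sketch, not how the builtin is actually implemented):

def filtered(predicate, source):
    for n in source:
        if predicate(n):
            # suspend here; resume when the next stage asks for a value
            yield n

Every stage of the pipeline is one of these, frozen in the middle of its loop until someone downstream calls next() on it.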

Of course we all know that generators aren't _really_ coroutines. But as of PEP 342 (Python 2.5, 2005), Python generators aren't just generators. Academics continue to refer to "generators" as "what Python had before 2.5", meaning you have to say "enhanced generators" or "PEP 342 generators" to mean what every Python user just calls "generators". And "enhanced generators" _are_ coroutines. But they _still_ have that nice sequential-subroutine-style syntax as generators. You still don't have to think about them as coroutines when it isn't relevant, but you can easily do so when it is. (The lack of "yield from" meant you sometimes didn't get that advantage, but that was fixed with PEP 380 in 3.3 in 2012.)
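Here's a minimal example of an enhanced generator driven as a coroutine (running_average is a made-up example):

def running_average():
    total, count = 0.0, 0
    average = None
    while True:
        # suspend, hand back the current average, resume with a sent value
        value = yield average
        total += value
        count += 1
        average = total / count

avg = running_average()
next(avg)            # prime it, advancing to the first yield
print(avg.send(10))  # 10.0
print(avg.send(20))  # 15.0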

And that leads to even cooler things. The two obvious ways to write async I/O are with threads (green/micro or otherwise), which bring all the synchronization problems threads always bring, or with callbacks, which force you to think about the flow of control backward in complex cases and lead to callback hell even in simple cases. You can improve things a little by using atomic data structures like queue.Queue for all inter-thread communication, or by using promises (deferreds) to chain callbacks, but you're still adding a lot of mental burden on top of the equivalent synchronous code. With explicit coroutines as in PEP 3156 (asyncio), you don't have any of that burden. The synchronous semantics translate directly into asynchronous semantics just by putting "yield from" at every suspension point, and you're done. The resulting code is clearly readable and easily writable.
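Here's a minimal sketch in the PEP 3156 style of the day, before async/await syntax existed (delayed_greeting is a made-up example; asyncio.coroutine was the spelling then, later superseded by async def):

import asyncio

@asyncio.coroutine
def delayed_greeting(name):
    # "yield from" marks the one explicit suspension point
    yield from asyncio.sleep(0.1)
    return 'hello, ' + name

loop = asyncio.get_event_loop()
print(loop.run_until_complete(delayed_greeting('world')))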

It's been more than a decade since Typical Programmer Greg Jorgensen taught the world about Abject-Oriented Programming.

Much of what he said still applies, but other things have changed. Languages in the Abject-Oriented space have been borrowing ideas from another paradigm entirely—and then everyone realized that languages like Python, Ruby, and JavaScript had been doing it for years and just hadn't noticed (because these languages do not require you to declare what you're doing, or even to know what you're doing). Meanwhile, new hybrid languages borrow freely from both paradigms.

This other paradigm—which is actually older, but was largely constrained to university basements until recent years—is called Functional Addiction.

A Functional Addict is someone who regularly gets higher-order—sometimes they may even exhibit dependent types—but still manages to retain a job.

Retaining a job is of course the goal of all programming. This is why some of these new hybrid languages, like Rust, check all borrowing, from both paradigms, so extensively that you can make regular progress for months without ever successfully compiling your code, and your managers will appreciate that progress. After all, once it does compile, it will definitely work.

Closures

It's long been known that Closures are dual to Encapsulation.

As Abject-Oriented Programming explained, Encapsulation involves making all of your variables public, and ideally global, to let the rest of the code decide what should and shouldn't be private.

Closures, by contrast, are a way of referring to variables from outer scopes. And there is no scope more outer than global.

Immutability

One of the reasons Functional Addiction has become popular in recent years is that to truly take advantage of multi-core systems, you need immutable data, sometimes also called persistent data.

Instead of mutating a function to fix a bug, you should always make a new copy of that function. For example:

function getCustName(custID)
{
    custRec = readFromDB("customer", custID);
    fullname = custRec[1] + ' ' + custRec[2];
    return fullname;
}

When you discover that you actually wanted fields 2 and 3 rather than 1 and 2, it might be tempting to mutate the state of this function. But doing so is dangerous. The right answer is to make a copy, and then try to remember to use the copy instead of the original:

function getCustName(custID)
{
    custRec = readFromDB("customer", custID);
    fullname = custRec[1] + ' ' + custRec[2];
    return fullname;
}

function getCustName2(custID)
{
    custRec = readFromDB("customer", custID);
    fullname = custRec[2] + ' ' + custRec[3];
    return fullname;
}

This means anyone still using the original function can continue to reference the old code, but as soon as it's no longer needed, it will be automatically garbage collected. (Automatic garbage collection isn't free, but it can be outsourced cheaply.)

Higher-Order Functions

In traditional Abject-Oriented Programming, you are required to give each function a name. But over time, the name of the function may drift away from what it actually does, making it as misleading as comments. Experience has shown that people will only keep one copy of their information up to date, and the CHANGES.TXT file is the right place for that.

Higher-Order Functions can solve this problem:

Functions = [
    lambda(custID) {
        custRec = readFromDB("customer", custID);
        fullname = custRec[1] + ' ' + custRec[2];
        return fullname;
    },
    lambda(custID) {
        custRec = readFromDB("customer", custID);
        fullname = custRec[2] + ' ' + custRec[3];
        return fullname;
    },
]

Now you can refer to these functions by order, so there's no need for names.

Parametric Polymorphism

Traditional languages offer Abject-Oriented Polymorphism and Ad-Hoc Polymorphism (also known as Overloading), but better languages also offer Parametric Polymorphism.

The key to Parametric Polymorphism is that the type of the output can be determined from the type of the inputs via Algebra. For example:

function getCustData(custId, x)
{
    if (x == int(x)) {
        custRec = readFromDB("customer", custId);
        fullname = custRec[1] + ' ' + custRec[2];
        return int(fullname);
    } else if (x.real == 0) {
        custRec = readFromDB("customer", custId);
        fullname = custRec[1] + ' ' + custRec[2];
        return double(fullname);
    } else {
        custRec = readFromDB("customer", custId);
        fullname = custRec[1] + ' ' + custRec[2];
        return complex(fullname);
    }
}

Notice that we've called the variable x. This is how you know you're using Algebraic Data Types. The names y, z, and sometimes w are also Algebraic.

Type Inference

Languages that enable Functional Addiction often feature Type Inference. This means that the compiler can infer your typing without you having to be explicit:


function getCustName(custID)
{
    // WARNING: Make sure the DB is locked here or
    custRec = readFromDB("customer", custID);
    fullname = custRec[1] + ' ' + custRec[2];
    return fullname;
}

We didn't specify what will happen if the DB is not locked. And that's fine, because the compiler will figure it out and insert code that corrupts the data, without us needing to tell it to!

By contrast, most Abject-Oriented languages are either nominally typed—meaning that you give names to all of your types instead of meanings—or dynamically typed—meaning that your variables are all unique individuals that can accomplish anything if they try.

Memoization

Memoization means caching the results of a function call:

function getCustName(custID)
{
    if (custID == 3) { return "John Smith"; }
    custRec = readFromDB("customer", custID);
    fullname = custRec[1] + ' ' + custRec[2];
    return fullname;
}

Non-Strictness

Non-Strictness is often confused with Laziness, but in fact Laziness is just one kind of Non-Strictness. Here's an example that compares two different forms of Non-Strictness:

/****************************************
*
* TO DO:
*
* get tax rate for the customer state
* eventually from some table
*
****************************************/
// function lazyTaxRate(custId) {}

function callByNameTaxRate(custId)
{
    /****************************************
    *
    * TO DO:
    *
    * get tax rate for the customer state
    * eventually from some table
    *
    ****************************************/
}

Both are Non-Strict, but the second one forces the compiler to actually compile the function just so we can Call it By Name. This causes code bloat. The Lazy version will be smaller and faster. Plus, Lazy programming allows us to create infinite recursion without making the program hang:

/****************************************
*
* TO DO:
*
* get tax rate for the customer state
* eventually from some table
*
****************************************/
// function lazyTaxRateRecursive(custId) { lazyTaxRateRecursive(custId); }

Laziness is often combined with Memoization:

function getCustName(custID)
{
    // if (custID == 3) { return "John Smith"; }
    custRec = readFromDB("customer", custID);
    fullname = custRec[1] + ' ' + custRec[2];
    return fullname;
}

Outside the world of Functional Addicts, this same technique is often called Test-Driven Development. If enough tests can be embedded in the code to achieve 100% coverage, or at least a decent amount, your code is guaranteed to be safe. But because the tests are not compiled and executed in the normal run, or indeed ever, they don't affect performance or correctness.

Conclusion

Many people claim that the days of Abject-Oriented Programming are over. But this is pure hype. Functional Addiction and Abject Orientation are not actually at odds with each other, but instead complement each other.