The problem

Often, using an iterator lazily is better than generating a sequence (like the one you get from a list comprehension). For example, compare these two scripts:
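The two scripts aren't reproduced here, but the comparison was presumably along these lines (a sketch, with io.StringIO standing in for a real file and .upper() standing in for the real per-line work):

```python
import io

# First script: build the whole sequence up front with a list comprehension.
# The entire "file" is read before any work happens.
f = io.StringIO('abc\ndef\n')         # stands in for a real open file
lines = [line.rstrip() for line in f]
results1 = [line.upper() for line in lines]

# Second script: iterate lazily, handling each line as it's read.
f = io.StringIO('abc\ndef\n')
results2 = []
for line in f:
    results2.append(line.rstrip().upper())

print(results1, results2)             # same results either way
```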

The first one has to read the entire file into memory, which can be a problem for huge files, but usually you don't have files that big.

A more serious problem is that it has to read the entire file before it can do any work. Even with moderately-sized files, the corresponding delay at startup can make debugging more painful (each run takes seconds instead of milliseconds before you can see whether the first results are correct). And it can even have a significant performance cost (by preventing the file reads from interleaving with the network reads, which takes away caching opportunities from the OS/filesystem/drive).

The problem with the second one is that you can only iterate a file once. If you try to iterate it a second time, it'll be empty (because you've already iterated the whole thing). So, the following code gives you 2 instead of 4:
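The example code isn't shown here, but it was presumably counting the lines of a two-line file twice, something like this (with io.StringIO standing in for the file):

```python
import io

f = io.StringIO('abc\ndef\n')         # a two-line "file"
count = len(list(f)) + len(list(f))   # the second pass finds nothing left
print(count)                          # 2, not 4
```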

Of course little of this is specific to files. If you have a generator that requires a lot of CPU work to run, running it to completion before you get started causes the same startup delay, and can prevent pipelining of work, which can have a huge cost in CPU cache misses. But just leaving it as a generator means you can't iterate through it repeatedly; every pass after the first will get nothing.

The solution

So, is there a way to get an iterable that's lazy the first time you run through it like an iterator, but restartable like a sequence?

Well, you could build a complete "lazy sequence" class, but that's a lot of work, with some fiddly edge cases to deal with in order to handle the full interface properly (including things like indexing and slicing with negative values).

Fortunately, you don't need the full interface. You need __next__ to store the values as you create them, and __iter__ to give you a new iterator that shares the same storage.

The easy way

As it turns out, that's exactly what itertools.tee does:
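A quick demonstration (again using io.StringIO as the file):

```python
import io
import itertools

f = io.StringIO('abc\ndef\n')
f, t = itertools.tee(f)      # t iterates lazily; f keeps a copy of what t consumes
first = list(t)
f, t = itertools.tee(f)      # tee again off the saved copy to restart
second = list(t)
print(first, second)         # both are ['abc\n', 'def\n']
```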
The problem is that it's tedious and error-prone to have to call tee explicitly each time you want to iterate. But you can easily wrap this up:
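A minimal version of that wrapper: its __iter__ just tees off a fresh iterator each time, keeping the other one (and its buffer of already-seen values) for the next pass.

```python
import itertools

class Reiterable(object):
    def __init__(self, iterable):
        self.iterable = iterable
    def __iter__(self):
        # keep one tee for next time, hand the other to the caller
        self.iterable, t = itertools.tee(self.iterable)
        return t

squares = Reiterable(i*i for i in range(4))
first = list(squares)        # [0, 1, 4, 9]
second = list(squares)       # [0, 1, 4, 9] again, unlike a bare generator
print(first, second)
```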
Now Reiterable is as easy to use as list, and gives you the benefit you care about (being able to iterate the values repeatedly) without the cost (iterating the entire thing up front).

Performance

The documentation shows you how tee is implemented. And if you don't understand it, it's probably worth copying the pure-Python implementation from the docs and stepping through what it does.
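From memory, the pure-Python equivalent in the docs is roughly the following (the real version in the docs may differ in detail):

```python
import collections

def tee(iterable, n=2):
    it = iter(iterable)
    deques = [collections.deque() for _ in range(n)]
    def gen(mydeque):
        while True:
            if not mydeque:              # this local deque is empty
                try:
                    newval = next(it)    # fetch a new value and
                except StopIteration:
                    return
                for d in deques:         # load it to all the deques
                    d.append(newval)
            yield mydeque.popleft()
    return tuple(gen(d) for d in deques)

a, b = tee(range(3))
first, second = list(a), list(b)
print(first, second)
```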

But the basic idea is this: Each time you call tee, it creates two new deques and two new generators, both tied to the original iterator. Whenever either generator needs a value that wasn't yet produced, it's taken from the original iterator and added to both deques, and then of course immediately popped from one.

So, the first time through Reiterable, it iterates the values on demand, and copies each value to the spare generator's deque. Each subsequent time, it's doing the same, but from an iterator over the spare deque instead of from the original iterator. So the values get moved from one deque to the next, with no wasted space and very little wasted time, right?

Well, not quite. This is hard to see with the C implementation of tee, or even the generator-based implementation given in the docs, but if you build a class-based implementation, you can see what's going on. Unfortunately, the class implementation seems to break the online interactive visualizer, so you'll need to copy the code below and run it locally:

    import collections
    import io
    import itertools

    def tee(iterable, n=2):
        class gen(object):
            it = iter(iterable)
            deques = [collections.deque() for i in range(n)]
            def __init__(self, d):
                self.d = d
            def __iter__(self):
                return self
            def __next__(self):
                if not self.d:              # when the local deque is empty
                    newval = next(gen.it)      # fetch a new value and
                    for d in gen.deques:        # load it to all the deques
                        d.append(newval)
                return self.d.popleft()
        return tuple(gen(d) for d in gen.deques)

    class Reiterable(object):
        def __init__(self, iterable):
            self.iterable = iterable
        def __iter__(self):
            # use the class-based tee above, so the chain stays inspectable
            self.iterable, t = tee(self.iterable)
            return t

    f = io.StringIO('abc\ndef\n')
    f = Reiterable(f)
    for i in range(3):
        list(f)
    # three passes build a chain of three tees; the original file
    # object is still sitting at the bottom of it
    print(f.iterable.it.it.it)

Algorithmic analysis

Reiterable is building up a chain of tee objects. The values get moved from one deque to the next, so all but the highest deque are empty, but each link in the chain still costs an empty deque and a tee wrapper object. Each value iterated is just moved from the highest deque on the chain to the new deque, so the wasted time per iteration step is minimal; but when you run out of values, it has to run through the whole chain to discover that all the deques are empty before the new iterator can be declared exhausted.

So, to iterate N items M times, instead of wasting N space to hold a copy of the iterable, you're wasting N+M space to hold a copy of the iterable and a chain of M empty tees and deques. And instead of NM time for the iteration, it's NM+M²/2 time for the iteration plus the extra empty checks (which is still O(NM), of course).

So, there's no algorithmic cost, except in edge cases when M >> N, which is a very strange use case. (If N is tiny, you really should just use a list; if M is gigantic, that almost always means you're doing a nested iteration that you can just flip over.)

Real-life performance

The real cost is the added overhead of having to go through the tee's generator for each value instead of just going through a list iterator. Which you can time pretty easily, so let's try it:

    In [66]: def func():
        ...:     f = (i for i in range(1000000))
        ...:     sum(f)
    In [67]: %timeit func()
    10 loops, best of 3: 73.2 ms per loop

    In [68]: def func():
        ...:     f = (i for i in range(1000000))
        ...:     sum(list(f))
    In [69]: %timeit func()
    10 loops, best of 3: 101 ms per loop

    In [70]: def func():
        ...:     f = (i for i in range(1000000))
        ...:     sum(Reiterable(f))
    In [71]: %timeit func()
    10 loops, best of 3: 108 ms per loop

So, there is an additional performance cost to building a tee out of an iterator vs. building a list… but the overhead over bare iteration (34.8ms vs. 27.8ms) is only about 25% higher.

It's been more than a decade since Typical Programmer Greg Jorgensen taught the world about Abject-Oriented Programming.

Much of what he said still applies, but other things have changed. Languages in the Abject-Oriented space have been borrowing ideas from another paradigm entirely—and then everyone realized that languages like Python, Ruby, and JavaScript had been doing it for years and just hadn't noticed (because these languages do not require you to declare what you're doing, or even to know what you're doing). Meanwhile, new hybrid languages borrow freely from both paradigms.

This other paradigm—which is actually older, but was largely constrained to university basements until recent years—is called Functional Addiction.

A Functional Addict is someone who regularly gets higher-order—sometimes they may even exhibit dependent types—but still manages to retain a job.

Retaining a job is of course the goal of all programming. This is why some of these new hybrid languages, like Rust, check all borrowing, from both paradigms, so extensively that you can make regular progress for months without ever successfully compiling your code, and your managers will appreciate that progress. After all, once it does compile, it will definitely work.

Closures

It's long been known that Closures are dual to Encapsulation.

As Abject-Oriented Programming explained, Encapsulation involves making all of your variables public, and ideally global, to let the rest of the code decide what should and shouldn't be private.

Closures, by contrast, are a way of referring to variables from outer scopes. And there is no scope more outer than global.

Immutability

One of the reasons Functional Addiction has become popular in recent years is that to truly take advantage of multi-core systems, you need immutable data, sometimes also called persistent data.

Instead of mutating a function to fix a bug, you should always make a new copy of that function. For example:

function getCustName(custID)
{
    custRec = readFromDB("customer", custID);
    fullname = custRec[1] + ' ' + custRec[2];
    return fullname;
}

When you discover that you actually wanted fields 2 and 3 rather than 1 and 2, it might be tempting to mutate the state of this function. But doing so is dangerous. The right answer is to make a copy, and then try to remember to use the copy instead of the original:

function getCustName(custID)
{
    custRec = readFromDB("customer", custID);
    fullname = custRec[1] + ' ' + custRec[2];
    return fullname;
}

function getCustName2(custID)
{
    custRec = readFromDB("customer", custID);
    fullname = custRec[2] + ' ' + custRec[3];
    return fullname;
}

This means anyone still using the original function can continue to reference the old code, but as soon as it's no longer needed, it will be automatically garbage collected. (Automatic garbage collection isn't free, but it can be outsourced cheaply.)

Higher-Order Functions

In traditional Abject-Oriented Programming, you are required to give each function a name. But over time, the name of the function may drift away from what it actually does, making it as misleading as comments. Experience has shown that people will only keep one copy of their information up to date, and the CHANGES.TXT file is the right place for that.

Higher-Order Functions can solve this problem:

function []Functions = [
    lambda(custID) {
        custRec = readFromDB("customer", custID);
        fullname = custRec[1] + ' ' + custRec[2];
        return fullname;
    },
    lambda(custID) {
        custRec = readFromDB("customer", custID);
        fullname = custRec[2] + ' ' + custRec[3];
        return fullname;
    },
]

Now you can refer to these functions by order, so there's no need for names.

Parametric Polymorphism

Traditional languages offer Abject-Oriented Polymorphism and Ad-Hoc Polymorphism (also known as Overloading), but better languages also offer Parametric Polymorphism.

The key to Parametric Polymorphism is that the type of the output can be determined from the type of the inputs via Algebra. For example:

function getCustData(custId, x)
{
    if (x == int(x)) {
        custRec = readFromDB("customer", custId);
        fullname = custRec[1] + ' ' + custRec[2];
        return int(fullname);
    } else if (x.real == 0) {
        custRec = readFromDB("customer", custId);
        fullname = custRec[1] + ' ' + custRec[2];
        return double(fullname);
    } else {
        custRec = readFromDB("customer", custId);
        fullname = custRec[1] + ' ' + custRec[2];
        return complex(fullname);
    }
}

Notice that we've called the variable x. This is how you know you're using Algebraic Data Types. The names y, z, and sometimes w are also Algebraic.

Type Inference

Languages that enable Functional Addiction often feature Type Inference. This means that the compiler can infer your typing without you having to be explicit:


function getCustName(custID)
{
    // WARNING: Make sure the DB is locked here or
    custRec = readFromDB("customer", custID);
    fullname = custRec[1] + ' ' + custRec[2];
    return fullname;
}

We didn't specify what will happen if the DB is not locked. And that's fine, because the compiler will figure it out and insert code that corrupts the data, without us needing to tell it to!

By contrast, most Abject-Oriented languages are either nominally typed—meaning that you give names to all of your types instead of meanings—or dynamically typed—meaning that your variables are all unique individuals that can accomplish anything if they try.

Memoization

Memoization means caching the results of a function call:

function getCustName(custID)
{
    if (custID == 3) { return "John Smith"; }
    custRec = readFromDB("customer", custID);
    fullname = custRec[1] + ' ' + custRec[2];
    return fullname;
}

Non-Strictness

Non-Strictness is often confused with Laziness, but in fact Laziness is just one kind of Non-Strictness. Here's an example that compares two different forms of Non-Strictness:

/****************************************
*
* TO DO:
*
* get tax rate for the customer state
* eventually from some table
*
****************************************/
// function lazyTaxRate(custId) {}

function callByNameTaxRate(custId)
{
    /****************************************
    *
    * TO DO:
    *
    * get tax rate for the customer state
    * eventually from some table
    *
    ****************************************/
}

Both are Non-Strict, but the second one forces the compiler to actually compile the function just so we can Call it By Name. This causes code bloat. The Lazy version will be smaller and faster. Plus, Lazy programming allows us to create infinite recursion without making the program hang:

/****************************************
*
* TO DO:
*
* get tax rate for the customer state
* eventually from some table
*
****************************************/
// function lazyTaxRateRecursive(custId) { lazyTaxRateRecursive(custId); }

Laziness is often combined with Memoization:

function getCustName(custID)
{
    // if (custID == 3) { return "John Smith"; }
    custRec = readFromDB("customer", custID);
    fullname = custRec[1] + ' ' + custRec[2];
    return fullname;
}

Outside the world of Functional Addicts, this same technique is often called Test-Driven Development. If enough tests can be embedded in the code to achieve 100% coverage, or at least a decent amount, your code is guaranteed to be safe. But because the tests are not compiled and executed in the normal run, or indeed ever, they don't affect performance or correctness.

Conclusion

Many people claim that the days of Abject-Oriented Programming are over. But this is pure hype. Functional Addiction and Abject Orientation are not actually at odds with each other, but instead complement each other.