The problem

Often, using an iterator lazily is better than generating a sequence (like the one you get from a list comprehension). For example, compare these two scripts:
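Something like these, say (a minimal sketch; huge.log and process are stand-ins):

    # Script 1: builds the whole list before doing anything
    with open('huge.log') as f:
        lines = [line for line in f]
    for line in lines:
        process(line)

    # Script 2: iterates the file lazily, one line at a time
    with open('huge.log') as f:
        for line in f:
            process(line)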

The first one has to read the entire file into memory, which can be a problem for huge files, but usually you don't have files that big.

A more serious problem is that it has to read the entire file before it can do any work. Even with moderately-sized files, the corresponding delay at startup can make debugging more painful (each run takes seconds instead of milliseconds before you can see whether the first results are correct). And it can even have a significant performance cost (by preventing the file reads from interleaving with the network reads, which takes away caching opportunities from the OS/filesystem/drive).

The problem with the second one is that you can only iterate a file once. If you try to iterate it a second time, it'll be empty (because you've already iterated the whole thing). So, the following code gives you 2 instead of 4:
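A sketch of the idea (with StringIO standing in for a two-line file, so it's easy to run):

    import io

    f = io.StringIO('abc\ndef\n')
    print(len(list(f)) + len(list(f)))  # 2, not 4: the second list(f) is empty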

Of course little of this is specific to files. If you have a generator that requires a lot of CPU work to run, running it to completion before you get started causes the same startup delay, and can prevent pipelining of work, which can have a huge cost in CPU cache misses. But just leaving it as a generator means you can't iterate through it repeatedly; every pass after the first will give you nothing.

The solution

So, is there a way to get an iterable that's lazy the first time you run through it like an iterator, but restartable like a sequence?

Well, you could build a complete "lazy sequence" class, but that's a lot of work, with some fiddly edge cases to deal with in order to handle the full interface properly (including things like indexing and slicing with negative values).

Fortunately, you don't need the full interface. You need __next__ to store the values as you create them, and __iter__ to give you a new iterator that shares the same storage.

The easy way

As it turns out, that's exactly what itertools.tee does:
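A minimal sketch (StringIO standing in for a real file):

    import io, itertools

    f = io.StringIO('abc\ndef\n')
    f, t = itertools.tee(f)
    print(list(t))   # ['abc\n', 'def\n']
    f, t = itertools.tee(f)
    print(list(t))   # ['abc\n', 'def\n'] again; f is still usable next time
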
The problem is that it's tedious and error-prone to have to call tee explicitly each time you want to iterate. But you can easily wrap this up:
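For instance, a minimal wrapper (essentially the same Reiterable used in the demo below):

    import itertools

    class Reiterable(object):
        def __init__(self, iterable):
            self.iterable = iterable
        def __iter__(self):
            # each time someone iterates, re-tee: keep one branch
            # for next time, hand out the other
            self.iterable, t = itertools.tee(self.iterable)
            return t
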
Now Reiterable is as easy to use as list, and gives you the benefit you care about (being able to iterate the values repeatedly) without the cost (iterating the entire thing up front).

Performance

The documentation shows you how tee is implemented. And if you don't understand it, it's probably worth copying the pure-Python implementation from the docs and stepping through what it does.

But the basic idea is this: Each time you call tee, it creates two new deques and two new generators, both tied to the original iterator. Whenever either generator needs a value that wasn't yet produced, it's taken from the original iterator and added to both deques, and then of course immediately popped from one.
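The docs' pure-Python version looks roughly like this (a paraphrase; see the itertools docs for the exact code):

    import collections

    def tee(iterable, n=2):
        it = iter(iterable)
        deques = [collections.deque() for i in range(n)]
        def gen(mydeque):
            while True:
                if not mydeque:              # when the local deque is empty
                    try:
                        newval = next(it)    # fetch a new value and
                    except StopIteration:
                        return
                    for d in deques:         # load it to all the deques
                        d.append(newval)
                yield mydeque.popleft()
        return tuple(gen(d) for d in deques)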

So, the first time through Reiterable, it iterates the values on demand, and copies each value to the spare generator's deque. Each subsequent time, it's doing the same, but from an iterator over the spare deque instead of from the original iterator. So the values get moved from one deque to the next, with no wasted space and very little wasted time, right?

Well, not quite. This is hard to see with the C implementation of tee, or even the generator-based implementation given in the docs, but if you build a class-based implementation, you can see what's going on. Unfortunately, the class implementation seems to break the online interactive visualizer, so you'll need to copy the code below and run it locally:

    import collections

    def tee(iterable, n=2):
        class gen(object):
            # shared by all n iterators: the source and one deque apiece
            it = iter(iterable)
            deques = [collections.deque() for i in range(n)]
            def __init__(self, d):
                self.d = d
            def __iter__(self):
                return self
            def __next__(self):
                if not self.d:                 # when the local deque is empty
                    newval = next(gen.it)      # fetch a new value and
                    for d in gen.deques:       # load it to all the deques
                        d.append(newval)
                return self.d.popleft()
        return tuple(gen(d) for d in gen.deques)

    class Reiterable(object):
        def __init__(self, iterable):
            self.iterable = iterable
        def __iter__(self):
            # note: the class-based tee above, not itertools.tee,
            # so we can poke at the chain afterward
            self.iterable, t = tee(self.iterable)
            return t

    import io

    f = io.StringIO('abc\ndef\n')
    f = Reiterable(f)
    for i in range(3):
        list(f)
    print(f.iterable.it.it.it)   # three links down: the original StringIO

Algorithmic analysis

Reiterable is building up a chain of tee objects. It moves the values from one deque to the next, so all but the highest deque are empty, but each link in the chain still costs an empty deque and a tee wrapper object. Each value iterated is just moved from the highest deque on the chain to the new deque, so the wasted time per iteration step is minimal; but when you run out of values, it has to run through the whole chain to discover that every deque is empty before the new iterator can be declared empty.

So, to iterate N items M times, instead of wasting N space to hold a copy of the iterable, you're wasting N+M space to hold a copy of the iterable and a chain of M empty tees and deques. And instead of NM time for the iteration, it's NM + M²/2 time for the iteration plus the extra empty checks (which is still O(NM) as long as M isn't much bigger than N).

So, there's no algorithmic cost, except in edge cases when M >> N, which is a very strange use case. (If N is tiny, you really should just use a list; if M is gigantic, that almost always means you're doing a nested iteration that you can just flip over.)
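To make "flip over" concrete, here's a sketch; outer, reiterable, and combine are stand-ins, and it assumes outer is itself re-iterable, like a list:

    # M passes over the reiterable, one per item of outer:
    for x in outer:               # M items
        for y in reiterable:      # N items, iterated M times
            combine(x, y)

    # flipped: the reiterable is only walked once,
    # so the tee chain stays short
    for y in reiterable:          # N items, iterated once
        for x in outer:           # M items, iterated N times
            combine(x, y)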

Real-life performance

The real cost is the added overhead of having to go through the tee's generator for each value instead of just going through a list iterator. Which you can time pretty easily, so let's try it:

    In [66]: def func():
        ...      f = (i for i in range(1000000))
        ...      sum(f)
    In [67]: %timeit func()
    10 loops, best of 3: 73.2 ms per loop

    In [68]: def func():
        ...      f = (i for i in range(1000000))
        ...      sum(list(f))
    In [69]: %timeit func()
    10 loops, best of 3: 101 ms per loop

    In [70]: def func():
        ...      f = (i for i in range(1000000))
        ...      sum(Reiterable(f))
    In [71]: %timeit func()
    10 loops, best of 3: 108 ms per loop

So, there is an additional performance cost to building a tee out of an iterator vs. building a list… but the added overhead (about 35ms over the 73ms baseline, vs. about 28ms for the list) is only about 25% higher.

Comments

  1. Here's what I use:

    from typing import Iterable, Iterator, MutableSequence, TypeVar

    T = TypeVar('T')

    class Tee(Iterable[T]):  # allows for an indefinite number of tees
        iterator: Iterator[T]
        previous: MutableSequence[T]

        def __init__(self, iterable: Iterable[T]):
            self.iterator = iter(iterable)
            self.previous = []

        def __iter__(self) -> '_TeeIterator[T]':
            return _TeeIterator(self)


    class _TeeIterator(Iterator[T]):
        tee: Tee[T]
        i: int

        def __init__(self, tee: Tee[T]):
            self.tee = tee
            self.i = 0

        def __iter__(self) -> '_TeeIterator[T]':
            return self

        def __next__(self) -> T:
            try:
                return self.tee.previous[self.i]
            except IndexError:
                self.tee.previous.append(next(self.tee.iterator))
                return self.tee.previous[self.i]
            finally:
                self.i += 1
