The problem

Often, using an iterator lazily is better than generating a sequence (like the one you get from a list comprehension). For example, compare these two scripts:
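Something like this pair, say (a minimal sketch; process and data.txt are placeholders for whatever per-line work and input the script actually has):

    def process(line):
        ...  # placeholder for the per-line work (e.g., a network request)

    # Script 1: eagerly read the whole file into a list up front.
    with open('data.txt') as f:
        lines = [line for line in f]
    for line in lines:
        process(line)

    # Script 2: lazily pull one line at a time, processing as you go.
    with open('data.txt') as f:
        for line in f:
            process(line)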

The first one has to read the entire file into memory, which can be a problem for huge files, but usually you don't have files that big.

A more serious problem is that it has to read the entire file before it can do any work. Even with moderately-sized files, the corresponding delay at startup can make debugging more painful (each run takes seconds instead of milliseconds before you can see whether the first results are correct). And it can even have a significant performance cost (by preventing the file reads from interleaving with the network reads, which takes away caching opportunities from the OS/filesystem/drive).

The problem with the second one is that you can only iterate a file once. If you try to iterate it a second time, it'll be empty (because you've already iterated the whole thing). So, the following code gives you 2 instead of 4:
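For instance (a minimal sketch, using an in-memory two-line file):

    import io

    f = io.StringIO('abc\ndef\n')
    count = sum(1 for line in f)    # 2: this consumes the file
    count += sum(1 for line in f)   # +0: the file is already exhausted
    print(count)                    # 2, not 4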

Of course little of this is specific to files. If you have a generator that requires a lot of CPU work to run, running it to completion before you get started causes the same startup delay, and can prevent pipelining of work, which can have a huge cost in CPU cache misses. But just leaving it as a generator means you can't iterate through it repeatedly; every pass after the first will give you nothing.

The solution

So, is there a way to get an iterable that's lazy the first time you run through it like an iterator, but restartable like a sequence?

Well, you could build a complete "lazy sequence" class, but that's a lot of work, with some fiddly edge cases to deal with in order to handle the full interface properly (including things like indexing and slicing with negative values).

Fortunately, you don't need the full interface. You need __next__ to store the values as you create them, and __iter__ to give you a new iterator that shares the same storage.

The easy way

As it turns out, that's exactly what itertools.tee does:
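For example, with the same two-line file (teeing again before each pass is exactly the pattern we'll wrap up below):

    import io
    import itertools

    f = io.StringIO('abc\ndef\n')
    f, g = itertools.tee(f)
    print(list(g))   # ['abc\n', 'def\n'], pulled lazily from the file
    f, g = itertools.tee(f)
    print(list(g))   # ['abc\n', 'def\n'] again, replayed from tee's buffer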
The problem is that it's tedious and error-prone to have to call tee explicitly each time you want to iterate. But you can easily wrap this up:
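For example, with a wrapper whose __iter__ tees off a fresh iterator each time (this is the same Reiterable that the demo further down reuses):

    import itertools

    class Reiterable(object):
        def __init__(self, iterable):
            self.iterable = iterable
        def __iter__(self):
            # Keep one half of the tee as our new source; hand out the other.
            self.iterable, t = itertools.tee(self.iterable)
            return t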
Now Reiterable is as easy to use as list, and gives you the benefit you care about (being able to iterate the values repeatedly) without the cost (iterating the entire thing up front).

Performance

The documentation shows you how tee is implemented. And if you don't understand it, it's probably worth copying the pure-Python implementation from the docs and stepping through what it does.
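Roughly, it looks like this (paraphrasing the docs' pure-Python equivalent; the explicit StopIteration handling is needed on modern Pythons because, since PEP 479, a generator can't just let StopIteration leak out):

    import collections

    def tee(iterable, n=2):
        it = iter(iterable)
        deques = [collections.deque() for i in range(n)]
        def gen(mydeque):
            while True:
                if not mydeque:             # when the local deque is empty
                    try:
                        newval = next(it)   # fetch a new value and
                    except StopIteration:
                        return
                    for d in deques:        # load it to all the deques
                        d.append(newval)
                yield mydeque.popleft()
        return tuple(gen(d) for d in deques)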

But the basic idea is this: Each time you call tee, it creates two new deques and two new generators, both tied to the original iterator. Whenever either generator needs a value that wasn't yet produced, it's taken from the original iterator and added to both deques, and then of course immediately popped from one.

So, the first time through Reiterable, it iterates the values on demand, and copies each value to the spare generator's deque. Each subsequent time, it's doing the same, but from an iterator over the spare deque instead of from the original iterator. So the values get moved from one deque to the next, with no wasted space and very little wasted time, right?

Well, not quite. This is hard to see with the C implementation of tee, or even the generator-based implementation given in the docs, but if you build a class-based implementation, you can see what's going on. Unfortunately, the class implementation seems to break the online interactive visualizer, so you'll need to copy the code below and run it locally:

    import collections
    import io

    def tee(iterable, n=2):
        class gen(object):
            it = iter(iterable)
            deques = [collections.deque() for i in range(n)]
            def __init__(self, d):
                self.d = d
            def __iter__(self):
                return self
            def __next__(self):
                if not self.d:              # when the local deque is empty
                    newval = next(gen.it)      # fetch a new value and
                    for d in gen.deques:        # load it to all the deques
                        d.append(newval)
                return self.d.popleft()
        return tuple(gen(d) for d in gen.deques)

    class Reiterable(object):
        def __init__(self, iterable):
            self.iterable = iterable
        def __iter__(self):
            self.iterable, t = tee(self.iterable)  # the class-based tee above, so we can inspect it
            return t

    f = io.StringIO('abc\ndef\n')
    f = Reiterable(f)
    for i in range(3):
        list(f)
    print(f.iterable.it.it.it)  # the original StringIO, three tee links down the chain

Algorithmic analysis

Reiterable is building up a chain of tee objects. The values get moved from one deque to the next, so all but the newest deque are empty, but every link in the chain still costs a (now-empty) deque and a tee wrapper object. Each value iterated is just moved from the newest deque on the chain into the new one, so the wasted time per iteration step is minimal; but when you run out of values, the new iterator has to run through the whole chain to discover that every deque is empty before it can itself be declared empty.

So, to iterate N items M times, instead of wasting N space to hold a copy of the iterable, you're wasting N+M space to hold a copy of the iterable plus a chain of M empty tees and deques. And instead of NM time for the iteration, it's NM + M²/2 time: the j-th pass has to walk a chain of j links before it can report that it's exhausted, and those extra empty checks total 1 + 2 + ⋯ + M ≈ M²/2 (which is still O(NM) as long as M doesn't dwarf N).

So, there's no algorithmic cost, except in edge cases when M >> N, which is a very strange use case. (If N is tiny, you really should just use a list; if M is gigantic, that almost always means you're doing a nested iteration that you can just flip over.)

Real-life performance

The real cost is the added overhead of having to go through the tee's generator for each value instead of just going through a list iterator. That's easy to time, so let's try it:

    In [66]: def func():
        ...      f = (i for i in range(1000000))
        ...      sum(f)

    In [67]: %timeit func()
    10 loops, best of 3: 73.2 ms per loop

    In [68]: def func():
        ...      f = (i for i in range(1000000))
        ...      sum(list(f))

    In [69]: %timeit func()
    10 loops, best of 3: 101 ms per loop

    In [70]: def func():
        ...      f = (i for i in range(1000000))
        ...      sum(Reiterable(f))

    In [71]: %timeit func()
    10 loops, best of 3: 108 ms per loop

So, there is an additional performance cost to building a tee out of an iterator vs. building a list… but it's modest: the list adds about 28 ms over the bare generator (101 − 73.2), and Reiterable adds about 35 ms (108 − 73.2), so the overhead is only about 25% higher.

Comments

  1. Here's what I use:

    from typing import Iterable, Iterator, MutableSequence, TypeVar

    T = TypeVar('T')

    class Tee(Iterable[T]):  # allows an indefinite number of tees
        iterator: Iterator[T]
        previous: MutableSequence[T]

        def __init__(self, iterable: Iterable[T]):
            self.iterator = iter(iterable)
            self.previous = []

        def __iter__(self) -> '_TeeIterator[T]':
            return _TeeIterator(self)


    class _TeeIterator(Iterator[T]):
        tee: Tee[T]
        i: int

        def __init__(self, tee: Tee[T]):
            self.tee = tee
            self.i = 0

        def __iter__(self) -> '_TeeIterator[T]':
            return self

        def __next__(self) -> T:
            try:
                return self.tee.previous[self.i]
            except IndexError:
                self.tee.previous.append(next(self.tee.iterator))
                return self.tee.previous[self.i]
            finally:
                self.i += 1
