The problem

Often, using an iterator lazily is better than generating a sequence (like the one you get from a list comprehension). For example, compare these two scripts:
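(Sketches only; data.txt and process() are hypothetical placeholders for a real input file and the real per-line work.)

    def process(line):          # hypothetical stand-in for the real per-line work
        print(line.upper())

    # First script: read the whole file into a list up front, then work through it.
    with open('data.txt') as f:
        lines = [line.rstrip('\n') for line in f]
    for line in lines:
        process(line)

    # Second script: iterate the file lazily, one line at a time.
    with open('data.txt') as f:
        for line in f:
            process(line)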

The first one has to read the entire file into memory, which can be a problem for huge files, but usually you don't have files that big.

A more serious problem is that it has to read the entire file before it can do any work. Even with moderately-sized files, the corresponding delay at startup can make debugging more painful (each run takes seconds instead of milliseconds before you can see whether the first results are correct). And it can even have a significant performance cost (by preventing the file reads from interleaving with the network reads, which takes away caching opportunities from the OS/filesystem/drive).

The problem with the second one is that you can only iterate a file once. If you try to iterate it a second time, it'll be empty (because you've already iterated the whole thing). So, the following code gives you 2 instead of 4:
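(A sketch with a two-line file; io.StringIO stands in for a real file, as it does in the listing later in this post.)

    import io

    f = io.StringIO('abc\ndef\n')
    first = sum(1 for line in f)    # 2: this pass consumes the file
    second = sum(1 for line in f)   # 0: the file is already exhausted
    print(first + second)           # 2, not 4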

Of course little of this is specific to files. If you have a generator that requires a lot of CPU work to run, running it to completion before you get started causes the same startup delay, and can prevent pipelining of work, which can have a huge cost in CPU cache misses. But just leaving it as a generator means you can't iterate through it repeatedly: every pass after the first gets nothing.

The solution

So, is there a way to get an iterable that's lazy the first time you run through it like an iterator, but restartable like a sequence?

Well, you could build a complete "lazy sequence" class, but that's a lot of work, with some fiddly edge cases to deal with in order to handle the full interface properly (including things like indexing and slicing with negative values).

Fortunately, you don't need the full interface. You need __next__ to store the values as you create them, and __iter__ to give you a new iterator that shares the same storage.

The easy way

As it turns out, that's exactly what itertools.tee does:
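(A sketch of calling tee by hand, with io.StringIO standing in for a file.)

    import io, itertools

    f = io.StringIO('abc\ndef\n')
    f, g = itertools.tee(f)
    print(sum(1 for line in g))   # 2: pulls from the file, caching each line for f
    f, g = itertools.tee(f)
    print(sum(1 for line in g))   # 2 again: replayed from the values cached in f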
The problem is that it's tedious and error-prone to have to call tee explicitly each time you want to iterate. But you can easily wrap this up:
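A minimal wrapper along these lines (essentially the Reiterable class from the full listing further down, but calling itertools.tee directly):

    import itertools

    class Reiterable(object):
        def __init__(self, iterable):
            self.iterable = iterable
        def __iter__(self):
            # Each request for an iterator tees off a fresh one, keeping the
            # other leg around as the thing to tee from next time.
            self.iterable, t = itertools.tee(self.iterable)
            return t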
Now Reiterable is as easy to use as list, and gives you the benefit you care about (being able to iterate the values repeatedly) without the cost (iterating the entire thing up front).
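For instance, the 2-instead-of-4 example from earlier now comes out right:

    import io

    f = Reiterable(io.StringIO('abc\ndef\n'))
    print(sum(1 for line in f) + sum(1 for line in f))   # 4: each pass gets a fresh iterator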

Performance

The documentation shows you how tee is implemented. And if you don't understand it, it's probably worth copying the pure-Python implementation from the docs and stepping through what it does.
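It looks roughly like this (paraphrased from the docs' recipe, with explicit StopIteration handling added so it runs cleanly on current Python):

    import collections

    def tee(iterable, n=2):
        it = iter(iterable)
        deques = [collections.deque() for i in range(n)]
        def gen(mydeque):
            while True:
                if not mydeque:              # when the local deque is empty
                    try:
                        newval = next(it)    # fetch a new value and
                    except StopIteration:
                        return
                    for d in deques:         # load it into all the deques
                        d.append(newval)
                yield mydeque.popleft()
        return tuple(gen(d) for d in deques)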

But the basic idea is this: Each time you call tee, it creates two new deques and two new generators, both tied to the original iterator. Whenever either generator needs a value that hasn't been produced yet, it's taken from the original iterator and appended to both deques, and then of course immediately popped from the one that asked.

So, the first time through Reiterable, it iterates the values on demand, and copies each value to the spare generator's deque. Each subsequent time, it's doing the same, but from an iterator over the spare deque instead of from the original iterator. So the values get moved from one deque to the next, with no wasted space and very little wasted time, right?

Well, not quite. This is hard to see with the C implementation of tee, or even the generator-based implementation given in the docs, but if you build a class-based implementation, you can see what's going on. Unfortunately, the class implementation seems to break the online interactive visualizer, so you'll need to copy the code below and run it locally:

    import collections
    import io

    def tee(iterable, n=2):
        class gen(object):
            it = iter(iterable)
            deques = [collections.deque() for i in range(n)]
            def __init__(self, d):
                self.d = d
            def __iter__(self):
                return self
            def __next__(self):
                if not self.d:                 # when the local deque is empty
                    newval = next(gen.it)      # fetch a new value and
                    for d in gen.deques:       # load it into all the deques
                        d.append(newval)
                return self.d.popleft()
        return tuple(gen(d) for d in gen.deques)

    class Reiterable(object):
        def __init__(self, iterable):
            self.iterable = iterable
        def __iter__(self):
            # Use the class-based tee above so the resulting chain is inspectable.
            self.iterable, t = tee(self.iterable)
            return t

    f = io.StringIO('abc\ndef\n')
    f = Reiterable(f)
    for i in range(3):
        list(f)
    print(f.iterable.it.it.it)    # the original StringIO, three tee layers down

Algorithmic analysis

Reiterable is building up a chain of tee objects. The values get moved from one deque to the next, so all but the topmost deque are empty, but each link in the chain still holds a deque and a tee wrapper object. Each value iterated is just moved from the highest deque on the chain to the new one, so the wasted time per iteration step is minimal; but when you run out of values, it has to run through the whole chain to discover that every deque is empty before the new iterator can be declared empty.

So, to iterate N items M times, instead of wasting N space to hold a copy of the iterable, you're wasting N+M space to hold a copy of the iterable and a chain of M empty tees and deques. And instead of NM time for the iteration, it's NM+M²/2 time: NM for the iteration itself, plus roughly 1+2+⋯+M ≈ M²/2 for the end-of-pass empty checks, since the k-th pass has to walk a chain of k links to discover they're all empty (which is still O(NM) as long as M isn't much bigger than N).

So, there's no algorithmic cost, except in edge cases when M >> N, which is a very strange use case. (If N is tiny, you really should just use a list; if M is gigantic, that almost always means you're doing a nested iteration that you can just flip over.)

Real-life performance

The real cost is the added overhead of having to go through the tee's generator for each value instead of just going through a list iterator, which you can time pretty easily, so let's try it:

    In [66]: def func():
        ...:     f = (i for i in range(1000000))
        ...:     sum(f)
    In [67]: %timeit func()
    10 loops, best of 3: 73.2 ms per loop

    In [68]: def func():
        ...:     f = (i for i in range(1000000))
        ...:     sum(list(f))
    In [69]: %timeit func()
    10 loops, best of 3: 101 ms per loop

    In [70]: def func():
        ...:     f = (i for i in range(1000000))
        ...:     sum(Reiterable(f))
    In [71]: %timeit func()
    10 loops, best of 3: 108 ms per loop

So, there is an additional performance cost to building a tee out of an iterator vs. building a list… but the overhead on top of the bare iteration (34.8ms vs. 27.8ms here) is only about 25% higher.
Comments

  1. Here's what I use:

    from typing import Iterable, Iterator, MutableSequence, TypeVar

    T = TypeVar('T')

    class Tee(Iterable[T]):  # Allows for an indefinite number of tees
        iterator: Iterator[T]
        previous: MutableSequence[T]

        def __init__(self, iterable: Iterable[T]):
            self.iterator = iter(iterable)
            self.previous = []

        def __iter__(self) -> '_TeeIterator[T]':
            return _TeeIterator(self)


    class _TeeIterator(Iterator[T]):
        tee: Tee[T]
        i: int

        def __init__(self, tee: Tee[T]):
            self.tee = tee
            self.i = 0

        def __iter__(self) -> '_TeeIterator[T]':
            return self

        def __next__(self) -> T:
            try:
                return self.tee.previous[self.i]
            except IndexError:
                self.tee.previous.append(next(self.tee.iterator))
                return self.tee.previous[self.i]
            finally:
                self.i += 1
