There are a lot of novice questions on StackOverflow about optimizing NumPy code, and most of them make the same handful of mistakes. I'll try to cover them all here.

What does NumPy speed up?

Let's look at some Python code that does some computation element-wise on two lists of lists. I'll use operator.add because it's just about the fastest function you can call.
import operator

# a and b are 100x100 lists of lists of floats
c = []
for ra, rb in zip(a, b):
    c.append([])
    for ca, cb in zip(ra, rb):
        c[-1].append(operator.add(ca, cb))
This is slow. If I run it on two 100x100 lists, it takes 2.59ms. Where is all that time going?
  • 206µs: the fastest possible 100x100 loop
  • 185µs: extra time for the zip generators
  • 770µs: function calls on operator.add
  • 713µs: the actual additions
  • 716µs: appending
So, we're spending nearly 3x as long on overhead as on the actual work.

You can optimize that overhead. Using a list comprehension reduces the time for both the loops and the appends, which gets us down to 2.18ms—a 19% speedup. Manually inlining the add calls into ca+cb reduces the function call time, and with a couple more tricks, ultimately, the fastest I can get this in pure Python is 1.74ms. That's a 49% speedup, which isn't bad, but it's still orders of magnitude slower than what you'd get from the dumbest possible C implementation.
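In case it's not obvious what those versions look like, here's a sketch of the two main steps (the "couple more tricks" aren't shown):
# The list-comprehension version, still calling operator.add:
c = [[operator.add(ca, cb) for ca, cb in zip(ra, rb)] for ra, rb in zip(a, b)]
# ...and with the call inlined into an expression:
c = [[ca + cb for ca, cb in zip(ra, rb)] for ra, rb in zip(a, b)]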

So, let's turn to NumPy. What can NumPy speed up?

Well, storing both matrices in arrays and filling in a pre-allocated target zeros array gets us a little better cache locality and eliminates the costs of reallocating and copying lists that come from appending. That gets us to 2.39ms—only an 8% speedup.
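The exact loop isn't shown above, but it looks something like this sketch (a and b are assumed to already be 100x100 NumPy arrays of floats):
import numpy as np

c = np.zeros_like(a)                      # pre-allocated target
for i, (ra, rb) in enumerate(zip(a, b)):
    for j, (ca, cb) in enumerate(zip(ra, rb)):
        c[i, j] = operator.add(ca, cb)    # still one Python-level call per element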

If we use a vectorized operation, we can turn the loops into C loops, which should eliminate most of the looping time. It also makes the code simpler:
c = np.vectorize(operator.add)(a, b)
But that still only gets us down to 2.08ms—a 25% speedup. It's a little better than the list comprehension, but not much. And now we can't replace the function call with an expression, so this is the end of the line, right?

No! NumPy comes with a bunch of functions that are already vectorized and implemented in C (or, sometimes, Fortran or C++). One of them, np.add, does the same thing as np.vectorize(operator.add), but a whole lot faster. You can also write this as a normal operator.
c = np.add(a, b)
c = a + b
What's the difference between np.vectorize(operator.add) and np.add?

They both spend about 5µs on preallocating the target array and looping.

But for np.add, that's almost all it has to do—other than that, it's just looping over 0-99 100 times in C, looking up doubles in a couple of C arrays, adding them, and storing the result in another C array. All told, that takes about another 2µs, so our total time is 7.07µs. Yes, microseconds, not milliseconds. That's a 36634% speedup—or, turned around, we've shaved off 99.7% of the time.

By contrast, for np.vectorize(operator.add), inside the loop, it has to make a Python function call to operator.add, and that function has to unbox two Python floats, add them together, and box up a new Python float. We already saw how long those parts take—770µs and 713µs, respectively.
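To make that concrete, here's a rough pure-Python model of the per-element work np.vectorize(operator.add) ends up doing (its outer loop really is in C, but every element still costs a Python call plus the boxing and unboxing):
def vectorized_add_model(a, b):
    c = np.empty_like(a)
    for i in range(a.shape[0]):
        for j in range(a.shape[1]):
            # unbox two Python floats, call operator.add, box the result
            c[i, j] = operator.add(float(a[i, j]), float(b[i, j]))
    return c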

So, the key to using NumPy is simple: it's not about moving the inner loop into NumPy, it's about getting rid of every Python function call in that inner loop. Only call natively-vectorized functions like np.add, or use operators like +, on the arrays.

And notice that in simple cases, this is dead easy: c = a + b is the code from the inner loop, written without the loops. (And yes, you can also just call operator.add(a, b).) If the function you want to vectorize instead returns (a**2 + b**2) ** .5, that's fine, just call that directly on the arrays instead of inside a loop.
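For instance, sketching that out (np.hypot happens to compute the same thing, if you'd rather call it by name):
c = (a**2 + b**2) ** .5    # element-wise over whole arrays, no Python loop
c = np.hypot(a, b)         # the same computation as a single ufunc call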

Sometimes it's not as easy. Consider this loop:
c = []
for i, ra in enumerate(a):
    c.append([])
    for j, ca in enumerate(ra):
        c[-1].append(ca + b[j][i] * i)
Here, we're iterating b in a different order from a, and we're using the indices themselves in the computation. How can you do that? This is where NumPy programming becomes an art rather than just literal translation. You have to learn all the functions NumPy includes, and get to know which ones to reach for when facing each problem, and the only way to get there is through experience. But let's work this one through.

First, notice that the different order we're iterating b would be the same order if b were transposed. So, just transpose it. (If you're worried that this wastes time making a copy, don't worry, it doesn't; it just creates a new view over the same array, with different striding rules attached. But, even if it _did_ make a copy, wasting a few microseconds to save a few milliseconds would still be a huge win.)
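If you want to convince yourself of that, it's easy to check (assuming b owns its own data, as it does here):
bt = b.T
print(bt.base is b)               # True: the transpose is a view sharing b's buffer
print(b.strides, bt.strides)      # same stride values, in swapped order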

Next, how do we get the i values? Well, enumerate basically zips a range with its argument, and we can use a third array for that range. The one wrinkle is that i is the row index, so that range has to be reshaped into a column (shape (100, 1)) so broadcasting lines it up with rows rather than columns.
c = a + b.T * np.arange(a.shape[0])[:, None]
This takes 42.1µs, compared to 3.16ms for the naive pure Python implementation and 2.16ms for the fastest pure Python implementation.
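If you want to reproduce these comparisons, a minimal timing harness looks something like this (the exact numbers will vary by machine and NumPy version):
import timeit
import numpy as np

a = np.random.rand(100, 100)
b = np.random.rand(100, 100)
al, bl = a.tolist(), b.tolist()

def pure_python(al, bl):
    c = []
    for i, ra in enumerate(al):
        c.append([])
        for j, ca in enumerate(ra):
            c[-1].append(ca + bl[j][i] * i)
    return c

def with_numpy(a, b):
    return a + b.T * np.arange(a.shape[0])[:, None]

print(timeit.timeit(lambda: pure_python(al, bl), number=100) / 100)   # seconds per call
print(timeit.timeit(lambda: with_numpy(a, b), number=100) / 100)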