It's very common in a program to want to do two things at once: repaginate a document while still responding to user input, or handle requests from two (or 10000) web browsers at the same time. In fact, pretty much any GUI application, network server, game, or simulator needs to do this.

It's possible to write your program to explicitly switch back and forth between different tasks, and there are many higher-level approaches to this, which I've covered in previous posts. But an alternative is to have multiple "threads of control", each doing its own thing independently.

There are three ways to do this: processes, threads, or greenlets. How do you decide between them?

  • Processes are good for running tasks that need to use CPU in parallel and don't need to share state, like doing some complex mathematical calculation to hundreds of inputs.
  • Threads are good for running a small number of I/O-bound tasks, like a program to download hundreds of web pages.
  • Greenlets are good for running a huge number of simple I/O-bound tasks, like a web server. Update: See greenlets vs. explicit coroutines on deciding whether to use automatic greenlets or explicit ones.
If your program doesn't fit one of those three, you have to understand the tradeoffs.

Multiprocessing

Traditionally, the way to have separate threads of control was to have entirely independent programs, and often this is still the best answer. That's especially true in Python, where helpers like multiprocessing.Process, multiprocessing.Pool, and concurrent.futures.ProcessPoolExecutor wrap up most of the scaffolding for you.
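
For example, here's a minimal sketch of farming a CPU-bound function out to one worker process per core with concurrent.futures (the is_prime function and the range of inputs are just made-up examples):

    import concurrent.futures

    def is_prime(n):
        # CPU-bound work: deliberately naive trial division.
        if n < 2:
            return False
        return all(n % d for d in range(2, int(n ** 0.5) + 1))

    if __name__ == '__main__':
        numbers = list(range(10000000, 10000100))
        # By default, the executor creates one worker process per core.
        with concurrent.futures.ProcessPoolExecutor() as executor:
            for n, prime in zip(numbers, executor.map(is_prime, numbers)):
                print(n, prime)

The __main__ guard matters here: on platforms that spawn rather than fork (like Windows), each worker re-imports your main module, so it has to be importable without side effects.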

Separate processes have one major advantage: They're completely independent of each other. They can't interfere with each other's global objects by accident. This can make it easier to design your program. It also means that if one process crashes, the others are unaffected.

Separate processes also have a major disadvantage: They're completely independent of each other. They can't share high-level objects. They can pass objects around instead, which is often a better solution anyway. The standard library does this by pickling the objects, which means that any object that can't be pickled (like a socket), or that would be too expensive to pickle and copy around (like a list of a billion numbers), won't work. Processes can also share buffers full of low-level data (like an array of a billion 32-bit C integers). In some cases, you can pass explicit requests and responses instead: if the background process is only going to need to get or set a few of those billion numbers, you can send get and set messages, and the stdlib's Manager classes do exactly that automatically for simple lists and dicts. But sometimes there's just no easy way to make this work.
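
For example, here's a rough sketch of the Manager approach: the parent keeps the list in a manager process, and the child's indexing and assignment turn into get and set messages under the covers (the tiny list and the bump_first function are just for illustration):

    import multiprocessing

    def bump_first(shared):
        # shared looks like a normal list, but every index and every
        # assignment is really a get or set message to the manager process.
        shared[0] = shared[0] + 1

    if __name__ == '__main__':
        with multiprocessing.Manager() as manager:
            shared = manager.list([0] * 1000)  # a billion would work the same way
            p = multiprocessing.Process(target=bump_first, args=(shared,))
            p.start()
            p.join()
            print(shared[0])  # 1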

As a more minor disadvantage, on many platforms (especially Windows), starting a new process is a pretty heavy thing to do. We're not talking minutes here, just milliseconds, but still, if you're kicking off jobs that may only take 5ms to finish, and you add 30ms of overhead to each one, that's not exactly an optimization. Usually, using a Pool or Executor is the easy way around this problem, but it's not always appropriate.

Finally, while modern OSes are pretty good at running, say, a couple dozen active processes and a couple hundred dormant ones, if you push things up to hundreds of active processes or thousands of dormant ones, you may end up spending more time in context-switching and scheduling overhead than doing actual work. If you know that your program is going to be using most of the machine's CPU, you generally want to use exactly as many processes as there are cores. (Again, using a Pool or Executor makes this easy, especially since they default to creating one process per core.)

Threading

Almost all modern operating systems have threads. As far as the operating system's scheduler is concerned, threads are like separate processes, but as far as the memory heap, open file table, and so on are concerned, they're still part of the same process.

The advantage of threads over processes is that everything is shared. If you modify an object in one thread, another thread can see it.

The disadvantage of threads is that everything is shared. If you modify an object in two different threads, you've got a race condition. Even if you only modify it in one thread, it's not deterministic whether another thread sees the old value or the new one—which is especially bad for operations that aren't "atomic", where another thread could see some invalid intermediate value.

One way to solve this problem is to use locks and other synchronization objects. (You can also use low-level "interlocked" primitives, like "atomic compare and swap", to build your own synchronization objects or lock-free objects, but this is very tricky and easy to get wrong.)
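
As a concrete sketch (the counter and the loop counts are arbitrary), even something as innocent-looking as counter += 1 is a read-modify-write that needs a lock:

    import threading

    counter = 0
    lock = threading.Lock()

    def increment(n):
        global counter
        for _ in range(n):
            # counter += 1 is really a read, an add, and a write; without
            # the lock, two threads can interleave between those steps
            # and lose updates.
            with lock:
                counter += 1

    threads = [threading.Thread(target=increment, args=(100000,))
               for _ in range(4)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    print(counter)  # always 400000 with the lock; often less without it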

The other way to solve this problem is to pretend you're using separate processes and pass around copies even though you don't have to.
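
The usual way to do that in Python is to give each thread its own data and communicate over queue.Queue, as in this sketch (the doubling worker is a stand-in for real work):

    import queue
    import threading

    def worker(jobs, results):
        while True:
            item = jobs.get()
            if item is None:       # sentinel: no more work
                break
            results.put(item * 2)  # send back a new object, not a shared one

    jobs, results = queue.Queue(), queue.Queue()
    t = threading.Thread(target=worker, args=(jobs, results))
    t.start()
    for i in range(5):
        jobs.put(i)
    jobs.put(None)
    t.join()
    while not results.empty():
        print(results.get())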

Python adds another disadvantage to threads: Under the covers, the Python interpreter itself has a bunch of globals that it needs. The CPython implementation (the one you're using if you don't know otherwise) handles this by protecting its global state with a Global Interpreter Lock (GIL), so a single process running Python can only execute one interpreter instruction at a time. That means if you have 16 processes, your 16-core machine can execute 16 instructions at once, one per process; but if you have 16 threads, you'll only execute one instruction, while the other 15 cores sit around idle. Extension modules can work around this by releasing the GIL when they're busy doing non-Python work (NumPy, for example, will often do this), but it's still something you have to profile to verify. Some other implementations (Jython, IronPython, and some non-default-as-of-early-2015 optional builds of PyPy) get by without a GIL, so it may be worth looking at those implementations. But for many Python applications, multithreading means single-core.

So, why ever use threads? Two reasons.

First, some designs are just much easier to think of in terms of shared-everything threading. (However, keep in mind that many designs look easier this way, until you try to get the synchronization right…)

Second, if your code is mostly I/O-bound (meaning you spend more time waiting on the network, the filesystem, the user, etc. than doing actual work—you can tell this because your CPU usage is nowhere near 100%), threads will usually be simpler and more efficient.
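
For example, the classic page downloader is just a thread pool over a blocking fetch function, as in this sketch (the URL list is made up, and error handling is omitted):

    import concurrent.futures
    import urllib.request

    # A made-up list of URLs, just for illustration.
    urls = ['http://example.com/page%d' % i for i in range(100)]

    def fetch(url):
        # Each thread spends almost all its time blocked in a socket read,
        # and CPython releases the GIL while blocked, so the waits overlap.
        with urllib.request.urlopen(url) as response:
            return url, response.read()

    with concurrent.futures.ThreadPoolExecutor(max_workers=8) as executor:
        for url, data in executor.map(fetch, urls):
            print(url, len(data))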

Greenlets

Greenlets—aka cooperative threads, user-level threads, green threads, or fibers—are similar to threads, but the application has to schedule them manually. Unlike a process or a thread, your greenlet function just keeps running until it decides to yield control to someone else.
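
Here's a minimal sketch using the third-party greenlet package (the one gevent is built on): nothing runs until something explicitly switches to it, and each greenlet runs until it explicitly switches away:

    from greenlet import greenlet  # pip install greenlet

    def ping():
        print('ping')
        gr2.switch()   # explicitly hand control to the other greenlet
        print('ping again')

    def pong():
        print('pong')
        gr1.switch()   # and hand it back

    gr1 = greenlet(ping)
    gr2 = greenlet(pong)
    gr1.switch()       # prints ping, pong, ping again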

Why would you want to use greenlets? Because in some cases, your application can schedule things much more efficiently than the general-purpose scheduler built into your OS kernel. In particular, if you're writing a server that's listening on thousands of sockets, and your greenlets spend most of their time waiting on a socket read, your greenlet can tell the scheduler "Wake me up when I've got something to read" and then yield to the scheduler, and then do the read when it's woken up. In some cases this can be an order of magnitude more scalable than letting the OS interrupt and awaken threads arbitrarily.

That can get a bit clunky to write, but third-party libraries like gevent and eventlet make it simple: you just call the recv method on a socket, and it automatically turns that into a "wake me up later, yield now, and recv once we're woken up". Then it looks exactly the same as the code you'd write using threads.
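
A sketch of the same downloader on top of gevent (URLs made up again): after monkey-patching, the sequential-looking code overlaps all its waits under the covers:

    from gevent import monkey
    monkey.patch_all()  # replace socket & friends with greenlet-aware versions

    import gevent
    import urllib.request

    def fetch(url):
        # urlopen now yields to gevent's scheduler whenever it would block.
        with urllib.request.urlopen(url) as response:
            print(url, len(response.read()))

    # With thousands of URLs instead of 100, this still scales.
    urls = ['http://example.com/page%d' % i for i in range(100)]
    gevent.joinall([gevent.spawn(fetch, url) for url in urls])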

Another advantage of greenlets is that you know that your code will never be arbitrarily preempted. Every operation that doesn't yield control is guaranteed to be atomic. This makes certain kinds of race conditions impossible. You still need to think through your synchronization, but often the result is simpler and more efficient.

The big disadvantage is that if you accidentally write some CPU-bound code in a greenlet, it will block the entire program, preventing any other greenlets from running at all, whereas with threads it would just slow the other threads down a bit. (Of course sometimes this is a good thing: it makes the problem easier to reproduce and recognize…)

Update: Greenlets can also be used with explicit waiting. While the older ways of doing this were a bit clumsy, with newer frameworks, the question really is as simple as whether you mark each wait with await (as in asyncio) vs. whether they happen automatically when you call magic functions like socket.recv (as in gevent). What happens under the covers is the same either way. The tl;dr is that there's usually more advantage than disadvantage in marking your yields explicitly, but read greenlets vs. explicit coroutines if you want (a few) more details.
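
For comparison, here's a rough sketch of explicit marking in asyncio (the host and the hand-rolled HTTP request are just for illustration):

    import asyncio

    async def fetch(host, path):
        # Every point where this coroutine can yield to the event loop
        # is marked with an explicit await; everything between two
        # awaits is guaranteed to run atomically.
        reader, writer = await asyncio.open_connection(host, 80)
        writer.write(b'GET %s HTTP/1.0\r\nHost: %s\r\n\r\n'
                     % (path.encode(), host.encode()))
        data = await reader.read()
        writer.close()
        return len(data)

    async def main():
        sizes = await asyncio.gather(
            *(fetch('example.com', '/page%d' % i) for i in range(10)))
        print(sizes)

    asyncio.get_event_loop().run_until_complete(main())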