It's very common in a program to want to do two things at once: repaginate a document while still responding to user input, or handle requests from two (or 10000) web browsers at the same time. In fact, pretty much any GUI application, network server, game, or simulator needs to do this.

It's possible to write your program to explicitly switch off between different tasks, and there are many higher-level approaches to this, which I've covered in previous posts. But an alternative is to have multiple "threads of control", each doing its own thing independently.

There are three ways to do this: processes, threads, and greenlets. How do you decide between them?

  • Processes are good for running tasks that need to use the CPU in parallel and don't need to share state, like applying some complex mathematical calculation to hundreds of inputs.
  • Threads are good for running a small number of I/O-bound tasks, like a program to download hundreds of web pages.
  • Greenlets are good for running a huge number of simple I/O-bound tasks, like a web server. Update: See greenlets vs. explicit coroutines on deciding whether to use automatic greenlets or explicit ones.
If your program doesn't fit one of those three, you have to understand the tradeoffs.

Multiprocessing

Traditionally, the way to have separate threads of control was to have entirely independent programs. And often, this is still the best answer. Especially in Python, where you have helpers like multiprocessing.Process, multiprocessing.Pool, and concurrent.futures.ProcessPoolExecutor to wrap up most of the scaffolding for you.
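
For instance, here's a minimal sketch of the Executor version, assuming some CPU-bound crunch function (made up here for illustration) that you want to apply to hundreds of inputs:

```python
# Minimal sketch: fan a CPU-bound function out to worker processes.
# `crunch` and the inputs are placeholders for illustration.
from concurrent.futures import ProcessPoolExecutor

def crunch(n):
    # Stand-in for some expensive pure-CPU calculation.
    return sum(i * i for i in range(n))

if __name__ == '__main__':
    inputs = range(10000, 10100)
    # By default the executor creates roughly one worker per core.
    with ProcessPoolExecutor() as pool:
        results = list(pool.map(crunch, inputs))
    print(results[:5])
```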

Separate processes have one major advantage: They're completely independent of each other. They can't interfere with each other's global objects by accident. This can make it easier to design your program. It also means that if one process crashes, the others are unaffected.

Separate processes also have a major disadvantage: They're completely independent of each other. They can't share high-level objects. Processes can pass objects around—which is often a better solution. The standard library solutions do this by pickling the objects; this means that any object that can't be pickled (like a socket), or that would be too expensive to pickle and copy around (like a list of a billion numbers) won't work. Processes can also share buffers full of low-level data (like an array of a billion 32-bit C integers). In some cases, you can pass explicit requests and responses instead (e.g., if the background process is only going to need to get or set a few of those billion numbers, you can send get and set messages; the stdlib has Manager classes that do this automatically for simple lists and dicts). But sometimes, there's just no easy way to make this work.
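
As a rough sketch of the Manager approach (the worker function and key here are just illustrations), the child process gets a proxy to a shared dict, and each access turns into a small get or set message instead of pickling and copying the whole structure:

```python
# Rough sketch: share a dict between processes via a Manager proxy.
from multiprocessing import Manager, Process

def worker(shared):
    # Each access goes through the manager process as a small message,
    # instead of copying the whole structure into the child.
    shared['hits'] = shared.get('hits', 0) + 1

if __name__ == '__main__':
    with Manager() as manager:
        shared = manager.dict({'hits': 0})
        procs = [Process(target=worker, args=(shared,)) for _ in range(4)]
        for p in procs:
            p.start()
        for p in procs:
            p.join()
        # May print less than 4: the get-then-set above isn't atomic.
        print(shared['hits'])
```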

As a more minor disadvantage, on many platforms (especially Windows), starting a new process is a pretty heavy thing to do. We're not talking minutes here, just milliseconds, but still, if you're kicking off jobs that may only take 5ms to finish, and you add 30ms of overhead to each one, that's not exactly an optimization. Usually, using a Pool or Executor is the easy way around this problem, but it's not always appropriate.

Finally, while modern OSes are pretty good at running, say, a couple dozen active processes and a couple hundred dormant ones, if you push things up to hundreds of active processes or thousands of dormant ones, you may end up spending more time in context-switching and scheduling overhead than doing actual work. If you know that your program is going to be using most of the machine's CPU, you generally want to use exactly as many processes as there are cores. (Again, using a Pool or Executor makes this easy, especially since they default to creating one process per core.)

Threading

Almost all modern operating systems have threads. These are like separate processes as far as the operating system's scheduler is concerned, but are still part of the same process as far as the memory heap, open file table, and so on are concerned.

The advantage of threads over processes is that everything is shared. If you modify an object in one thread, another thread can see it.

The disadvantage of threads is that everything is shared. If you modify an object in two different threads, you've got a race condition. Even if you only modify it in one thread, it's not deterministic whether another thread sees the old value or the new one—which is especially bad for operations that aren't "atomic", where another thread could see some invalid intermediate value.

One way to solve this problem is to use locks and other synchronization objects. (You can also use low-level "interlocked" primitives, like "atomic compare and swap", to build your own synchronization objects or lock-free objects, but this is very tricky and easy to get wrong.)
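
Here's a small sketch of the lock-based approach, using a threading.Lock to guard a toy shared counter:

```python
# Small sketch: protect a shared counter with a lock.
import threading

counter = 0
counter_lock = threading.Lock()

def bump(times):
    global counter
    for _ in range(times):
        # Without the lock, this read-modify-write could race.
        with counter_lock:
            counter += 1

threads = [threading.Thread(target=bump, args=(100000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # reliably 400000 with the lock; unpredictable without it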

The other way to solve this problem is to pretend you're using separate processes and pass around copies even though you don't have to.
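
A quick sketch of that message-passing style with threads, handing items over a queue.Queue instead of mutating a shared object (the worker and items are placeholders):

```python
# Sketch: instead of sharing an object between threads, hand items
# (copies, or ownership of them) over a queue, as processes would.
import queue
import threading

tasks = queue.Queue()

def worker():
    while True:
        item = tasks.get()
        if item is None:  # sentinel: time to shut down
            break
        print('processed', item)

t = threading.Thread(target=worker)
t.start()
for i in range(5):
    tasks.put(i)
tasks.put(None)
t.join()
```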

Python adds another disadvantage to threads: Under the covers, the Python interpreter itself has a bunch of globals that it needs, and the CPython implementation (the one you're using if you don't know otherwise) keeps them consistent by protecting its global state with a Global Interpreter Lock (GIL). So, a single process running Python can only execute one instruction at a time: if you have 16 processes, your 16-core machine can execute 16 instructions at once, one per process, but if you have 16 threads, you'll only execute one instruction at a time, while the other 15 cores sit around idle. Custom extensions can work around this by releasing the GIL when they're busy doing non-Python work (NumPy, for example, will often do this), but it's still a problem that you have to profile. Some other implementations (Jython, IronPython, and some non-default-as-of-early-2015 optional builds of PyPy) get by without a GIL, so it may be worth looking at those implementations. But for many Python applications, multithreading means single-core.

So, why ever use threads? Two reasons.

First, some designs are just much easier to think of in terms of shared-everything threading. (However, keep in mind that many designs look easier this way, until you try to get the synchronization right…)

Second, if your code is mostly I/O-bound (meaning you spend more time waiting on the network, the filesystem, the user, etc. than doing actual work—you can tell this because your CPU usage is nowhere near 100%), threads will usually be simpler and more efficient.
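
For example, here's a sketch of that kind of I/O-bound use, downloading a batch of pages with a small thread pool (the URLs are placeholders, and error handling is omitted for brevity):

```python
# Sketch: I/O-bound work (downloading pages) spread across a few threads.
from concurrent.futures import ThreadPoolExecutor
from urllib.request import urlopen

urls = ['http://example.com/page%d' % i for i in range(100)]

def fetch(url):
    # Most of the time here is spent waiting on the network,
    # during which the GIL is released and other threads can run.
    with urlopen(url) as resp:
        return url, len(resp.read())

with ThreadPoolExecutor(max_workers=20) as pool:
    for url, size in pool.map(fetch, urls):
        print(url, size)
```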

Greenlets

Greenlets—aka cooperative threads, user-level threads, green threads, or fibers—are similar to threads, but the application has to schedule them manually. Unlike a process or a thread, your greenlet function just keeps running until it decides to yield control to someone else.

Why would you want to use greenlets? Because in some cases, your application can schedule things much more efficiently than the general-purpose scheduler built into your OS kernel. In particular, if you're writing a server that's listening on thousands of sockets, and your greenlets spend most of their time waiting on a socket read, your greenlet can tell the scheduler "Wake me up when I've got something to read" and then yield to the scheduler, and then do the read when it's woken up. In some cases this can be an order of magnitude more scalable than letting the OS interrupt and awaken threads arbitrarily.

That can get a bit clunky to write, but third-party libraries like gevent and eventlet make it simple: you just call the recv method on a socket, and it automatically turns that into a "wake me up later, yield now, and recv once we're woken up". Then it looks exactly the same as the code you'd write using threads.
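
A rough sketch of the gevent style (the URLs are placeholders): after monkey-patching, the blocking-looking code in each greenlet yields to the hub whenever it would block on a socket:

```python
# Rough sketch of the gevent style: blocking-looking code, cooperative underneath.
# monkey.patch_all() swaps the stdlib socket machinery for a yielding version.
from gevent import monkey; monkey.patch_all()

import gevent
from urllib.request import urlopen

def fetch(url):
    # Looks like ordinary blocking I/O; actually yields to other greenlets
    # whenever it would block.
    with urlopen(url) as resp:
        return len(resp.read())

urls = ['http://example.com/page%d' % i for i in range(1000)]
jobs = [gevent.spawn(fetch, url) for url in urls]
gevent.joinall(jobs)
print(sum(job.value for job in jobs))
```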

Another advantage of greenlets is that you know that your code will never be arbitrarily preempted. Every operation that doesn't yield control is guaranteed to be atomic. This makes certain kinds of race conditions impossible. You still need to think through your synchronization, but often the result is simpler and more efficient.

The big disadvantage is that if you accidentally write some CPU-bound code in a greenlet, it will block the entire program, preventing any other greenlets from running at all, whereas with threads it would just slow the other threads down a bit. (Of course sometimes this is a good thing—it makes it easier to reproduce and recognize the problem…)

Update: Greenlets can also be used with explicit waiting. While the older ways of doing this were a bit clumsy, with newer frameworks, the question really is as simple as whether you mark each wait with await (as in asyncio) vs. whether they happen automatically when you call magic functions like socket.recv (as in gevent). What happens under the covers is the same either way. The tl;dr is that there's usually more advantage than disadvantage in marking your yields explicitly, but read greenlets vs. explicit coroutines if you want (a few) more details.
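
For comparison, here's a sketch of the explicit style with asyncio (Python 3.5+ async/await syntax; asyncio.run needs 3.7+), where every point a coroutine can be suspended is marked in the source:

```python
# Sketch of the explicit style: every suspension point is marked with `await`.
import asyncio

async def handle(name, delay):
    # The `await` marks exactly where this coroutine can be suspended.
    await asyncio.sleep(delay)
    return name

async def main():
    results = await asyncio.gather(*(handle('task%d' % i, 0.1) for i in range(10)))
    print(results)

asyncio.run(main())
```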