I haven't posted anything new in a couple years (partly because I attempted to move to a different blogging platform where I could write everything in markdown instead of HTML but got frustrated—which I may attempt again), but I've had a few private comments and emails on some of the old posts, so I decided to do some followups.

A couple years ago, I wrote a blog post on greenlets, threads, and processes. At that time, it was already possible to write things in terms of explicit coroutines (Greg Ewing's original yield from proposal already had a coroutine scheduler as an example, Twisted already had @inlineCallbacks, and asyncio had even been added to the stdlib), but it wasn't in heavy use yet. Things have changed since then, especially with the addition of the async and await keywords to the language (and the popularity of similar constructs in a wide variety of other languages). So, it's time to take a look back (and ahead).

Differences

Automatic waiting

Greenlets are essentially the same thing as coroutines, but greenlet libraries like gevent are not the same as coroutine libraries like asyncio. The key difference is that greenlet libraries do the switching magically, while coroutine libraries make you ask for it explicitly.

For example, with gevent, if you want to yield until a socket is ready to read from and then read from the socket when waking up, you write this:
    buf = sock.recv(4096)
To do the same thing with asyncio, you write this:
    buf = await loop.sock_recv(sock, 4096)
Forget (for now) the difference in whether recv is a socket method or a function that takes a socket; the key difference is that await. In asyncio, any time you're going to wait for a value, yielding the processor to other coroutines until you're ready to run, you always do this explicitly, with await. In gevent, you just call one of the functions that automatically does the waiting for you.

In practice, while marking waits explicitly is a little harder to write (especially during quick and dirty prototyping), it seems to be harder to get wrong, and a whole lot easier to debug. And the more complicated things get, the more important this is.

If you miss an await, or try to use one in a non-async function, your code will usually fail hard with an obvious error message, rather than silently doing something undesirable.
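As a minimal sketch of that failure mode (get_value and main are invented for illustration, not from any real codebase):

```python
import asyncio

async def get_value():
    await asyncio.sleep(0)
    return 42

async def main():
    result = get_value()  # BUG: missing await
    # result is a coroutine object, not 42; any attempt to use it as a
    # number fails hard, and Python warns "coroutine ... was never awaited"
    print(type(result).__name__)  # coroutine
    return await get_value()      # the correct, explicit wait

value = asyncio.run(main())
print(value)  # 42
```

The buggy call doesn't limp along with a wrong value; it hands you an obviously wrong type and a runtime warning pointing at the exact coroutine you forgot.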

Meanwhile, let's say you're using some shared container, and you've got a race on it, or a lock that's being held too long. It's dead simple to tell at a glance whether you have an await between a read and a write to that container, while with automatic waiting, you have to read every line carefully. Being able to follow control flow at a glance is really one of the main reasons people use Python in the first place, and await extends that ability to concurrent code.
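A minimal sketch of that at-a-glance property (the counter and task names are made up for illustration): with coroutines, a read-modify-write on shared state is race-free as long as there is no await between the read and the write, and the yield points are visible in the source.

```python
import asyncio

counter = 0

async def add_many(n):
    global counter
    for _ in range(n):
        # No await between the read and the write of counter, so no other
        # coroutine can run in the middle: this increment cannot race.
        counter += 1
        await asyncio.sleep(0)  # the only yield point, visible at a glance

async def main():
    await asyncio.gather(add_many(1000), add_many(1000))

asyncio.run(main())
print(counter)  # 2000
```

With threads (or with magic greenlet switching), you'd have to reason about every line of the loop body; here, one glance at where the await is settles it.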

Serial-style APIs

Now it's time to come back to the difference between sock.recv and sock_recv(sock). The asyncio library doesn't expose a socket API; it exposes an API that looks sort of similar to the socket API. And, if you look around other languages and frameworks, from JavaScript to C#, you'll see the same thing.

It's hard to argue that the traditional socket API is in any objective sense better, but if you've been doing socket programming for a decade or four, it's certainly more familiar. And there's a lot more language-agnostic documentation on how it works, both tutorial and reference (e.g., if you need to look up the different quirks of a function on Linux vs. *BSD, the closer you are to the core syscall, the easier it will be to find and understand the docs).

In practice, however, the vast majority of code in a nontrivial server is going to work at a higher level of abstraction. Most often, that abstraction will be Streams or Protocols or something similar, and you'll never even see the sockets. If not, you'll probably be building your own abstraction, and only the code on the inside—a tiny fraction of your overall code—will ever see the sockets.
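For instance, here is a hedged sketch using asyncio's Streams layer; the echo-style handler and ephemeral port are invented for illustration, but note that no socket object ever appears in user code:

```python
import asyncio

async def handle(reader, writer):
    # Streams abstraction: no raw sockets anywhere in sight.
    data = await reader.readline()
    writer.write(data.upper())
    await writer.drain()
    writer.close()

async def main():
    # Port 0 asks the OS for an ephemeral port.
    server = await asyncio.start_server(handle, "127.0.0.1", 0)
    port = server.sockets[0].getsockname()[1]
    reader, writer = await asyncio.open_connection("127.0.0.1", port)
    writer.write(b"hello\n")
    await writer.drain()
    reply = await reader.readline()
    writer.close()
    server.close()
    await server.wait_closed()
    return reply

reply = asyncio.run(main())
print(reply)  # b'HELLO\n'
```

The only place sockets surface at all is the server bookkeeping; everything the application cares about is reads and writes on streams.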

One case where using the serial-style APIs really does help, however, is when you've got a mess of already-written code that's either non-concurrent or using threads or processes, and you want to convert it to use coroutines. Rewriting all that code around asyncio (no matter which level you choose) is probably a non-trivial project; rewriting it around gevent, you just import all the monkeypatches and you're 90% done. (You still need to scan your code, and test the hell out of it, to make sure you're not doing anything that will break or become badly non-optimal, of course, but you don't need to rewrite everything.)

Conclusion

If I were writing the same blog post today, I wouldn't recommend magic greenlets for most massively-concurrent systems; I'd recommend explicit coroutines instead.

There is still a place for gevent. But that place is largely in migrating existing threading-based (or non-concurrent) codebases. If you (and your intended collaborators) are familiar enough with threading and traditional APIs, it may still be worth considering for simpler systems. But otherwise, I'd strongly consider asyncio (or some other explicit coroutine framework) instead.

It's been more than a decade since Typical Programmer Greg Jorgensen taught the world about Abject-Oriented Programming.

Much of what he said still applies, but other things have changed. Languages in the Abject-Oriented space have been borrowing ideas from another paradigm entirely—and then everyone realized that languages like Python, Ruby, and JavaScript had been doing it for years and just hadn't noticed (because these languages do not require you to declare what you're doing, or even to know what you're doing). Meanwhile, new hybrid languages borrow freely from both paradigms.

This other paradigm—which is actually older, but was largely constrained to university basements until recent years—is called Functional Addiction.


Looking before you leap

Python is a duck-typed language, and one where you usually trust EAFP ("Easier to Ask Forgiveness than Permission") over LBYL ("Look Before You Leap"). In Java or C#, you need "interfaces" all over the place; you can't pass something to a function unless it's an instance of a type that implements that interface; in Python, as long as your object has the methods and other attributes that the function needs, no matter what type it is, everything is good.
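A minimal sketch of that duck-typed style (the Duck and Robot classes are invented for illustration):

```python
class Duck:
    def quack(self):
        return "quack"

class Robot:  # a completely unrelated type, no shared base class
    def quack(self):
        return "beep"

def make_noise(obj):
    # No interface declaration needed: anything with a quack() method works.
    return obj.quack()

print(make_noise(Duck()), make_noise(Robot()))  # quack beep
```

In Java or C#, Duck and Robot would both have to declare that they implement some Quacker interface before make_noise could accept them; in Python, having the method is enough.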

Background

Currently, CPython’s internal bytecode format stores instructions with no args as 1 byte, instructions with small args as 3 bytes, and instructions with large args as 6 bytes (actually, a 3-byte EXTENDED_ARG followed by a 3-byte real instruction). While bytecode is implementation-specific, many other implementations (PyPy, MicroPython, …) use CPython’s bytecode format, or variations on it.

Python exposes as much of this as possible to user code.
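As a quick illustration, the stdlib dis module and the code object attributes are the main ways that exposure happens (the function f here is just a trivial example):

```python
import dis

def f(x):
    return x + 1

# dis renders a human-readable disassembly of f's bytecode...
dis.dis(f)

# ...and the raw instruction bytes are right there on the code object.
print(f.__code__.co_code)
```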

If you want to skip all the tl;dr and cut to the chase, jump to Concrete Proposal.

Why can’t we write list.len()?
Dunder methods
C++
Python
Locals
What raises on failure?
Method objects
What about set and delete?
Data members
Namespaces
Bytecode details
Lookup overrides
Introspection
C API
Concrete proposal
CPython
Analysis

Why can’t we write list.len()?

Python is an OO language. To reverse a list, you call lst.reverse(); to search a list for an element, you call lst.index().
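For example, with the length lookup added for contrast (the list contents are arbitrary):

```python
lst = [3, 1, 2]
lst.reverse()        # mutation is spelled as a method, not reverse(lst)
print(lst)           # [2, 1, 3]
print(lst.index(3))  # searching is a method too
print(len(lst))      # ...but length is the builtin len(), not lst.len()
```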

Many people, when they first discover the heapq module, have two questions:

Why does it define a bunch of functions instead of a container type?
Why don't those functions take a key or reverse parameter, like all the other sorting-related stuff in Python?

Why not a type?

At the abstract level, it's often easier to think of heaps as an algorithm rather than a data structure.
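A minimal sketch of that functions-on-a-plain-list design:

```python
import heapq

# heapq's functions operate on an ordinary list, maintaining the heap
# invariant in place rather than wrapping it in a container type.
heap = []
for value in [5, 1, 4, 2]:
    heapq.heappush(heap, value)

first = heapq.heappop(heap)
second = heapq.heappop(heap)
print(first, second)  # 1 2 -- smallest elements come out first
```

The list stays a list throughout; the heap property is something the algorithm imposes on it, not something a type enforces.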

Currently, in CPython, if you want to process bytecode, either in C or in Python, it’s pretty complicated.

The built-in peephole optimizer has to do extra work fixing up jump targets and the line-number table, and just punts on many cases because they’re too hard to deal with. PEP 511 proposes a mechanism for registering third-party (or possibly stdlib) optimizers, and they’ll all have to do the same kind of work.
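Even just reading bytecode, before any rewriting, takes library support; dis.get_instructions is the stdlib's way to iterate over it (the function f is a throwaway example with a jump in it):

```python
import dis

def f(x):
    if x:          # compiles to a conditional jump instruction
        return 1
    return 2

# Walking the instructions is the easy part; any rewrite on top of this
# still has to fix up jump targets and the line-number table by hand.
for ins in dis.get_instructions(f):
    print(ins.offset, ins.opname)
```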

One common "advanced question" on places like StackOverflow and python-list is "how do I dynamically create a function/method/class/whatever"? The standard answer is: first, some caveats about why you probably don't want to do that, and then an explanation of the various ways to do it when you really do need to.

But really, creating functions, methods, classes, etc. in Python is already dynamic.

Some cases of "I need a dynamic function" are just "Yeah? And you've already got one".
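A minimal sketch of that point: def is an executable statement, run when control reaches it, so the ordinary closure factory already is dynamic function creation.

```python
def make_adder(n):
    def add(x):
        return x + n  # closes over n, which is chosen at runtime
    return add

add5 = make_adder(5)
print(add5(3))  # 8
```

No exec, no types.FunctionType, no metaprogramming machinery; each call to make_adder creates a genuinely new function object.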

A few years ago, Cesare di Mauro created a project called WPython, a fork of CPython 2.6.4 that “brings many optimizations and refactorings”. The starting point of the project was replacing the bytecode with “wordcode”. However, there were a number of other changes on top of it.

I believe it’s possible that replacing the bytecode with wordcode would be useful on its own.
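For context, CPython itself eventually adopted exactly this kind of format in 3.6: every instruction is two bytes, one opcode byte and one arg byte, which is easy to verify (f is just a trivial example function):

```python
def f(x):
    return x + 1

# On CPython 3.6+, bytecode is wordcode: fixed 2-byte instructions,
# so the raw code string is always an even number of bytes long.
code = f.__code__.co_code
print(len(code), len(code) % 2)
```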

Many languages have a for-each loop. In some, like Python, it’s the only kind of for loop:

    for i in range(10):
        print(i)

In most languages, the loop variable is only in scope within the code controlled by the for loop,[1] except in languages that don’t have granular scopes at all, like Python.[2]

So, is that i a variable that gets updated each time through the loop or is it a new constant that gets defined each time through the loop?

Almost every language treats it as a reused variable.
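In Python, closures make the "one reused variable" choice visible, since every closure captures the variable itself rather than the value it held that iteration (the default-argument trick in the second list is the standard workaround):

```python
# Every lambda captures the same variable i, so all see its final value.
reused = [lambda: i for i in range(3)]
print([f() for f in reused])  # [2, 2, 2]

# Binding a default argument snapshots each iteration's current value.
fresh = [lambda i=i: i for i in range(3)]
print([f() for f in fresh])  # [0, 1, 2]
```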