When asyncio was first proposed, many people (not so much on python-ideas, where Guido first suggested it, but on external blogs) had the same reaction: Doing the core reactor loop in Python is going to be way too slow. Something based on libev, like gevent, is inherently going to be much faster.

Then, when Python 3.4 came out, people started benchmarking it. For example, this benchmark shows that handling 100000 redis connections, 50 at a time, sending 3-byte messages, takes 3.12 seconds in asyncio vs. 4.55 in gevent.

And yet, I still see people refusing to believe those benchmarks, or even the benchmarks they run themselves in an attempt to disprove them.

So, why isn't it slow? Isn't Python really slow at looping? Isn't libev a tightly-optimized reactor core?

The problem is that, as is so common, people are focusing on optimizing the wrong part of the code.

Under the covers, gevent and asyncio do similar things:
  • Call a function like epoll or kqueue to wait on a whole mess of sockets and see which ones are ready for input or output.
  • For each of those sockets:
    • Read the data.
    • Look up a coroutine in some kind of map.
    • Resume that coroutine.
    • Execute the actual Python code in that coroutine up to the next yield from/implicit suspend point.
Now, which parts of that would you expect libev to be able to optimize? Iterating over hundreds of sockets may be orders of magnitude slower in Python, but it's such a tiny percentage of the total time spent that it can't possibly matter. Remember, for each socket in that loop, you're going to make an I/O-bound syscall, do a context switch, and then interpret a bunch of Python code. It's going to be one of those things that takes time, and there's no way to optimize any of them by rewriting some completely irrelevant piece of code in (even tightly optimized) C.
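The steps above can be sketched in a few lines. This is a toy illustration (not asyncio's or gevent's actual code), using the stdlib selectors module and plain generators standing in for coroutines:

```python
import selectors

# A toy reactor: wait on sockets, look up each ready socket's
# coroutine, and resume it with the data just read.
def reactor(coros_by_sock):
    sel = selectors.DefaultSelector()
    for sock, coro in coros_by_sock.items():
        sock.setblocking(False)
        sel.register(sock, selectors.EVENT_READ, coro)
        next(coro)  # advance the coroutine to its first suspend point
    while coros_by_sock:
        for key, _ in sel.select():          # epoll/kqueue under the hood
            data = key.fileobj.recv(4096)    # read the data
            coro = key.data                  # look up the coroutine
            try:
                coro.send(data)              # run Python code to next yield
            except StopIteration:            # coroutine finished
                sel.unregister(key.fileobj)
                del coros_by_sock[key.fileobj]
```

Notice that the pure-Python bookkeeping here (the dict lookup, the loop) is dwarfed by the `select()` syscall, the `recv()` syscall, and whatever Python code runs inside each coroutine.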

From my own unscientific tests, using Fantix's work-in-progress port of gevent to Python 3 vs. asyncio, there's rarely a significant difference in either direction. I more often see a difference between Python 2.7 and 3.4. If I need to deal with lots of Unicode text, 3.4 is much faster; if I don't actually need to deal with it as Unicode but 3.x makes it hard for me to avoid doing so, 2.7 is much faster; if neither of those is relevant, 3.4 is sometimes significantly faster but sometimes not noticeably so.

There are of course sometimes good reasons to use gevent instead of asyncio. If you've got a bunch of threaded code that you want to convert over to using coroutines, for example, gevent makes it trivial. But using gevent because you're absolutely sure asyncio must be slow, on the grounds that Python code is slow, is just silly.

It's been more than a decade since Typical Programmer Greg Jorgensen taught the world about Abject-Oriented Programming.

Much of what he said still applies, but other things have changed. Languages in the Abject-Oriented space have been borrowing ideas from another paradigm entirely—and then everyone realized that languages like Python, Ruby, and JavaScript had been doing it for years and just hadn't noticed (because these languages do not require you to declare what you're doing, or even to know what you're doing). Meanwhile, new hybrid languages borrow freely from both paradigms.

This other paradigm—which is actually older, but was largely constrained to university basements until recent years—is called Functional Addiction.

I haven't posted anything new in a couple years (partly because I attempted to move to a different blogging platform where I could write everything in Markdown instead of HTML, but got frustrated; I may attempt that again), but I've had a few private comments and emails on some of the old posts, so I decided to do some followups.

A couple years ago, I wrote a blog post on greenlets, threads, and processes.

Looking before you leap

Python is a duck-typed language, and one where you usually trust EAFP ("Easier to Ask Forgiveness than Permission") over LBYL ("Look Before You Leap"). In Java or C#, you need "interfaces" all over the place; you can't pass something to a function unless it's an instance of a type that implements that interface; in Python, as long as your object has the methods and other attributes that the function needs, no matter what type it is, everything is good.
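A small sketch of that contrast, using duck typing plus EAFP: any object with a `read()` method works, with no interface declaration anywhere, and we ask forgiveness rather than checking first (the `first_line` helper is hypothetical, just for illustration):

```python
import io

# Duck typing with EAFP: try the operation, handle the failure.
def first_line(f):
    try:
        return f.read().splitlines()[0]
    except AttributeError:
        raise TypeError("need a file-like object with read()")

# Any object with a read() method is acceptable, no interface needed.
assert first_line(io.StringIO("hello\nworld")) == "hello"
```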

Background

Currently, CPython’s internal bytecode format stores instructions with no args as 1 byte, instructions with small args as 3 bytes, and instructions with large args as 6 bytes (actually, a 3-byte EXTENDED_ARG followed by a 3-byte real instruction). While bytecode is implementation-specific, many other implementations (PyPy, MicroPython, …) use CPython’s bytecode format, or variations on it.

Python exposes as much of this as possible to user code.
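A decoder for that 1/3/6-byte format fits in a few lines. This is a sketch, not CPython's actual code, and the opcode values hardcoded here are from CPython 3.5 (they vary by version):

```python
# Decoding the pre-3.6 format: 1 byte for argless ops, 3 bytes
# (opcode + 16-bit little-endian arg) for ops with args, with
# EXTENDED_ARG supplying the high 16 bits of the following arg.
HAVE_ARGUMENT = 90    # opcodes >= 90 take an argument (CPython 3.5)
EXTENDED_ARG = 144    # CPython 3.5's value; differs in other versions

def decode(code):
    """Yield (offset, opcode, arg) from old-style bytecode."""
    i, ext = 0, 0
    while i < len(code):
        start, op = i, code[i]
        i += 1
        if op < HAVE_ARGUMENT:
            yield (start, op, None)
            continue
        arg = code[i] | (code[i + 1] << 8) | (ext << 16)
        i += 2
        if op == EXTENDED_ARG:
            ext = arg          # stash the high bits for the next op
        else:
            ext = 0
            yield (start, op, arg)
```

For example, the two instructions `LOAD_CONST 1; RETURN_VALUE` (opcodes 100 and 83 in 3.5) occupy four bytes: `100 1 0 83`.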

If you want to skip all the tl;dr and cut to the chase, jump to Concrete Proposal.

  • Why can’t we write list.len()?
  • Dunder methods
  • C++
  • Python
  • Locals
  • What raises on failure?
  • Method objects
  • What about set and delete?
  • Data members
  • Namespaces
  • Bytecode details
  • Lookup overrides
  • Introspection
  • C API
  • Concrete proposal
  • CPython
  • Analysis

Why can’t we write list.len()?

Python is an OO language. To reverse a list, you call lst.reverse(); to search a list for an element, you call lst.index().
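The asymmetry in question, in two lines of interaction (with `list.len()` being the method that does not exist):

```python
# Some list operations are methods...
lst = [3, 1, 2]
lst.reverse()
assert lst == [2, 1, 3]
assert lst.index(3) == 2

# ...but length is a builtin function, not a method.
assert len(lst) == 3      # len(lst), not lst.len()
```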

Many people, when they first discover the heapq module, have two questions:

  • Why does it define a bunch of functions instead of a container type?
  • Why don't those functions take a key or reverse parameter, like all the other sorting-related stuff in Python?

Why not a type?

At the abstract level, it's often easier to think of heaps as an algorithm rather than a data structure.
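The function-based design is visible in how heapq is actually used: the functions operate on a plain list, and the usual workaround for the missing `key=` parameter is to store `(key, item)` pairs yourself:

```python
import heapq

# heapq works on an ordinary list; there's no Heap container type.
tasks = []
heapq.heappush(tasks, (2, "write tests"))   # (priority, item) pairs
heapq.heappush(tasks, (1, "fix bug"))       # stand in for key=
heapq.heappush(tasks, (3, "deploy"))

# The smallest pair (lowest priority number) comes out first.
assert heapq.heappop(tasks) == (1, "fix bug")
```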

Currently, in CPython, if you want to process bytecode, either in C or in Python, it’s pretty complicated.

The built-in peephole optimizer has to do extra work fixing up jump targets and the line-number table, and just punts on many cases because they’re too hard to deal with. PEP 511 proposes a mechanism for registering third-party (or possibly stdlib) optimizers, and they’ll all have to do the same kind of work.

One common "advanced question" on places like StackOverflow and python-list is "how do I dynamically create a function/method/class/whatever"? The standard answer is: first, some caveats about why you probably don't want to do that, and then an explanation of the various ways to do it when you really do need to.

But really, creating functions, methods, classes, etc. in Python is always already dynamic.

Some cases of "I need a dynamic function" are just "Yeah? And you've already got one".
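A minimal illustration of that point: `def` is an executable statement, run every time it's reached, so the ordinary closure idiom is already "dynamic function creation" (`make_adder` here is a made-up example name):

```python
# def executes at runtime: each call creates a brand-new function
# object closing over a different n.
def make_adder(n):
    def adder(x):
        return x + n
    return adder

add5 = make_adder(5)
assert add5(3) == 8
assert make_adder(1) is not make_adder(1)  # genuinely created each time
```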

A few years ago, Cesare di Mauro created a project called WPython, a fork of CPython 2.6.4 that “brings many optimizations and refactorings”. The starting point of the project was replacing the bytecode with “wordcode”. However, there were a number of other changes on top of it.

I believe it’s possible that replacing the bytecode with wordcode would be useful on its own.

Many languages have a for-each loop. In some, like Python, it’s the only kind of for loop:

    for i in range(10):
        print(i)

In most languages, the loop variable is only in scope within the code controlled by the for loop,[1] except in languages that don’t have granular scopes at all, like Python.[2]

So, is that i a variable that gets updated each time through the loop or is it a new constant that gets defined each time through the loop?

Almost every language treats it as a reused variable.
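In Python, the observable consequence of "one reused variable" is the classic closure gotcha, along with the usual default-argument workaround:

```python
# Closures created in the loop all close over the same variable,
# so they all see its final value.
funcs = [lambda: i for i in range(3)]
assert [f() for f in funcs] == [2, 2, 2]

# Capturing the current value via a default argument simulates a
# "new binding each iteration" instead.
funcs = [lambda i=i: i for i in range(3)]
assert [f() for f in funcs] == [0, 1, 2]
```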