Every time someone has a good idea, they believe it should be in the stdlib. After all, it's useful to many people, and what's the harm? But of course there is a harm.

The Python community has evolved a rough set of unwritten guidelines for making that decision, but, being unwritten, they're hard for an outsider or newcomer to determine. On top of that, those guidelines have changed dramatically over the past couple of years.

Of course my personal take on those guidelines is only that—the personal take of one outsider who's tried to get a sense of what proposals tend to make it in, and why. But maybe it'll still be useful.

Recent changes to be aware of

Almost everyone has pip. (At least almost everyone who has any chance of using the new version of Python that would include your proposed change.)

Almost every package is installable with pip.

Almost every package that requires compilation can be distributed with binary wheels, so Windows users no longer need a compiler to use C extension modules. (And even if you don't build them yourself, Christoph Gohlke probably will if your package is popular enough to be worth considering for the stdlib.)

At least some people consider it reasonable to link to external packages from the core Python docs. (This did happen a few times in the past, but very rarely.)

There's a conscious effort to evangelize PyPI.

These changes are very recent; the average core dev probably has 10-15x more experience with the old days. And people who are closely involved in the PyPA work or improving linux distributions may see these changes as bigger than people closely involved in maintaining long-term-release distributions or teaching students, so it's important to realize where different people in the community are coming from.

Meanwhile, this means "eggslib is clearly more appropriate than spamlib for the stdlib, and spamlib is there, so…" is an even worse argument than usual, unless spamlib was added in 3.4 or later.

Benefits of adding to the stdlib

Unless your application area is broad enough that everyone knows it (as with requests) or niche enough that it has a specialized community of its own separate from the Python community (as with numpy), being the best-known third-party package is still not the same as being part of the "batteries included" with Python.

Even when it is, there are some developers who can't just install whatever they want off the internet, especially in government agencies and other large organizations, and students are often told they can't use anything that doesn't come with Python for assignments or projects.

So, if the "one obvious way to do it" is to use some module, the benefit is that everyone will be able to do it the one obvious way. (But remember that, especially given the recent changes, "everyone" may mean, say, 99.5% vs. 99.1% of the people who need it, not 100% vs. 30%.)

Costs of adding to the stdlib

Schedule: New versions of Python come out about every 18 months. Bug fixes can go into dot releases, which come out a bit more often. But many third-party packages are—and need to be—updated much more frequently.

Dependencies: Everything in the stdlib has to depend on the stdlib only. This seems obvious, but it clearly escapes everyone who suggests adding requests but doesn't want to add its three dependencies, or who suggests adding lxml but doesn't want to make libxml2 a build and possibly even runtime dependency for Python.

(Also, combining the two points, note that a stdlib lxml would provide the functionality of the libxml2 that Python was built against, even if the system has a newer libxml2—and many linux systems will. This has been an issue for sqlite3 in the past.)
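You can see this kind of version pinning directly with the sqlite3 module, which reports the version of the SQLite library it's actually linked against, regardless of what else is installed on the system:

```python
import sqlite3

# The version of the SQLite library this Python's sqlite3 module was
# linked against -- not necessarily the newest one on the system.
print(sqlite3.sqlite_version)       # e.g. "3.39.4"
print(sqlite3.sqlite_version_info)  # e.g. (3, 39, 4)
```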

Portability: Everything in the stdlib needs to work on all supported platforms unless there's a good reason not to (e.g., a module to handle sysconf variables that only worked on Unix platforms would be fine—but if it only worked on linux, not Mac and other BSDs, it probably wouldn't).

And it needs to work on all Python implementations. In particular, that means that if the code is written in C for speed, it has to be rewritten in Python, with an optional C accelerator to be included with CPython.
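PEP 399 codifies this convention: write the full module in pure Python, then overwrite its names with the C accelerator's at the bottom of the file, if one exists. A minimal sketch of the pattern (spamlib, _spamlib, and count_spam are hypothetical names invented for illustration):

```python
# spamlib.py -- hypothetical stdlib module

def count_spam(items):
    """Pure-Python reference implementation, usable by every Python
    implementation (PyPy, Jython, etc.)."""
    return sum(1 for item in items if item == "spam")

# If CPython ships a C accelerator module, it silently replaces the
# Python version above; everywhere else, the ImportError is ignored
# and the pure-Python code is used.
try:
    from _spamlib import count_spam  # noqa: F811
except ImportError:
    pass
```

This is how stdlib modules like heapq (with _heapq) and datetime (with _datetime) are actually structured.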

Licensing: The code has to be relicensable and reassignable. And the key developer has to be willing to do so. And in some cases, if he wasn't careful early on, he may need to be able to track down other contributors, too. And he needs to be sure that he didn't borrow some incompatible code (well, technically, he already needed to, but "some project on PyPI" is usually going to get less scrutiny than the main Python distribution).

Features: Often, stdlibizing a package means dropping non-core functionality, l10n, and other things that some users may be depending on. Or it may require changes to the API (e.g., to PEP8-ify it). That means a cost for people who want to move from your "pyspam" to the stdlib's "spam".
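One common way to soften that migration cost is to keep the old names around as deprecated aliases for a release or two. A sketch of the idea (Spam, openCan, and open_can are invented names, not from any real library):

```python
import warnings

class Spam:
    def open_can(self):
        """New, PEP 8-style name."""
        return "opened"

    def openCan(self):
        """Old pre-stdlib name, kept as a deprecated alias so existing
        code keeps working while users migrate."""
        warnings.warn("openCan() is deprecated; use open_can()",
                      DeprecationWarning, stacklevel=2)
        return self.open_can()
```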

Effort: Everything in the stdlib has to be maintained. Ideally this means the original developer is willing and able to commit to maintaining his code for a long time to come; if not, someone else credible has to volunteer. And often that person ends up maintaining not just "spam" in the 3.6+ stdlib, but also the classic "pyspam" and/or a "spam36" backport on PyPI as well.

There's a lot of work to be done just to get it into Python in the first place. Someone has to shepherd it through the process, write complete reference documentation in the same format as the stdlib, write unit tests (if existing tests are sufficient, there's a good chance they're written in a different—and friendlier or more expressive—framework than the one used in the stdlib, so they still have to be rewritten anyway), implement all the changes that came out of bikeshedding the proposal, and often backport the now-revised library to put back on PyPI.
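For example, a one-line pytest-style test has to become a unittest.TestCase to fit the stdlib's test suite; parse here is a stand-in for whatever function the library actually exports:

```python
import unittest

def parse(s):
    # Stand-in for the library function under test.
    return int(s)

# On PyPI, the test might have been a bare pytest function:
#     def test_parse():
#         assert parse("3") == 3
# For the stdlib, the same check gets rewritten in unittest style:
class TestParse(unittest.TestCase):
    def test_parse(self):
        self.assertEqual(parse("3"), 3)
```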

Embiggening: The more batteries are included, the bigger Python gets. This doesn't just mean a bigger download, more disk space for installs, and longer build and test runs (for both Python itself and for PyInstaller-style apps or Docker container-style distributions that bundle it). It also means more for anyone to hold in his head to really "get" Python. (Of course there are some third-party modules that already nearly count in the latter sense. Anyone who works in numeric or scientific computing who doesn't know numpy doesn't get Python in practice, even though it isn't in the stdlib…)

Stifling competition: Putting PySpam into the stdlib may stifle development of PyDeviledHam, or other projects that don't even exist yet.

This is usually the first one people think of, but it's often the least important. Usually, when something is worth even considering for the stdlib, there's already a single de facto standard on PyPI. And having urllib in the stdlib clearly hasn't stifled development of requests and other alternatives. And sometimes, there's really nothing worth competing over; what would you want from a JSON library that isn't in the stdlib's?

But there are definitely some cases of competing projects that overlap in functionality rather than duplicate it, or that have fundamental API differences or have fundamentally different performance characteristics (e.g., a sorted dict based on red-black trees is better than one that uses B-trees for some uses, worse for others).

Alternatives

Sometimes, a module as it exists today doesn't fit into the stdlib, but conceptually, a module that "did the same thing" would. People are constantly suggesting that requests should be in the stdlib. It shouldn't; it evolves rapidly, it has external dependencies, and some of its rarely-used features are at best questionable for standardization. But a module that provided the core API of requests without any of those problems isn't too hard to design, and if someone designed and built one and suggested it for stdlib inclusion, I suspect it would make it. Of course most potential "someone"s capable of doing that are people who can just use requests themselves for their own projects. (They also tend to overlap the people who need requests' less-common features, to make things worse.) Which is why it hasn't happened yet.
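To be concrete, such a module would expose a small requests-flavored API built only on the stdlib. This is a purely hypothetical sketch—get_json and its signature are invented for illustration, and real requests does far more (sessions, connection pooling, auth, streaming):

```python
import json
import urllib.request

def get_json(url, *, timeout=10, headers=None):
    """Fetch a URL and decode the body as JSON, using only the stdlib."""
    req = urllib.request.Request(url, headers=headers or {})
    with urllib.request.urlopen(req, timeout=timeout) as resp:
        return json.load(resp)
```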

Most desktop/server OS vendors other than Microsoft include Python, and some of them bless various extra modules, either by pre-installing them (e.g., PyObjC on OS X) or by having packages in some core and/or long-term-supported repo (e.g., python-rpm on some linux distros). Many packages that don't belong in the stdlib may be worth blessing in this way on some platform.

There are third-party "extra batteries" distributions of Python, both commercial and free. While numpy doesn't belong in the Python stdlib, it does belong in Enthought or Python(x,y).