In Haskell, you can section infix operators. This is a simple form of partial application. Using Python syntax, the following are equivalent:
    (2*)
    lambda x: 2*x

    (*2)
    lambda x: x*2

    (*)
    lambda x, y: x*y
So, can we do the same in Python?

Grammar

The first form, (2*), is unambiguous: there is no place in Python where an operator can legally be followed by a close-paren. That works for every binary operator, including the boolean and comparison operators. But the grammar is a bit tricky. Without lookahead, how do you make sure that a '(' followed by an expr followed by a binary operator followed by a ')' doesn't start parsing as just a parenthesized expression? I couldn't find a way to make this unambiguous without manually hacking up ast.c. (If you're willing to do that, it's not hard, but you shouldn't be willing to do that.)

The second form, (*2), looks a lot simpler--no lookahead problems. But consider (-2). That's already legal Python! So, does that mean you can't section the + and - operators?

The third form, (*), is the simplest. But it's really tempting to want to be able to do the same with unary operators. Why shouldn't I be able to pass (~) or (not) around as a function instead of having to use operator.invert and operator.not_? And of course that brings us right back to the problem with + and - being ambiguous. (Plus, it makes the compile step a little harder. But that's not a huge deal.)
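
For reference, here's the spelling those unary sections would be replacing, using the existing operator module:
    >>> from operator import not_, invert
    >>> list(map(not_, [True, False, 0]))
    [False, True, True]
    >>> list(map(invert, [0, 1, 2]))
    [-1, -2, -3]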

I solved these problems using a horrible hack: sectioned operators are enclosed in parens and colons. This looks hideous, but it did let me get things building so I could play with the idea. Now there's no lookahead needed—a colon inside parens isn't valid for anything else (unless you want to be compatible with my bare lambda hack…). And to resolve the +/- issue, only the binary operators can be sectioned, which also means (: -3* :) is a SyntaxError instead of meaning lambda x: -3 * x. Ick. But, again, it's good enough to play with.

The key grammar change looks like this:
    atom: ('(' [yield_expr|testlist_comp] ')' |
           '(' ':' sectionable_unop ':' ')' |
           '(' ':' sectionable_binop ':' ')' |
           '(' ':' expr sectionable_binop ':' ')' |
           '(' ':' sectionable_binop expr ':' ')' |
           '[' [testlist_comp] ']' |
           '{' [dictorsetmaker] '}' |
           NAME | NUMBER | STRING+ | '...' | 'None' | 'True' | 'False')
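
Spelled out, the four new productions are meant to read like this (unary + and - are off the table, so ~ and * stand in here):
    (: ~ :)       # unary section:          lambda x: ~x
    (: * :)       # bare binary section:    lambda x, y: x * y
    (: 2* :)      # left operand captured:  lambda x: 2 * x
    (: *2 :)      # right operand captured: lambda x: x * 2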

What about precedence?

Ignored. It only matters if you want to be able to section expressions made up of multiple operators, like (2+3*), and I don't think you do. For non-trivial cases, there are no readability gains from operator sectioning, and having to think about precedence might actually be a readability cost. If you still don't want to use lambda, do what you'd do in Haskell and compose (2+) with (3*).
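
With a trivial compose helper (not in the stdlib), that would look something like this under the colon hack:
    def compose(f, g):
        return lambda x: f(g(x))

    compose((: 2+ :), (: 3* :))   # roughly lambda x: 2 + 3*x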

AST

For the AST, each of those four productions creates a different node type. Except that you _also_ need separate node types for normal binary operators, comparison operators, and boolean operators, because they use different enums for their operators. So I ended up with 10 new types: UnOpSect, BinOpSect, BinOpSectLeft, BinOpSectRight, CmpOpSect, and so on. There's probably a better way to do this.

Symbol table

How do you deal with an anonymous argument in the symbol table for the function we're going to generate? You don't want to have to create a whole args structure just to insert a name so you can refer to it in the compiler. Plus, whatever name you pick could collide with a name in the parent scope, hiding it from a lambda or a comprehension defined inside the expr. (Why would you ever do that? Who knows, but it's legal.)

This problem must have already been solved. After all, generator expressions have created hidden functions that don't collide with any names in the outer scope ever since they were introduced, and in 3.x all comprehensions do the same. It's a little tricky to actually get at these hidden functions, but here's one way to do it:
    >>> def f(): (i for i in [])
    >>> f.__code__.co_consts
    (None, <code object <genexpr> at 0x10bc57a50, file "<stdin>", line 1>, 'f.<locals>.<genexpr>')
    >>> f.__code__.co_consts[1].co_varnames
    ('.0', 'i')
So, the parameter is named .0, which isn't legal in a def or lambda and can't be referenced from user code. Clever. And once you dig into symtable.c, you can see that this is handled in a function named symtable_implicit_arg. So:
    /* visit the captured operand (note: we're still in the enclosing block here) */
    VISIT(st, expr, e->v.BinOpSectLeft.right);
    if (!symtable_enter_block(st, binopsect,
                              FunctionBlock, (void *)e, e->lineno,
                              e->col_offset))
        VISIT_QUIT(st, 0);
    /* add the hidden, unnameable .0 parameter to the new block */
    if (!symtable_implicit_arg(st, 0))
        VISIT_QUIT(st, 0);
    if (!symtable_exit_block(st, (void *)e))
        VISIT_QUIT(st, 0);

Compiler

Compilation works much the same as for lambda. Other than sprintf'ing up a nice name instead of just <lambda>, and the fact that everything is simpler when there's exactly one argument with no defaults and no keywords, everything is the same except the body, which looks like this:
    ADDOP_I_IN_SCOPE(c, LOAD_FAST, 0);                    /* push the implicit .0 argument */
    VISIT_IN_SCOPE(c, expr, e->v.BinOpSectLeft.right);    /* push the captured operand */
    ADDOP_IN_SCOPE(c, binop(c, e->v.BinOpSectLeft.op));   /* emit the matching BINARY_* op */
    ADDOP_IN_SCOPE(c, RETURN_VALUE);
    co = assemble(c, 1);
I did have to create that ADDOP_I_IN_SCOPE macro, but that's trivial.
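
For comparison, here's what the equivalent lambda disassembles to on a wordcode-era 3.x (exact offsets and opcode names vary a bit across versions); apart from the argument name, the sectioned function's body should assemble to the same thing:
    >>> import dis
    >>> dis.dis(lambda x: x * 2)
      1           0 LOAD_FAST                0 (x)
                  2 LOAD_CONST               1 (2)
                  4 BINARY_MULTIPLY
                  6 RETURN_VALUE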

Does it work?

    >>> (: *2 :)
    <function <2> at 0x10bc9f048>
    >>> (: *2 :).__code__.co_varnames
    ('.0',)
    >>> (: *2 :)(23)
    46
As you can see, I screwed up the name a bit.

More importantly, I screwed up nonlocal references in the symtable. I think I need to visit the argument? Anyway, what happens is this:
    >>> a = 23
    >>> (: *a :)(23)
    Traceback (most recent call last):
      File "<stdin>", line 1, in <module>
      File "<stdin>", line 1, in <a>
    SystemError: no locals when loading 'a'
But that's much better than the segfault I expected. :)

Is it useful?

Really, most of the obvious use cases for this are already handled by bound methods, like spam.__add__ instead of (spam+), and the operator module, like operator.add instead of (+). Is that perfect? No:
  • spam.__add__ isn't as flexible as (spam+), because the latter will automatically handle calling its argument's __radd__ when appropriate.
  • Often, you want to section with literals, especially integers. But 0.__add__ is a SyntaxError, because the tokenizer greedily reads 0. as the start of a float literal and then chokes on __add__, so you need 0 .__add__ or (0).__add__.
  • For right-sectioning, spam.__radd__ to mean (+spam) isn't so bad, but spam.__gt__ to mean (<spam) is a bit less readable.
Still, it's hard to find a non-toy example where (<0) is all that useful. In most examples I look at, what I really want is something like lambda x: x.attr < 0. In Haskell, I'd probably write that as the rough equivalent of composing operator.attrgetter('attr') with (<0). But even if you pretend that attribute access is an operator (it isn't) and add sectioning syntax for it, and you use the @ operator for compose (as was proposed and rejected at least twice during the PEP 465 process and at least once since…), the best you can get is (<0) @ (.attr), which still doesn't look nearly as readable to me in Python as the lambda.
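
To make that concrete, here's the lambda next to the closest point-free spelling stock Python gives you, with the same kind of compose helper as above:
    from operator import attrgetter

    def compose(f, g):
        return lambda x: f(g(x))

    # what I actually want
    check = lambda x: x.attr < 0

    # (<0) composed with (.attr); (0).__gt__(y) means 0 > y, i.e. y < 0,
    # as long as 0 knows how to compare with whatever x.attr is
    check_pointfree = compose((0).__gt__, attrgetter('attr'))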

And, without a compelling use case, I'm not sure it's worth spending more time debugging this, or trying to think of a clever way to make it work without the colons and without lookahead, or coming up with a disambiguating rule for +/-. (It's obviously never going to make it into core…)

Anything else worth learning here?

When I was having problems getting the symbol table set up (which I still didn't get right…), I realized there's another way to tackle this: just stop at the AST, which is the easy part. The result, when run normally, is that any operator-sectioning expression resolves to an empty tuple, which doesn't seem all that useful… but you've got an AST node that you can transform with, say, MacroPy. And converting the meaningless AST node into a valid lambda node in Python is a lot easier than building the symbol table and bytecode in C. Plus, you don't have to rebuild Python every time you make a change.
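
Here's a rough sketch of what that transformation might look like, assuming the hacked parser exposes the new node as ast.BinOpSectLeft with op and right fields (hypothetical names, matching the ones above):
    import ast

    class SectionToLambda(ast.NodeTransformer):
        """Rewrite a (hypothetical) BinOpSectLeft node into an ordinary lambda."""
        def visit_BinOpSectLeft(self, node):
            self.generic_visit(node)
            # Build "lambda _x: _x <op> <right>" by splicing into a parsed template.
            # Unlike the real .0 trick, '_x' is an ordinary name, so it could shadow
            # something in a nested scope--good enough for playing around.
            template = ast.parse('lambda _x: _x * None', mode='eval').body
            template.body.op = node.op
            template.body.right = node.right
            return ast.copy_location(template, node)
Run that over the parsed module and hand the result to compile(), and the do-nothing node becomes an ordinary lambda.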

I don't think this is an argument for adding do-nothing AST structures to the core, of course… but as a strategy for hacking on Python, I may start with that next time around.