In Haskell, you can section infix operators. This is a simple form of partial application. Using Python syntax, the following are equivalent:
    (2*)
    lambda x: 2*x

    (*2)
    lambda x: x*2

    (*)
    lambda x, y: x*y
So, can we do the same in Python?
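To be fair, Python can already spell left sections, just more verbosely, with functools.partial and the operator module (I'll come back to this at the end; right sections of non-commutative operators still need a lambda):
    >>> from functools import partial
    >>> import operator
    >>> double = partial(operator.mul, 2)  # roughly (2*)
    >>> double(21)
    42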

Grammar

The first form, (2*), is unambiguous. There is no place in Python where an operator can be legally followed by a close-paren. That works for every binary operator, including boolean and comparison operators. But the grammar is a bit tricky. Without lookahead, how do you make sure that a '(' followed by an expr followed by a binary operator followed by a ')' doesn't start parsing as just a parenthesized expression? I couldn't find a way to make this unambiguous without manually hacking up ast.c. (If you're willing to do that, it's not hard, but you shouldn't be willing to do that.)

The second form, (*2), looks a lot simpler--no lookahead problems. But consider (-2). That's already legal Python! So, does that mean you can't section the + and - operators?

The third form, (*), is the simplest. But it's really tempting to want to be able to do the same with unary operators. Why shouldn't I be able to pass (~) or (not) around as a function instead of having to use operator.not_ and operator.invert? And of course that brings us right back to the problem with + and - being ambiguous. (Plus, it makes the compile step a little harder. But that's not a huge deal.)

I solved these problems using a horrible hack: sectioned operators are enclosed in parens and colons. This looks hideous, but it did let me get things building so I could play with the idea. Now there's no lookahead needed—a colon inside parens isn't valid for anything else (unless you want to be compatible with my bare lambda hack…). And to resolve the +/- issue, only binary operators can be sectioned, which also means (: -3* :) is a SyntaxError instead of meaning lambda x: -3 * x. Ick. But, again, it's good enough to play with.

The key grammar change looks like this:
    atom: ('(' [yield_expr|testlist_comp] ')' |
           '(' ':' sectionable_unop ':' ')' |
           '(' ':' sectionable_binop ':' ')' |
           '(' ':' expr sectionable_binop ':' ')' |
           '(' ':' sectionable_binop expr ':' ')' |
           '[' [testlist_comp] ']' |
           '{' [dictorsetmaker] '}' |
           NAME | NUMBER | STRING+ | '...' | 'None' | 'True' | 'False')

What about precedence?

Ignored. It only matters if you want to be able to section expressions made up of multiple operators, like (2+3*). Which I don't think you do. For non-trivial cases, there are no readability gains for operator sectioning, and having to think about precedence might actually be a readability cost. If you still don't want to use lambda, do what you'd do in Haskell and compose (2+) with (3*).
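For example, here's the compose route in today's Python. (compose is a toy helper just for illustration; Python doesn't ship one.)
    >>> from functools import partial
    >>> import operator
    >>> def compose(f, g): return lambda x: f(g(x))
    >>> f = compose(partial(operator.add, 2), partial(operator.mul, 3))
    >>> f(4)  # lambda x: 2 + 3*x
    14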

AST

For the AST, each of those four productions creates a different node type. Except that you _also_ need separate node types for normal binary operators, comparison operators, and boolean operators, because they have different enums for their operators. So I ended up with 10 new types: UnOpSect, BinOpSect, BinOpSectLeft, BinOpSectRight, CmpOpSect, and so on. There's probably a better way to do this.

Symbol table

How do you deal with an anonymous argument in the symbol table for the function we're going to generate? You don't want to create a whole args structure just to insert a name so you can refer to it in the compiler. Plus, whatever name you pick could collide with a name in the parent scope, hiding it from a lambda or a comprehension defined inside the expr. (Why would you ever do that? Who knows, but it's legal.)

This problem must have already been solved. After all, ever since generator expressions were first added, they've created hidden functions that don't collide with any names in the outer scope, and in 3.x all comprehensions do that. It's a little tricky to actually get at these hidden functions, but here's one way to do it:
    >>> def f(): (i for i in [])
    >>> f.__code__.co_consts
    (None, <code object <genexpr> at 0x10bc57a50, file "<stdin>", line 1>, 'f.<locals>.<genexpr>')
    >>> f.__code__.co_consts[1].co_varnames
    ('.0', 'i')
So, the parameter is named .0, which isn't legal in a def or lambda and can't be referenced. Clever. And once you dig into symtable.c, you can see that this is handled by a function named symtable_implicit_arg. So:
    VISIT(st, expr, e->v.BinOpSectLeft.right);
    if (!symtable_enter_block(st, binopsect,
                              FunctionBlock, (void *)e, e->lineno,
                              e->col_offset))
        VISIT_QUIT(st, 0);
    if (!symtable_implicit_arg(st, 0))
        VISIT_QUIT(st, 0);
    if (!symtable_exit_block(st, (void *)e))
        VISIT_QUIT(st, 0);
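By the way, you can see the same hidden parameter from pure Python with the symtable module:
    >>> import symtable
    >>> st = symtable.symtable('(i for i in [])', '<test>', 'exec')
    >>> st.get_children()[0].get_parameters()
    ('.0',)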

Compiler

The compilation works much like lambda. Other than sprintf'ing up a nice name instead of just <lambda>, and the fact that everything is simpler when there's exactly one argument with no defaults and no keywords, the only difference is the body, which looks like this:
    ADDOP_I_IN_SCOPE(c, LOAD_FAST, 0);
    VISIT_IN_SCOPE(c, expr, e->v.BinOpSectLeft.right);
    ADDOP_IN_SCOPE(c, binop(c, e->v.BinOpSectLeft.op));
    ADDOP_IN_SCOPE(c, RETURN_VALUE);
    co = assemble(c, 1);
I did have to create that ADDOP_I_IN_SCOPE macro, but that's trivial.
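That body is exactly what you'd get from disassembling the equivalent lambda (offsets and opcode names vary a little across versions):
    >>> import dis
    >>> dis.dis(lambda x: x * 2)
      1           0 LOAD_FAST                0 (x)
                  2 LOAD_CONST               1 (2)
                  4 BINARY_MULTIPLY
                  6 RETURN_VALUE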

Does it work?

    >>> (: *2 :)
    <function <2> at 0x10bc9f048>
    >>> (: *2 :).__code__.co_varnames
    ('.0',)
    >>> (: *2 :)(23)
    46
As you can see, I screwed up the name a bit.

More importantly, I screwed up nonlocal references in the symtable. I think I need to visit the argument? Anyway, what happens is this:
    >>> a = 23
    >>> (: *a :)(23)
    Traceback (most recent call last):
      File "<stdin>", line 1, in <module>
      File "<stdin>", line 1, in <a>
    SystemError: no locals when loading 'a'
But that's much better than the segfault I expected. :)

Is it useful?

Really, most of the obvious use cases for this are already handled by bound methods, like spam.__add__ instead of (spam+), and by the operator module, like operator.add instead of (+). Is that perfect? No:
  • spam.__add__ isn't as flexible as (spam+), because the latter will automatically handle calling its argument's __radd__ when appropriate (see the example after this list).
  • Often, you want to section with literals, especially integers. But 0.__add__ is ambiguous between a method on an integer literal and a float literal followed by garbage, and is therefore a SyntaxError, so you need 0 .__add__ or (0).__add__.
  • For right-sectioning, spam.__radd__ to mean (+spam) isn't so bad, but spam.__gt__ to mean (<spam) is a bit less readable.
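Here's that first bullet in action (Frob is a toy class just for illustration):
    >>> class Frob:
    ...     def __radd__(self, other): return 'radd called'
    ...
    >>> 3 + Frob()  # the real operator falls back to __radd__
    'radd called'
    >>> (3).__add__(Frob())  # the bound method doesn't
    NotImplemented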
Still, it's hard to find a non-toy example where (<0) is all that useful. In most examples I look at, what I really want is something like lambda x: x.attr < 0. In Haskell, I'd probably write the rough equivalent of composing operator.attrgetter('attr') with (<0). But even if you pretend that attribute access is an operator (it isn't) and add sectioning syntax for it, and you use the @ operator for compose (as was proposed and rejected at least twice during the PEP 465 process and at least once since…), the best you can get is (<0) @ (.attr), which still doesn't look nearly as readable to me in Python as the lambda.

And, without a compelling use case, I'm not sure it's worth spending more time debugging this, or trying to think of a clever way to make it work without the colons and without lookahead, or coming up with a disambiguating rule for +/-. (It's obviously never going to make it into core…)

Anything else worth learning here?

When I was having problems getting the symbol table set up (which I still didn't get right…), I realized there's another way to tackle this: just stop at the AST, which is the easy part. The result, when run normally, is that any operator-sectioning expression resolves to an empty tuple, which doesn't seem all that useful… but you've got an AST node that you can transform with, say, MacroPy. And converting the meaningless AST node into a valid lambda node in Python is a lot easier than building the symbol table and bytecodes in C. Plus, you don't have to rebuild Python every time you make a change.
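Here's a sketch of what that Python-side transform might look like, assuming the hacked parser hands you a hypothetical BinOpSectLeft node with op and right fields (a BinOp minus its left):
    import ast

    class SectionTransformer(ast.NodeTransformer):
        def visit_BinOpSectLeft(self, node):
            # Rewrite (: *2 :) into lambda _sect_arg: _sect_arg * 2 by
            # parsing a template, then splicing in the real op and operand.
            # (A real version would use the uncollidable '.0' name instead
            # of _sect_arg.)
            lam = ast.parse('lambda _sect_arg: _sect_arg + None',
                            mode='eval').body
            lam.body.op = node.op
            lam.body.right = node.right
            ast.copy_location(lam, node)
            return ast.fix_missing_locations(lam)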

I don't think this is an argument for adding do-nothing AST structures to the core, of course… but as a strategy for hacking on Python, I may start with that next time around.

It's been more than a decade since Typical Programmer Greg Jorgensen taught the world about Abject-Oriented Programming.

Much of what he said still applies, but other things have changed. Languages in the Abject-Oriented space have been borrowing ideas from another paradigm entirely—and then everyone realized that languages like Python, Ruby, and JavaScript had been doing it for years and just hadn't noticed (because these languages do not require you to declare what you're doing, or even to know what you're doing). Meanwhile, new hybrid languages borrow freely from both paradigms.

This other paradigm—which is actually older, but was largely constrained to university basements until recent years—is called Functional Addiction.

A Functional Addict is someone who regularly gets higher-order—sometimes they may even exhibit dependent types—but still manages to retain a job.

Retaining a job is of course the goal of all programming. This is why some of these new hybrid languages, like Rust, check all borrowing, from both paradigms, so extensively that you can make regular progress for months without ever successfully compiling your code, and your managers will appreciate that progress. After all, once it does compile, it will definitely work.

Closures

It's long been known that Closures are dual to Encapsulation.

As Abject-Oriented Programming explained, Encapsulation involves making all of your variables public, and ideally global, to let the rest of the code decide what should and shouldn't be private.

Closures, by contrast, are a way of referring to variables from outer scopes. And there is no scope more outer than global.

Immutability

One of the reasons Functional Addiction has become popular in recent years is that to truly take advantage of multi-core systems, you need immutable data, sometimes also called persistent data.

Instead of mutating a function to fix a bug, you should always make a new copy of that function. For example:

function getCustName(custID)
{
    custRec = readFromDB("customer", custID);
    fullname = custRec[1] + ' ' + custRec[2];
    return fullname;
}

When you discover that you actually wanted fields 2 and 3 rather than 1 and 2, it might be tempting to mutate the state of this function. But doing so is dangerous. The right answer is to make a copy, and then try to remember to use the copy instead of the original:

function getCustName(custID)
{
    custRec = readFromDB("customer", custID);
    fullname = custRec[1] + ' ' + custRec[2];
    return fullname;
}

function getCustName2(custID)
{
    custRec = readFromDB("customer", custID);
    fullname = custRec[2] + ' ' + custRec[3];
    return fullname;
}

This means anyone still using the original function can continue to reference the old code, but as soon as it's no longer needed, it will be automatically garbage collected. (Automatic garbage collection isn't free, but it can be outsourced cheaply.)

Higher-Order Functions

In traditional Abject-Oriented Programming, you are required to give each function a name. But over time, the name of the function may drift away from what it actually does, making it as misleading as comments. Experience has shown that people will only keep one copy of their information up to date, and the CHANGES.TXT file is the right place for that.

Higher-Order Functions can solve this problem:

function []Functions = [
    lambda(custID) {
        custRec = readFromDB("customer", custID);
        fullname = custRec[1] + ' ' + custRec[2];
        return fullname;
    },
    lambda(custID) {
        custRec = readFromDB("customer", custID);
        fullname = custRec[2] + ' ' + custRec[3];
        return fullname;
    },
]

Now you can refer to these functions by order, so there's no need for names.

Parametric Polymorphism

Traditional languages offer Abject-Oriented Polymorphism and Ad-Hoc Polymorphism (also known as Overloading), but better languages also offer Parametric Polymorphism.

The key to Parametric Polymorphism is that the type of the output can be determined from the type of the inputs via Algebra. For example:

function getCustData(custId, x)
{
    if (x == int(x)) {
        custRec = readFromDB("customer", custId);
        fullname = custRec[1] + ' ' + custRec[2];
        return int(fullname);
    } else if (x.real == 0) {
        custRec = readFromDB("customer", custId);
        fullname = custRec[1] + ' ' + custRec[2];
        return double(fullname);
    } else {
        custRec = readFromDB("customer", custId);
        fullname = custRec[1] + ' ' + custRec[2];
        return complex(fullname);
    }
}

Notice that we've called the variable x. This is how you know you're using Algebraic Data Types. The names y, z, and sometimes w are also Algebraic.

Type Inference

Languages that enable Functional Addiction often feature Type Inference. This means that the compiler can infer your typing without you having to be explicit:


function getCustName(custID)
{
    // WARNING: Make sure the DB is locked here or
    custRec = readFromDB("customer", custID);
    fullname = custRec[1] + ' ' + custRec[2];
    return fullname;
}

We didn't specify what will happen if the DB is not locked. And that's fine, because the compiler will figure it out and insert code that corrupts the data, without us needing to tell it to!

By contrast, most Abject-Oriented languages are either nominally typed—meaning that you give names to all of your types instead of meanings—or dynamically typed—meaning that your variables are all unique individuals that can accomplish anything if they try.

Memoization

Memoization means caching the results of a function call:

function getCustName(custID)
{
    if (custID == 3) { return "John Smith"; }
    custRec = readFromDB("customer", custID);
    fullname = custRec[1] + ' ' + custRec[2];
    return fullname;
}

Non-Strictness

Non-Strictness is often confused with Laziness, but in fact Laziness is just one kind of Non-Strictness. Here's an example that compares two different forms of Non-Strictness:

/****************************************
*
* TO DO:
*
* get tax rate for the customer state
* eventually from some table
*
****************************************/
// function lazyTaxRate(custId) {}

function callByNameTaxRate(custId)
{
    /****************************************
    *
    * TO DO:
    *
    * get tax rate for the customer state
    * eventually from some table
    *
    ****************************************/
}

Both are Non-Strict, but the second one forces the compiler to actually compile the function just so we can Call it By Name. This causes code bloat. The Lazy version will be smaller and faster. Plus, Lazy programming allows us to create infinite recursion without making the program hang:

/****************************************
*
* TO DO:
*
* get tax rate for the customer state
* eventually from some table
*
****************************************/
// function lazyTaxRateRecursive(custId) { lazyTaxRateRecursive(custId); }

Laziness is often combined with Memoization:

function getCustName(custID)
{
    // if (custID == 3) { return "John Smith"; }
    custRec = readFromDB("customer", custID);
    fullname = custRec[1] + ' ' + custRec[2];
    return fullname;
}

Outside the world of Functional Addicts, this same technique is often called Test-Driven Development. If enough tests can be embedded in the code to achieve 100% coverage, or at least a decent amount, your code is guaranteed to be safe. But because the tests are not compiled and executed in the normal run, or indeed ever, they don't affect performance or correctness.

Conclusion

Many people claim that the days of Abject-Oriented Programming are over. But this is pure hype. Functional Addiction and Abject Orientation are not actually at odds with each other, but instead complement each other.