Never mind…

As pointed out by Nick Coghlan, a generator expression translates the iterable of the outermost for clause into an argument to the implicit generator function, not a local variable within it. For example, this:

    lines = (line for line in file if line)

…is not equivalent to this:

    def gen():
        for line in file:
            if line:
                yield line
    lines = gen()

… but this:

    def gen(file):
        for line in file:
            if line:
                yield line
    lines = gen(file)

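This distinction is observable: because the outermost iterable is evaluated as soon as the generator is created, rebinding the name afterward has no effect. A quick sketch, using a stand-in list instead of a real file:

```python
file = ['a\n', 'b\n', '', 'c\n']        # stand-in for a real file
lines = (line for line in file if line)

# Rebinding the name does nothing: the original list was already
# passed into the implicit generator function at creation time.
file = []

result = list(lines)
print(result)   # ['a\n', 'b\n', 'c\n']
```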
So, this:

    lines = (line with open(path) as file for line in file if line)

…would be equivalent to this:

    def gen(file):
        for line in file:
            if line:
                yield line
    with open(path) as file:
        lines = gen(file)

As 6.2.8 Generator expressions (5.2.6 in 2.x) says:
Variables used in the generator expression are evaluated lazily when the __next__() method is called for the generator object (in the same fashion as normal generators). However, the leftmost for clause is immediately evaluated, so that an error produced by it can be seen before any other possible error in the code that handles the generator expression. Subsequent for clauses cannot be evaluated immediately since they may depend on the previous for loop. For example: (x*y for x in range(10) for y in bar(x)).
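That eager/lazy split is easy to observe. A small sketch:

```python
# The leftmost iterable is evaluated immediately, at creation time:
try:
    gen = (x for x in None)               # None is not iterable
except TypeError as e:
    creation_error = e                    # raised before any iteration

# Subsequent clauses are lazy: this builds without complaint...
gen = (x * y for x in range(3) for y in None)

# ...and only fails once we actually iterate:
try:
    next(gen)
except TypeError as e:
    iteration_error = e
```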

Original proposal

This idea is not the same as the oft-suggested "with expression". I'll explain why with expressions are both different and useless later.

The idea is to add an optional "with clause" to the existing generator expression syntax.

Here's a basic example:

    upperlines = (line.upper() with open('foo', 'r') as file for line in file)

And here's the translation to a generator function:

    def foo():
        with open('foo', 'r') as file:
            for line in file:
                yield line.upper()
    upperlines = foo()

The key here is that the file stays open until the expression finishes iterating over it. This is not the same as wrapping a with statement around a generator expression, where the file only stays open until the generator is built.
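To make that concrete, here's the generator-function version in runnable form, using a temporary file as a stand-in; the names upper_lines and path are mine, not part of the proposal:

```python
import os
import tempfile

# Create a throwaway file just for the demo.
fd, path = tempfile.mkstemp()
os.close(fd)
with open(path, 'w') as f:
    f.write('one\ntwo\n')

def upper_lines(path):
    # Hand-written translation of the proposed
    # (line.upper() with open(path) as file for line in file)
    with open(path) as file:
        for line in file:
            yield line.upper()

gen = upper_lines(path)
first = next(gen)   # the file is open while we iterate...
rest = list(gen)    # ...and closes when the generator finishes
os.remove(path)
```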

Rationale

Most people don't notice this feature is missing—but they work around it anyway. If you look over a few tutorials for modules on PyPI, questions and answers at StackOverflow, etc., you find people doing this:

    upperlines = (line.upper() for line in open('foo', 'r'))

Everyone knows this is wrong, because it relies on refcounting, and therefore only works in CPython. In fact, it's even wrong in CPython, because the file isn't closed until the last reference to the generator goes away, instead of when it finishes iterating over the file. And yet, people do it all the time.

This is why with statements were added to Python. Except here, they don't work. Compare:

    with open('foo', 'r') as file:
        contents = file.read()
    doStuff(contents) # works
    with open('foo', 'r') as file:
        contents = (line.upper() for line in file)
    doStuff(contents) # raises a ValueError

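Here's the failing case in runnable form (a temporary file stands in for 'foo'):

```python
import os
import tempfile

# Throwaway file standing in for 'foo'.
fd, path = tempfile.mkstemp()
os.close(fd)
with open(path, 'w') as f:
    f.write('alpha\nbeta\n')

with open(path) as file:
    contents = (line.upper() for line in file)
# The with block has exited, so the file is already closed...

try:
    list(contents)             # ...and iterating now fails
except ValueError as e:
    error = e                  # I/O operation on closed file
os.remove(path)
```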

Of course in this trivial case, you can just indent the doStuff call, but in general, there is no inherent static scope for generator objects—they may get stored and used later (especially when they're passed into other generators). And even in the trivial case, the with statement isn't quite right—doStuff might iterate over contents, then do a bunch of other stuff before returning, and the file will be left open unnecessarily the whole time.

Here's a slightly larger example. Imagine a mail server built around a traditional select reactor, where send_async pushes a file onto a queue that will be sent as the socket is ready, and closed when it's done:

    class Connection:

        def handle_get(self, message_id):
            path = os.path.join(mailbox_path, message_id)
            self.send_async(open(path, 'r'))

Now imagine that you need to process each line of that message—e.g., to de-activate HTTP URLs, censor bad words, recognize phone numbers and turn them into links, whatever:

    class Connection:
        def handle_get(self, message_id):
            path = os.path.join(mailbox_path, message_id)
            self.send_async(self.process(line) for line in open(path, 'r'))

Oops, now we're hosed—the reactor may close the generator, but that won't cause the file to get closed.

The only way to do this correctly today is to write the explicit generator function instead of using a generator expression.


    class Connection:
        def handle_get(self, message_id):
            path = os.path.join(mailbox_path, message_id)
            def processed_file():
                with open(path, 'r') as f:
                    for line in f:
                        yield self.process(line)
            self.send_async(processed_file())


That's more verbose, less transparent even for experts, and impenetrable for new users—the same reason generator expressions were added to the language in the first place.

But it's even worse here. In my experience, even people who know better rarely think to write the generator function. When you point out the leaky-open problem in their code, after trying the with statement and finding it doesn't work, they either figure out the right place to put in an explicit close, or they write a class to manage the file and the iteration together. In other words, they treat Python like C++, because Python doesn't offer any obvious right way to do it.

But it could be this simple:


    class Connection:
        def handle_get(self, message_id):
            path = os.path.join(mailbox_path, message_id)
            self.send_async(self.process(line) 
                            with open(path, 'r') as f for line in f)

Syntax

The name "with" is pretty obvious; the question is, where to put it.

Following the same rule that governs if clauses (clauses appear left to right in the order the corresponding statements nest, top to bottom, in the equivalent generator function), the with clause belongs to the left of the for clause, because the with statement encloses the for loop.

A number of people have suggested that it would read better with the with clause at the end:

    upperlines = (line.upper() for line in file with open('foo', 'r') as file)

…rather than before the for clause, as proposed:

    upperlines = (line.upper() with open('foo', 'r') as file for line in file)

However, the proposed order is more consistent, easier to add to the parser, and easier to explain as a rule to new users, so that's what I'm proposing, even though it doesn't look as nice in the simplest cases.

Either way, looking over the syntax, it's obvious that this change would not affect the meaning of any valid generator expression; it would only make certain lines (that no one would ever have written) that are currently SyntaxErrors into valid expressions. And there shouldn't be anything tricky about adding it into the parser.

Comprehensions

This feature isn't actually necessary in list comprehensions and dictionary comprehensions, because the iteration is finished as soon as the expression completes. But it doesn't hurt, and it might make the language more consistent, easier to learn, and even easier to parse.
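For instance, a plain with statement already does the right thing for a list comprehension, because the iteration completes before the block exits (again using a temp file as a stand-in):

```python
import os
import tempfile

fd, path = tempfile.mkstemp()   # throwaway file for the demo
os.close(fd)
with open(path, 'w') as f:
    f.write('x\ny\n')

# The comprehension finishes iterating inside the with block,
# so no with clause is needed:
with open(path) as file:
    upper = [line.upper() for line in file]
os.remove(path)
```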

If so, I suggest adding it to comprehensions as well, but I don't really care strongly either way.

With clauses vs. with expressions

Nearly every person I've suggested this to has asked, "Why not just a general with expression instead?"

At first glance, that sounds nice, because then you could also write this:

    contents = file.read() with open('foo', 'r') as file

That's the motivating example for every suggestion for with expressions ever.

But anything that can usefully be written using a with expression can be rewritten using a with statement:

    with open('foo', 'r') as file: contents = file.read()

Anything that won't work using a with statement—because the context needs to match the dynamic lifetime of some other object defined in the expression—will not be helped by a with expression. This is obvious for most cases:

    contents = iter(file) with open('foo', 'r') as file

This is clearly going to raise a ValueError as soon as you try to iterate contents. But it's just as true for generator expressions as for anything else. Consider what the following would mean if Python had a with expression:

    contents = (line for line in file with open('foo', 'r') as file)
    contents = (line for line in file) with open('foo', 'r') as file
    contents = (line with open('foo', 'r') as file for line in file)


In the first one, the with expression modifies file, which means the file is closed before you even start iterating. In the second, it modifies the entire expression, which means the file is closed as soon as the generator is built. In the third, it modifies line, so there is no file to iterate over. There's no way to write what we need using a general with expression.

Of course it would be possible to add both with clauses and with expressions, and write some parser magic to figure out that with above is a generator-expression with clause, not a with expression. (You'd also have to write the magic for comprehensions.) After all, ternary if expressions don't help define if clauses in generator expressions, but we have them both. But, even if this were a useful idea, it would be a completely unconnected idea, except for the fact that implementing both of them is slightly more difficult than just one. And for the reasons given above, I don't think general with expressions are a good idea in the first place.

In short, with expressions wouldn't remove the need for with clauses in generator expressions; they'd just make it harder to add them to the language.

Doesn't this violate DRY?

A few people I've mentioned this idea to have objected on the basis that (line with open('foo', 'r') as file for line in file) requires writing both line and file twice.

The case for the repetition of file here is exactly the same as for the repetition of line. Yes, it's mildly annoying in completely trivial cases, but there's no way around it if you ever want to write non-trivial cases. Compare:

    (line[2:] with open('foo', 'r') as file for line in file)
    (line with open('foo', 'r') as file for line in decompress(file))

Besides, most of the people who make this argument are the kind of people who already avoid generator expressions because they believe (line[2:] for line in file) can be better written as:

    map(operator.itemgetter(slice(2, None)), file)
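For what it's worth, the two spellings do agree (shown here with a stand-in list of lines):

```python
import operator

file = ['##alpha\n', '##beta\n']   # stand-in for real file lines

sliced = [line[2:] for line in file]
mapped = list(map(operator.itemgetter(slice(2, None)), file))

# Both produce ['alpha\n', 'beta\n']
```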

I don't think there's any point catering to these people, whose real objection is usually "Lisp is a perfectly good language without the need for these new-fangled comprehensions and generator expressions"; the only reasonable answer is that Lisp was not actually outlawed when Haskell was invented.

It's been more than a decade since Typical Programmer Greg Jorgensen taught the world about Abject-Oriented Programming.

Much of what he said still applies, but other things have changed. Languages in the Abject-Oriented space have been borrowing ideas from another paradigm entirely—and then everyone realized that languages like Python, Ruby, and JavaScript had been doing it for years and just hadn't noticed (because these languages do not require you to declare what you're doing, or even to know what you're doing). Meanwhile, new hybrid languages borrow freely from both paradigms.

This other paradigm—which is actually older, but was largely constrained to university basements until recent years—is called Functional Addiction.

A Functional Addict is someone who regularly gets higher-order—sometimes they may even exhibit dependent types—but still manages to retain a job.

Retaining a job is of course the goal of all programming. This is why some of these new hybrid languages, like Rust, check all borrowing, from both paradigms, so extensively that you can make regular progress for months without ever successfully compiling your code, and your managers will appreciate that progress. After all, once it does compile, it will definitely work.

Closures

It's long been known that Closures are dual to Encapsulation.

As Abject-Oriented Programming explained, Encapsulation involves making all of your variables public, and ideally global, to let the rest of the code decide what should and shouldn't be private.

Closures, by contrast, are a way of referring to variables from outer scopes. And there is no scope more outer than global.

Immutability

One of the reasons Functional Addiction has become popular in recent years is that to truly take advantage of multi-core systems, you need immutable data, sometimes also called persistent data.

Instead of mutating a function to fix a bug, you should always make a new copy of that function. For example:

function getCustName(custID)
{
    custRec = readFromDB("customer", custID);
    fullname = custRec[1] + ' ' + custRec[2];
    return fullname;
}

When you discover that you actually wanted fields 2 and 3 rather than 1 and 2, it might be tempting to mutate the state of this function. But doing so is dangerous. The right answer is to make a copy, and then try to remember to use the copy instead of the original:

function getCustName(custID)
{
    custRec = readFromDB("customer", custID);
    fullname = custRec[1] + ' ' + custRec[2];
    return fullname;
}

function getCustName2(custID)
{
    custRec = readFromDB("customer", custID);
    fullname = custRec[2] + ' ' + custRec[3];
    return fullname;
}

This means anyone still using the original function can continue to reference the old code, but as soon as it's no longer needed, it will be automatically garbage collected. (Automatic garbage collection isn't free, but it can be outsourced cheaply.)

Higher-Order Functions

In traditional Abject-Oriented Programming, you are required to give each function a name. But over time, the name of the function may drift away from what it actually does, making it as misleading as comments. Experience has shown that people will only keep one copy of their information up to date, and the CHANGES.TXT file is the right place for that.

Higher-Order Functions can solve this problem:

function []Functions = [
    lambda(custID) {
        custRec = readFromDB("customer", custID);
        fullname = custRec[1] + ' ' + custRec[2];
        return fullname;
    },
    lambda(custID) {
        custRec = readFromDB("customer", custID);
        fullname = custRec[2] + ' ' + custRec[3];
        return fullname;
    },
]

Now you can refer to these functions by order, so there's no need for names.

Parametric Polymorphism

Traditional languages offer Abject-Oriented Polymorphism and Ad-Hoc Polymorphism (also known as Overloading), but better languages also offer Parametric Polymorphism.

The key to Parametric Polymorphism is that the type of the output can be determined from the type of the inputs via Algebra. For example:

function getCustData(custId, x)
{
    if (x == int(x)) {
        custRec = readFromDB("customer", custId);
        fullname = custRec[1] + ' ' + custRec[2];
        return int(fullname);
    } else if (x.real == 0) {
        custRec = readFromDB("customer", custId);
        fullname = custRec[1] + ' ' + custRec[2];
        return double(fullname);
    } else {
        custRec = readFromDB("customer", custId);
        fullname = custRec[1] + ' ' + custRec[2];
        return complex(fullname);
    }
}

Notice that we've called the variable x. This is how you know you're using Algebraic Data Types. The names y, z, and sometimes w are also Algebraic.

Type Inference

Languages that enable Functional Addiction often feature Type Inference. This means that the compiler can infer your typing without you having to be explicit:


function getCustName(custID)
{
    // WARNING: Make sure the DB is locked here or
    custRec = readFromDB("customer", custID);
    fullname = custRec[1] + ' ' + custRec[2];
    return fullname;
}

We didn't specify what will happen if the DB is not locked. And that's fine, because the compiler will figure it out and insert code that corrupts the data, without us needing to tell it to!

By contrast, most Abject-Oriented languages are either nominally typed—meaning that you give names to all of your types instead of meanings—or dynamically typed—meaning that your variables are all unique individuals that can accomplish anything if they try.

Memoization

Memoization means caching the results of a function call:

function getCustName(custID)
{
    if (custID == 3) { return "John Smith"; }
    custRec = readFromDB("customer", custID);
    fullname = custRec[1] + ' ' + custRec[2];
    return fullname;
}

Non-Strictness

Non-Strictness is often confused with Laziness, but in fact Laziness is just one kind of Non-Strictness. Here's an example that compares two different forms of Non-Strictness:

/****************************************
*
* TO DO:
*
* get tax rate for the customer state
* eventually from some table
*
****************************************/
// function lazyTaxRate(custId) {}

function callByNameTaxRate(custId)
{
    /****************************************
    *
    * TO DO:
    *
    * get tax rate for the customer state
    * eventually from some table
    *
    ****************************************/
}

Both are Non-Strict, but the second one forces the compiler to actually compile the function just so we can Call it By Name. This causes code bloat. The Lazy version will be smaller and faster. Plus, Lazy programming allows us to create infinite recursion without making the program hang:

/****************************************
*
* TO DO:
*
* get tax rate for the customer state
* eventually from some table
*
****************************************/
// function lazyTaxRateRecursive(custId) { lazyTaxRateRecursive(custId); }

Laziness is often combined with Memoization:

function getCustName(custID)
{
    // if (custID == 3) { return "John Smith"; }
    custRec = readFromDB("customer", custID);
    fullname = custRec[1] + ' ' + custRec[2];
    return fullname;
}

Outside the world of Functional Addicts, this same technique is often called Test-Driven Development. If enough tests can be embedded in the code to achieve 100% coverage, or at least a decent amount, your code is guaranteed to be safe. But because the tests are not compiled and executed in the normal run, or indeed ever, they don't affect performance or correctness.

Conclusion

Many people claim that the days of Abject-Oriented Programming are over. But this is pure hype. Functional Addiction and Abject Orientation are not actually at odds with each other, but instead complement each other.