This confuses every Python developer the first time they see it—even if they're pretty experienced by the time they see it:
    >>> t = ([], [])
    >>> t[0] += [1]
    ---------------------------------------------------------------------------
    TypeError                                 Traceback (most recent call last)
    <stdin> in <module>()
    ----> 1 t[0] += [1]
    
    TypeError: 'tuple' object does not support item assignment
So far, so good. But now:
    >>> t
    ([1], [])
So the addition succeeded, but also failed!

This is discussed in the official FAQ, but most people don't read that FAQ—and those who do seem to immediately start trying to think of "solutions" without fully understanding the underlying issue. So let's dig into it at a lower level.

How assignment works

Let's take a simpler case first:
    >>> t = [0, 1]
    >>> t[0] = 2
    >>> t
    [2, 1]
In many languages, like C++, what happens here is that t[0] doesn't return 0, but some kind of reference object, which acts like the original 0 in most contexts, but if you modify it—including by assignment—the first element of t sees the change. Like this:
    >>> _x = getitem(t, 0)
    >>> _x.__assign__(2)
But Python doesn't do this. There is no object that's like 0 except when you modify it; there's just 0, which you can't modify. Assignment doesn't modify an object, it just makes the left side into a new name for the value on the right side. And in fact, t[0] isn't even an expression in this context; it's a special thing called an assignment target.

So what actually happens is:
    >>> setitem(t, 0, 2)
This avoids some pretty hairy stuff that comes up in languages like C++. If you want to design, say, a tree-based sorted list container in C++, the __getitem__ has to return a really smart reference-like object (and you have to design and implement those smarts) that knows how to handle __assign__ by changing the value and then moving it to the right new location in the tree to maintain the sorting. And then you have to think about what happens with references to references. (Imagine a sorted list of references to the values in a dictionary.) Also, because you may not want to create this heavy-weight reference object every time someone just wants to see a value, you have to deal with constness—you overload __getitem__ to return a plain value in const contexts, and a fancy reference value otherwise. And so on (and let's not even get into rvalue vs. lvalue references…).

None of that comes up in Python: __getitem__ just returns a plain-old value (an int, in this case), and all the tree-rebalancing just happens in __setitem__.
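You can verify that subscription assignment really is just a __setitem__ call, and that plain subscription returns an ordinary value with no reference object involved:

```python
t = [0, 1]
t[0] = 2                      # sugar for the method call below
assert t == [2, 1]

# the same assignment, spelled as the call Python actually makes
list.__setitem__(t, 0, 3)
assert t == [3, 1]

# subscription in an expression just returns the value
x = t[0]
x = 99                        # rebinds the name x; t is untouched
assert t == [3, 1]
```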

Of course this means that Python has a whole parallel grammar for assignment targets that's similar to, but not identical to (a subset of) the grammar for expressions. But it's not all that complicated. A target can be:
  • a simple identifier, in which case you're creating or replacing a local, closure, or global variable of that name.
  • an attribute reference (something ending with ".foo"), in which case you're calling setattr on the thing to the left of the last "." (which _is_ effectively an expression, although with slightly restricted syntax—it has to be a "primary expression").
  • a subscription or slicing (something ending with [1] or [:3]), in which case you're calling setitem on the thing to the left of the last "[", with the value or slice inside the brackets as an argument.
  • a target list (one or more targets separated by commas, like "x, y[0]", optionally inside parens or square brackets—which looks a lot like a tuple or list display expression), in which case you're doing iterable unpacking, verifying that there are exactly 2 elements, and assigning the first value to the first target and the second to the second.
This is still a bit oversimplified (it doesn't handle extended iterable unpacking with "*spam", chained assignments like "x = y, z = 2, 3", etc.); see the language reference for full details. But it's all pretty intuitive once you get the idea. (By the way, anyone who wants to propose further extensions to assignment targets really needs to understand how the existing system works. The fact that Joshua Landau does is a big part of the reason PEP 448 is under serious consideration, while most such proposals die out after a half-dozen posts on python-ideas…)
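Each kind of target can be demonstrated in a couple of lines (the class and variable names here are arbitrary):

```python
class Spam:
    pass

obj = Spam()
d = {}

x = 1                    # identifier: binds a variable
obj.foo = 2              # attribute reference: setattr(obj, 'foo', 2)
d['key'] = 3             # subscription: d.__setitem__('key', 3)
y, z = 4, 5              # target list: iterable unpacking
assert (x, obj.foo, d['key'], y, z) == (1, 2, 3, 4, 5)
```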

How augmented assignment works

Now let's look at this case:
    >>> t = [0, 1]
    >>> t[0] += 2
    >>> t
    [2, 1]
In a C++ style language, this just means reference objects have to have an __iadd__ method that affects the original value, just like their __assign__ method. But Python doesn't have __assign__, so how does it handle this?

Well, for this case, Python could just treat it the same as t[0] = t[0] + 2, like this:
    >>> setitem(t, 0, getitem(t, 0) + 2)
That works fine for immutable objects, but it doesn't work right for mutable objects like lists. For example:
    >>> x = [0]
    >>> t = [x, [1]]
    >>> t[0] += [2]
    >>> t
    [[0, 2], [1]]
    >>> x
    [0, 2]
Here, t[0] refers to the same object as x. You don't just replace it with a different object, you modify the object in-place. To handle this, Python has an __iadd__ method, similar to __add__, but making the change in-place. (In fact, for a list, __iadd__ is the same thing as extend.) So for this case, Python could just treat it the same as t[0].__iadd__([2]). But that wouldn't work for immutable objects like integers.
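You can see both behaviors directly: lists define __iadd__ and mutate in place, while ints don't and fall back to rebinding.

```python
x = [0]
y = x
y += [1]                  # list has __iadd__ (same as extend): mutates in place
assert x == [0, 1] and y is x

i = 0
j = i
j += 1                    # int has no __iadd__: falls back to j = j + 1
assert i == 0 and j == 1

assert hasattr(list, '__iadd__')
assert not hasattr(int, '__iadd__')
```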

So, what Python does is both. Since we don't have except expressions (at least not yet), I'll have to fake it with a series of separate statements:
    
    >>> _tmp = getitem(t, 0)
    >>> try:
    ...     _tmp2 = _tmp.__iadd__([2])
    ... except AttributeError:
    ...     _tmp2 = _tmp.__add__([2])
    >>> setitem(t, 0, _tmp2)
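Here's that expansion as a runnable helper. (The function name is made up, and real augmented assignment goes through type slots in C rather than an attribute lookup, but the observable behavior is the same.)

```python
def augmented_setitem_add(obj, index, value):
    # Hypothetical helper: roughly what 'obj[index] += value' expands to.
    tmp = obj[index]                   # getitem(obj, index)
    try:
        tmp2 = tmp.__iadd__(value)     # in-place add, if the type has one
    except AttributeError:
        tmp2 = tmp.__add__(value)      # otherwise, ordinary add
    obj[index] = tmp2                  # setitem(obj, index, tmp2)

t = [[0], [1]]
augmented_setitem_add(t, 0, [2])
assert t == [[0, 2], [1]]              # list mutated in place

u = [0, 1]
augmented_setitem_add(u, 0, 2)
assert u == [2, 1]                     # int replaced via __add__
```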

Back to the original example

    >>> t = ([], [])
    >>> t[0] += [1]
So, that += first calls getitem(t, 0). So far, so good. Then it calls __iadd__ on the resulting list. Since that's a list, that's fine; the list is extended in-place, and returns self. Then it calls setitem(t, 0, <the modified list>). Since t is a tuple, that raises an exception. But t[0] has already been modified in-place. Hence the confusing result.
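The whole sequence is easy to reproduce: the mutation happens before the failing setitem, so it survives the exception.

```python
t = ([], [])
err = False
try:
    t[0] += [1]
except TypeError:
    err = True            # tuple.__setitem__ doesn't exist, so setitem fails
assert err
assert t == ([1], [])     # ...but the list was already extended in place
```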

Could we fix this somehow?

It's not clear how. Python can't know that setitem is going to raise an exception until it actually calls it. And it can't call it until it has the value to pass. And it doesn't have that value until it calls __iadd__. At which point the list has been modified.

You might argue that Python could know that setitem will raise, by testing hasattr(t, '__setitem__') and skipping the rest of the statement, synthesizing a TypeError instead. But let's look at a similar case where that can't possibly work:
    >>> f = open('foo')
    >>> f.encoding += '16'
    AttributeError: readonly attribute
The exact same thing is going on here, but with setattr rather than setitem failing. (Fortunately, because a file's encoding is a string, which is immutable, we're not modifying it and then failing, but it's easy to construct an example with a read-only mutable attribute.)

And here, there's no way to predict in advance that it's going to fail. The problem isn't that __setattr__ doesn't exist, but that it raises an exception when called specifically with 'encoding'. If you call it with some different attribute, it'll work fine. What's happened is that io.TextIOWrapper is a C extension type that's declared 'encoding' as a readonly attribute—or, if you're using the pure-Python implementation, _pyio.TextIOWrapper is a Python class that's declared 'encoding' as a @property with no setter. There are plenty of other ways to make __setattr__ fail, and Python can't check them all in advance.
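For example, here's a read-only *mutable* attribute built with @property (the Record class is made up for illustration); the underlying list is mutated even though the assignment fails:

```python
class Record:
    def __init__(self):
        self._tags = []

    @property
    def tags(self):
        return self._tags   # no setter: assignment will raise AttributeError

r = Record()
failed = False
try:
    r.tags += ['new']       # the list is extended, then setattr fails
except AttributeError:
    failed = True
assert failed
assert r.tags == ['new']    # mutated in place despite the exception
```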

One option that _might_ work would be to add some new methods to the data model, __cansetitem__(i) and __cansetattr__(name), which do nothing if __setitem__(i, value) would work for some value, but raise if it would raise. Then, Python could do this:
    
    >>> _tmp = getitem(t, 0)
    >>> cansetitem(t, 0)
    >>> try:
    ...     _tmp2 = _tmp.__iadd__([2])
    ... except AttributeError:
    ...     _tmp2 = _tmp.__add__([2])
    >>> setitem(t, 0, _tmp2)
The __cansetitem__ method would be pretty simple, and could be added to existing collection types; you could even make MutableSequence.__cansetitem__ just call __getitem__ (ignoring the result) and make MutableMapping.__cansetitem__ always return successfully, which would make most third-party ABC-mixin-derived types just work.
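Here's a rough module-level sketch of the sequence-style check. (To be clear, __cansetitem__/cansetitem are hypothetical; nothing like them exists in Python's data model, and a mapping version would skip the index lookup, since assigning to a missing key is fine.)

```python
def cansetitem(obj, index):
    # Hypothetical check: raise now if obj[index] = value would raise later.
    if not hasattr(type(obj), '__setitem__'):
        raise TypeError(
            f"'{type(obj).__name__}' object does not support item assignment")
    obj[index]                # re-raise IndexError for a bad sequence index

ok = False
try:
    cansetitem(([], []), 0)   # tuple: no __setitem__, raises immediately
except TypeError:
    ok = True
assert ok
cansetitem([0, 1], 0)         # list: fine, does nothing
```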

The __cansetattr__ method would be a bit more complicated, however. A base implementation in object.__cansetattr__ can fail for readonly attributes of C extension types, for read-only descriptors (to handle @property and custom descriptors), and for non-declared attributes of C extension types and __slots__ types. But every class with a custom __setattr__ would need a custom __cansetattr__. And there are a lot of those out there, so this would be a backward-compatibility nightmare.

Maybe someone can think of a way around this, or come up with a different solution. If you think you've got one, post it to the python-ideas list. But remember that, as Guido says, language design is not just solving puzzles. The fact that __setitem__ and __setattr__ are resolved dynamically is a feature, and a lot of things (including the "no need for reference types" raised above) flow naturally from that feature. And the surprising behavior of "t[0] += [2]" also flows naturally from that feature. So, there's a good chance any "solution" to the problem will actually be unnatural and more surprising under the covers. And solving a problem with deep magic that's harder to understand than the original problem is usually not a good idea.

It's been more than a decade since Typical Programmer Greg Jorgensen taught the world about Abject-Oriented Programming.

Much of what he said still applies, but other things have changed. Languages in the Abject-Oriented space have been borrowing ideas from another paradigm entirely—and then everyone realized that languages like Python, Ruby, and JavaScript had been doing it for years and just hadn't noticed (because these languages do not require you to declare what you're doing, or even to know what you're doing). Meanwhile, new hybrid languages borrow freely from both paradigms.

This other paradigm—which is actually older, but was largely constrained to university basements until recent years—is called Functional Addiction.

A Functional Addict is someone who regularly gets higher-order—sometimes they may even exhibit dependent types—but still manages to retain a job.

Retaining a job is of course the goal of all programming. This is why some of these new hybrid languages, like Rust, check all borrowing, from both paradigms, so extensively that you can make regular progress for months without ever successfully compiling your code, and your managers will appreciate that progress. After all, once it does compile, it will definitely work.

Closures

It's long been known that Closures are dual to Encapsulation.

As Abject-Oriented Programming explained, Encapsulation involves making all of your variables public, and ideally global, to let the rest of the code decide what should and shouldn't be private.

Closures, by contrast, are a way of referring to variables from outer scopes. And there is no scope more outer than global.

Immutability

One of the reasons Functional Addiction has become popular in recent years is that to truly take advantage of multi-core systems, you need immutable data, sometimes also called persistent data.

Instead of mutating a function to fix a bug, you should always make a new copy of that function. For example:

function getCustName(custID)
{
    custRec = readFromDB("customer", custID);
    fullname = custRec[1] + ' ' + custRec[2];
    return fullname;
}

When you discover that you actually wanted fields 2 and 3 rather than 1 and 2, it might be tempting to mutate the state of this function. But doing so is dangerous. The right answer is to make a copy, and then try to remember to use the copy instead of the original:

function getCustName(custID)
{
    custRec = readFromDB("customer", custID);
    fullname = custRec[1] + ' ' + custRec[2];
    return fullname;
}

function getCustName2(custID)
{
    custRec = readFromDB("customer", custID);
    fullname = custRec[2] + ' ' + custRec[3];
    return fullname;
}

This means anyone still using the original function can continue to reference the old code, but as soon as it's no longer needed, it will be automatically garbage collected. (Automatic garbage collection isn't free, but it can be outsourced cheaply.)

Higher-Order Functions

In traditional Abject-Oriented Programming, you are required to give each function a name. But over time, the name of the function may drift away from what it actually does, making it as misleading as comments. Experience has shown that people will only keep one copy of their information up to date, and the CHANGES.TXT file is the right place for that.

Higher-Order Functions can solve this problem:

function []Functions = [
    lambda(custID) {
        custRec = readFromDB("customer", custID);
        fullname = custRec[1] + ' ' + custRec[2];
        return fullname;
    },
    lambda(custID) {
        custRec = readFromDB("customer", custID);
        fullname = custRec[2] + ' ' + custRec[3];
        return fullname;
    },
]

Now you can refer to these functions by order, so there's no need for names.

Parametric Polymorphism

Traditional languages offer Abject-Oriented Polymorphism and Ad-Hoc Polymorphism (also known as Overloading), but better languages also offer Parametric Polymorphism.

The key to Parametric Polymorphism is that the type of the output can be determined from the type of the inputs via Algebra. For example:

function getCustData(custId, x)
{
    if (x == int(x)) {
        custRec = readFromDB("customer", custId);
        fullname = custRec[1] + ' ' + custRec[2];
        return int(fullname);
    } else if (x.real == 0) {
        custRec = readFromDB("customer", custId);
        fullname = custRec[1] + ' ' + custRec[2];
        return double(fullname);
    } else {
        custRec = readFromDB("customer", custId);
        fullname = custRec[1] + ' ' + custRec[2];
        return complex(fullname);
    }
}

Notice that we've called the variable x. This is how you know you're using Algebraic Data Types. The names y, z, and sometimes w are also Algebraic.

Type Inference

Languages that enable Functional Addiction often feature Type Inference. This means that the compiler can infer your typing without you having to be explicit:


function getCustName(custID)
{
    // WARNING: Make sure the DB is locked here or
    custRec = readFromDB("customer", custID);
    fullname = custRec[1] + ' ' + custRec[2];
    return fullname;
}

We didn't specify what will happen if the DB is not locked. And that's fine, because the compiler will figure it out and insert code that corrupts the data, without us needing to tell it to!

By contrast, most Abject-Oriented languages are either nominally typed—meaning that you give names to all of your types instead of meanings—or dynamically typed—meaning that your variables are all unique individuals that can accomplish anything if they try.

Memoization

Memoization means caching the results of a function call:

function getCustName(custID)
{
    if (custID == 3) { return "John Smith"; }
    custRec = readFromDB("customer", custID);
    fullname = custRec[1] + ' ' + custRec[2];
    return fullname;
}

Non-Strictness

Non-Strictness is often confused with Laziness, but in fact Laziness is just one kind of Non-Strictness. Here's an example that compares two different forms of Non-Strictness:

/****************************************
*
* TO DO:
*
* get tax rate for the customer state
* eventually from some table
*
****************************************/
// function lazyTaxRate(custId) {}

function callByNameTaxRate(custId)
{
    /****************************************
    *
    * TO DO:
    *
    * get tax rate for the customer state
    * eventually from some table
    *
    ****************************************/
}

Both are Non-Strict, but the second one forces the compiler to actually compile the function just so we can Call it By Name. This causes code bloat. The Lazy version will be smaller and faster. Plus, Lazy programming allows us to create infinite recursion without making the program hang:

/****************************************
*
* TO DO:
*
* get tax rate for the customer state
* eventually from some table
*
****************************************/
// function lazyTaxRateRecursive(custId) { lazyTaxRateRecursive(custId); }

Laziness is often combined with Memoization:

function getCustName(custID)
{
    // if (custID == 3) { return "John Smith"; }
    custRec = readFromDB("customer", custID);
    fullname = custRec[1] + ' ' + custRec[2];
    return fullname;
}

Outside the world of Functional Addicts, this same technique is often called Test-Driven Development. If enough tests can be embedded in the code to achieve 100% coverage, or at least a decent amount, your code is guaranteed to be safe. But because the tests are not compiled and executed in the normal run, or indeed ever, they don't affect performance or correctness.

Conclusion

Many people claim that the days of Abject-Oriented Programming are over. But this is pure hype. Functional Addiction and Abject Orientation are not actually at odds with each other, but instead complement each other.
