In my post on arguments and parameters, I explained how arguments get matched up to parameters in Python. But how does this actually work under the covers?

Lower-level languages

In a language like C, a function call is compiled into something like this (assuming for simplicity that all params go on the stack, and the return value as well):

  • Push registers onto the stack for safe keeping.
  • Push a copy of each argument—cast to the right type for the appropriate parameter if necessary—onto the stack.
  • Push the current instruction pointer onto the stack.
  • Jump to the function's code. (Often a single opcode does both this and the previous step.)
  • Pop the return value off the stack.
  • Pop the stashed registers off the stack.
So, a function definition has to be compiled into something like this:
  • Push the old stack frame pointer onto the stack.
  • Create a new frame by reserving space on the stack for locals.
  • Run the actual function code.
  • Free the new frame.
  • Pop and restore the old frame pointer.
  • Pop the parameters.
  • Pop the stashed instruction pointer off the stack.
  • Push a copy of the return value—cast to the right type for the function's return type if necessary—onto the stack.
  • Jump to the stashed instruction pointer. (Often a single instruction does two or more of the last four steps.) 
The key here is that, at compile time, the compiler has to know how to copy and cast each argument, and the return value, onto the stack. Different types have different sizes and structures; you can't just "push a value", you have to push an 8-byte value or a 32-byte value. This is why varargs are so complicated in C.
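
You can even see this requirement from Python, via ctypes: to call a C function through the FFI, you have to declare the exact type of every argument and of the return value, because the machine-level call can't be made without knowing the sizes. (A minimal sketch; it assumes a Linux libm, so adjust the library name for your platform.)

import ctypes

# Calling C's pow() means declaring every type up front; the FFI can't
# push "a value", only a value of a known size and layout.
libm = ctypes.CDLL("libm.so.6")  # assumes Linux; e.g. "libm.dylib" on macOS
libm.pow.argtypes = [ctypes.c_double, ctypes.c_double]
libm.pow.restype = ctypes.c_double
print(libm.pow(2.0, 10.0))  # 1024.0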

Python

In Python, everything is a lot simpler, for two reasons.

First, Python doesn't use a stack of bytes, it uses a stack of objects. They're all the same size and shape. And Python never copies or casts objects for you, it just passes around references. (As you can probably guess, in CPython, the stack is actually a stack of PyObject pointers.)

Second, because the interpreter has information about the function at runtime (in attributes on the function object and its code object), and has to use that information anyway to support things like dynamically calling functions created at runtime, the compiler doesn't need any of that information in advance.
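
All of that information is visible from Python itself:

def add(a, b=1):
    return a + b

# Everything the interpreter needs to make a call is sitting right on
# the function object and its code object, available at runtime:
print(add.__code__.co_argcount)  # 2
print(add.__defaults__)          # (1,)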

So, this means the compiler has almost no work to do for either function calls or function definitions.

A function call does this:
  • Push the function object onto the stack.
  • Push the arguments (each positional argument, then each element of *args if any, then the name and value of each keyword argument, then the name and value of each item in **kwargs if any).
  • Do a CALL_FUNCTION with the count of positional arguments and the count of keyword arguments (including expanded *args and **kwargs in the counts).
  • Pop the return value off the stack.
And a function definition does this:
  • Run the actual function code, which will include one or more RETURN_VALUE opcodes.
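
You can watch both halves with the dis module (a sketch; the exact disassembly varies by CPython version, and newer interpreters have renamed the call opcodes):

import dis

def add(a, b=1):
    return a + b

def caller():
    return add(1, b=2)

dis.dis(caller)
# On an interpreter that still uses CALL_FUNCTION, this prints roughly:
#   LOAD_GLOBAL      0 (add)
#   LOAD_CONST       1 (1)
#   LOAD_CONST       2 ('b')
#   LOAD_CONST       3 (2)
#   CALL_FUNCTION  257 (1 positional, 1 keyword pair)
#   RETURN_VALUE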

So, the compiler doesn't need to know anything about the function being called to compile a function call, and it doesn't need to know anything about the calling environment to compile a function definition. The function code doesn't need any prologue or epilogue, either; it automatically has a frame ready to go with its local variables. Everything just works like magic.

The compiler doesn't have to know anything about the function's parameters, because the interpreter does. As the docs for CALL_FUNCTION say, it:

Calls a function. The low byte of argc indicates the number of positional parameters, the high byte the number of keyword parameters. On the stack, the opcode finds the keyword parameters first. For each keyword argument, the value is on top of the key. Below the keyword parameters, the positional parameters are on the stack, with the right-most parameter on top. Below the parameters, the function object to call is on the stack. Pops all function arguments, and the function itself off the stack, and pushes the return value.
In other words, CALL_FUNCTION has to be where the magic happens.

The key is that all of the information needed to execute a function call is stored as attributes on the function object or its code object. The inspect module docs have a nice list of all of the relevant attributes.
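
In fact, inspect.signature reconstructs the entire parameter list purely from those attributes:

import inspect

def f(a, b=10, *args, c, d=20, **kwargs):
    pass

# Built entirely from f.__code__, f.__defaults__, and f.__kwdefaults__:
print(inspect.signature(f))  # (a, b=10, *args, c, d=20, **kwargs)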

So, when you do a CALL_FUNCTION with P positional args and K keyword args, here's how that works.

First, it pops everything off the stack, building a dict of kwargs and a (reversed) tuple of args, and creates a frame with the appropriate f_back, with f_code set to the function's __code__, etc.

Then it can match arguments to parameters in the frame's f_locals.
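
Here's a rough look at such a frame from the inside, using the CPython-specific sys._getframe:

import sys

def outer():
    inner(1)

def inner(a, b=2):
    f = sys._getframe()  # the frame CALL_FUNCTION created for this call
    print(f.f_code is inner.__code__)         # True
    print(f.f_back.f_code is outer.__code__)  # True
    print(f.f_locals['a'], f.f_locals['b'])   # 1 2

outer()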

The most important attribute is co_varnames. This holds all normal parameter names, then all keyword-only parameter names, then the *args name (if present, as indicated by co_flags & 4), then the **kwargs name (if present, co_flags & 8), then any locals defined in the function.
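
For example:

def f(a, b, *args, c, **kwargs):
    x = 1
    return x

# Positional names, then keyword-only names, then *args, then **kwargs,
# then plain locals:
print(f.__code__.co_varnames)         # ('a', 'b', 'c', 'args', 'kwargs', 'x')
print(f.__code__.co_argcount)         # 2
print(f.__code__.co_kwonlyargcount)   # 1
print(bool(f.__code__.co_flags & 4))  # True: there's an *args
print(bool(f.__code__.co_flags & 8))  # True: there's a **kwargs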

For positional arguments: Up to co_argcount, they get bound to the corresponding name from co_varnames. Extra positional arguments get stuck in a tuple that gets bound to the *args name (or raise a TypeError if there isn't one).

Keyword arguments just get bound by name, if the name matches a parameter (raising a TypeError if that name was already bound by a positional or an earlier keyword argument), and any leftovers are bound as a dict to the **kwargs name (or raise a TypeError if there isn't one).

If there are missing positional parameters at the end, the default values are found by index from the function's __defaults__, which lines up with the tail of the positional parameter list. (If a missing parameter isn't covered by the __defaults__ tuple, meaning a required parameter was never filled in by either a positional or a keyword argument, that's a TypeError.) If there are missing keyword-only parameters, the values are looked up by name in the function's __kwdefaults__ dict.
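
For example:

def f(a, b=2, c=3, *, d, e=5):
    pass

# __defaults__ lines up with the *tail* of the positional parameters;
# keyword-only defaults live in a dict instead:
print(f.__defaults__)    # (2, 3)
print(f.__kwdefaults__)  # {'e': 5}

f(1, d=4)  # fine: b and c come from __defaults__, e from __kwdefaults__
# f(1)     # TypeError: f() missing 1 required keyword-only argument: 'd'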

So, after all this, unless we raised a TypeError and bailed, the frame's locals are set up properly, so CALL_FUNCTION can set up the rest of the frame attributes and jump to the start of the function's bytecode, which can just be the bytecode for the first statement in the function's source.
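
Here's the whole matching procedure as a rough pure-Python sketch. This bind function is made up for illustration; it ignores fast locals, closures, and other details, but it follows the rules from the last few paragraphs:

def bind(func, args, kwargs):
    # Simplified sketch of CALL_FUNCTION's matching; not CPython's real code.
    code = func.__code__
    nargs, nkwonly = code.co_argcount, code.co_kwonlyargcount
    posnames = code.co_varnames[:nargs]
    kwonlynames = code.co_varnames[nargs:nargs + nkwonly]
    has_star = bool(code.co_flags & 4)
    has_starstar = bool(code.co_flags & 8)
    f_locals = {}

    # Positional arguments bind by position, up to co_argcount...
    for name, value in zip(posnames, args):
        f_locals[name] = value
    # ...and any extras go into the *args tuple, if there is one.
    extra = tuple(args[nargs:])
    if has_star:
        f_locals[code.co_varnames[nargs + nkwonly]] = extra
    elif extra:
        raise TypeError('too many positional arguments')

    # Keyword arguments bind by name; leftovers go into **kwargs.
    extra_kw = {}
    for name, value in kwargs.items():
        if name in posnames or name in kwonlynames:
            if name in f_locals:
                raise TypeError('multiple values for ' + name)
            f_locals[name] = value
        else:
            extra_kw[name] = value
    if has_starstar:
        f_locals[code.co_varnames[nargs + nkwonly + has_star]] = extra_kw
    elif extra_kw:
        raise TypeError('unexpected keyword arguments')

    # Missing positional parameters fill from __defaults__ by index...
    defaults = func.__defaults__ or ()
    for i, name in enumerate(posnames):
        if name not in f_locals:
            j = i - (nargs - len(defaults))
            if j < 0:
                raise TypeError('missing required argument ' + name)
            f_locals[name] = defaults[j]
    # ...and missing keyword-only parameters from __kwdefaults__ by name.
    kwdefaults = func.__kwdefaults__ or {}
    for name in kwonlynames:
        if name not in f_locals:
            if name not in kwdefaults:
                raise TypeError('missing keyword-only argument ' + name)
            f_locals[name] = kwdefaults[name]
    return f_locals

def f(a, b=2, *args, c, **kw):
    pass

print(bind(f, (1, 10, 20), {'c': 3, 'x': 4}))
# {'a': 1, 'b': 10, 'args': (20,), 'c': 3, 'kw': {'x': 4}}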

Extra details

CPython actually makes things a bit more complicated, because locals are actually stored as an array of objects, which can be indexed by LOAD_FAST/STORE_FAST opcodes (or LOAD_DEREF/STORE_DEREF, if used as a closure variable in a local function) rather than looked up repeatedly by name; f_locals is basically there to support the locals() function and other inspection. So, instead of just binding each parameter by name in f_locals, it also binds it by index in the array. For keyword arguments, it does this by reverse-lookup on co_varnames; for positional arguments and __defaults__, of course, it already has the index.
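
You can see the indexed access in a disassembly (again, opcode names vary across versions; BINARY_ADD became a BINARY_OP on newer CPythons):

import dis

def f(a):
    b = a + 1
    return b

dis.dis(f)
# Locals are addressed by index, with the name shown only for humans:
#   LOAD_FAST    0 (a)
#   LOAD_CONST   1 (1)
#   BINARY_ADD
#   STORE_FAST   1 (b)
#   LOAD_FAST    1 (b)
#   RETURN_VALUE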

CPython also does things in a different order, and mutates kwargs in-place, and does various other things that make sense in the C API but not in Python itself. But none of this really changes anything.

It's been more than a decade since Typical Programmer Greg Jorgensen taught the world about Abject-Oriented Programming.

Much of what he said still applies, but other things have changed. Languages in the Abject-Oriented space have been borrowing ideas from another paradigm entirely—and then everyone realized that languages like Python, Ruby, and JavaScript had been doing it for years and just hadn't noticed (because these languages do not require you to declare what you're doing, or even to know what you're doing). Meanwhile, new hybrid languages borrow freely from both paradigms.

This other paradigm—which is actually older, but was largely constrained to university basements until recent years—is called Functional Addiction.

A Functional Addict is someone who regularly gets higher-order—sometimes they may even exhibit dependent types—but still manages to retain a job.

Retaining a job is of course the goal of all programming. This is why some of these new hybrid languages, like Rust, check all borrowing, from both paradigms, so extensively that you can make regular progress for months without ever successfully compiling your code, and your managers will appreciate that progress. After all, once it does compile, it will definitely work.

Closures

It's long been known that Closures are dual to Encapsulation.

As Abject-Oriented Programming explained, Encapsulation involves making all of your variables public, and ideally global, to let the rest of the code decide what should and shouldn't be private.

Closures, by contrast, are a way of referring to variables from outer scopes. And there is no scope more outer than global.

Immutability

One of the reasons Functional Addiction has become popular in recent years is that to truly take advantage of multi-core systems, you need immutable data, sometimes also called persistent data.

Instead of mutating a function to fix a bug, you should always make a new copy of that function. For example:

function getCustName(custID)
{
    custRec = readFromDB("customer", custID);
    fullname = custRec[1] + ' ' + custRec[2];
    return fullname;
}

When you discover that you actually wanted fields 2 and 3 rather than 1 and 2, it might be tempting to mutate the state of this function. But doing so is dangerous. The right answer is to make a copy, and then try to remember to use the copy instead of the original:

function getCustName(custID)
{
    custRec = readFromDB("customer", custID);
    fullname = custRec[1] + ' ' + custRec[2];
    return fullname;
}

function getCustName2(custID)
{
    custRec = readFromDB("customer", custID);
    fullname = custRec[2] + ' ' + custRec[3];
    return fullname;
}

This means anyone still using the original function can continue to reference the old code, but as soon as it's no longer needed, it will be automatically garbage collected. (Automatic garbage collection isn't free, but it can be outsourced cheaply.)

Higher-Order Functions

In traditional Abject-Oriented Programming, you are required to give each function a name. But over time, the name of the function may drift away from what it actually does, making it as misleading as comments. Experience has shown that people will only keep one copy of their information up to date, and the CHANGES.TXT file is the right place for that.

Higher-Order Functions can solve this problem:

function []Functions = [
    lambda(custID) {
        custRec = readFromDB("customer", custID);
        fullname = custRec[1] + ' ' + custRec[2];
        return fullname;
    },
    lambda(custID) {
        custRec = readFromDB("customer", custID);
        fullname = custRec[2] + ' ' + custRec[3];
        return fullname;
    },
]

Now you can refer to these functions by order, so there's no need for names.

Parametric Polymorphism

Traditional languages offer Abject-Oriented Polymorphism and Ad-Hoc Polymorphism (also known as Overloading), but better languages also offer Parametric Polymorphism.

The key to Parametric Polymorphism is that the type of the output can be determined from the type of the inputs via Algebra. For example:

function getCustData(custId, x)
{
    if (x == int(x)) {
        custRec = readFromDB("customer", custId);
        fullname = custRec[1] + ' ' + custRec[2];
        return int(fullname);
    } else if (x.real == 0) {
        custRec = readFromDB("customer", custId);
        fullname = custRec[1] + ' ' + custRec[2];
        return double(fullname);
    } else {
        custRec = readFromDB("customer", custId);
        fullname = custRec[1] + ' ' + custRec[2];
        return complex(fullname);
    }
}

Notice that we've called the variable x. This is how you know you're using Algebraic Data Types. The names y, z, and sometimes w are also Algebraic.

Type Inference

Languages that enable Functional Addiction often feature Type Inference. This means that the compiler can infer your typing without you having to be explicit:


function getCustName(custID)
{
    // WARNING: Make sure the DB is locked here or
    custRec = readFromDB("customer", custID);
    fullname = custRec[1] + ' ' + custRec[2];
    return fullname;
}

We didn't specify what will happen if the DB is not locked. And that's fine, because the compiler will figure it out and insert code that corrupts the data, without us needing to tell it to!

By contrast, most Abject-Oriented languages are either nominally typed—meaning that you give names to all of your types instead of meanings—or dynamically typed—meaning that your variables are all unique individuals that can accomplish anything if they try.

Memoization

Memoization means caching the results of a function call:

function getCustName(custID)
{
    if (custID == 3) { return "John Smith"; }
    custRec = readFromDB("customer", custID);
    fullname = custRec[1] + ' ' + custRec[2];
    return fullname;
}

Non-Strictness

Non-Strictness is often confused with Laziness, but in fact Laziness is just one kind of Non-Strictness. Here's an example that compares two different forms of Non-Strictness:

/****************************************
*
* TO DO:
*
* get tax rate for the customer state
* eventually from some table
*
****************************************/
// function lazyTaxRate(custId) {}

function callByNameTaxRate(custId)
{
    /****************************************
    *
    * TO DO:
    *
    * get tax rate for the customer state
    * eventually from some table
    *
    ****************************************/
}

Both are Non-Strict, but the second one forces the compiler to actually compile the function just so we can Call it By Name. This causes code bloat. The Lazy version will be smaller and faster. Plus, Lazy programming allows us to create infinite recursion without making the program hang:

/****************************************
*
* TO DO:
*
* get tax rate for the customer state
* eventually from some table
*
****************************************/
// function lazyTaxRateRecursive(custId) { lazyTaxRateRecursive(custId); }

Laziness is often combined with Memoization:

function getCustName(custID)
{
    // if (custID == 3) { return "John Smith"; }
    custRec = readFromDB("customer", custID);
    fullname = custRec[1] + ' ' + custRec[2];
    return fullname;
}

Outside the world of Functional Addicts, this same technique is often called Test-Driven Development. If enough tests can be embedded in the code to achieve 100% coverage, or at least a decent amount, your code is guaranteed to be safe. But because the tests are not compiled and executed in the normal run, or indeed ever, they don't affect performance or correctness.

Conclusion

Many people claim that the days of Abject-Oriented Programming are over. But this is pure hype. Functional Addiction and Abject Orientation are not actually at odds with each other, but instead complement each other.