Let's say you have a network server built around a single-threaded event loop—gevent, Twisted, asyncore, something simple you built on top of select.select, whatever. It's important to handle every event quickly. If you take 30 seconds responding to a request from one connection, you can't deal with anyone else's requests, or accept new connections, for that whole 30 seconds.

But what if the request actually takes 30 seconds' worth of work to handle?

This is exactly what threads are for. You package up a job that does the 30 seconds of work and responds to the request when it's done, kick it out of the event loop to run in parallel, and then move on to the next event without waiting for it to finish.

The same thing applies to GUI programs. Clicking a button can't stop the interface from responding for 30 seconds. So, if the button means there's 30 seconds of work to do, kick it out of the event loop.

Thread pools

The idea of a thread pool is pretty simple: Your code doesn't have to worry about finding a thread to use, or starting a thread and making sure there aren't too many, or anything like that. You just package up a job, and tell the pool to run it as soon as it can. If there's an idle thread, it'll run your job immediately. If not, your job will go into a queue to be run in its turn.

You can build a thread pool pretty easily, but you don't have to, because Python has a really simple one in the standard library: concurrent.futures. (If you're using Python 3.1 or earlier, including 2.7, download and install the backport.)
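For reference, here's a minimal self-contained sketch of how concurrent.futures works on its own (slow_square is just an illustrative stand-in, not part of the server below):

```python
from concurrent.futures import ThreadPoolExecutor

def slow_square(n):
    # Stand-in for a job that takes a long time.
    return n * n

with ThreadPoolExecutor(max_workers=4) as executor:
    # submit returns immediately; the job runs on a pool thread.
    future = executor.submit(slow_square, 7)
    # ... your code is free to do other things here ...
    print(future.result())  # blocks until the job is done; prints 49
```

submit hands the job to an idle thread if there is one, or queues it to run in its turn otherwise, exactly as described above.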

Let's say you've built a server that looks like this:

    class FooHandler(BaseHandler):
        def ping(self, responder):
            responder('pong')
        def time(self, responder):
            responder(time.time())
        def add(self, responder, x, y):
            responder(x+y)
        def sumrange(self, responder, count):
            responder(sum(range(count)))

The problem is that if someone calls sumrange with a count of 1000000000, that may take 30 seconds, and during those 30 seconds, your server is not listening to anyone else. So, how do we put that on a thread?

    from concurrent.futures import ThreadPoolExecutor

    class FooHandler(BaseHandler):
        def __init__(self):
            self.executor = ThreadPoolExecutor(max_workers=8)
        def ping(self, responder):
            responder('pong')
        def time(self, responder):
            responder(time.time())
        def add(self, responder, x, y):
            responder(x+y)
        def sumrange(self, responder, count):
            self.executor.submit(lambda: responder(sum(range(count))))

That's all there is to it. Now, sumrange can take as long as it wants, and nobody else is blocked.

Process pools

What happens if two people call sumrange(1000000000) at the same time?

Since each request gets a separate thread, and your machine probably has at least two cores, it should still take around 30 seconds to answer both of them, right?

Not in Python. The GIL (Global Interpreter Lock) prevents two Python threads from doing CPU work at the same time. If your threads are doing more I/O than CPU—which is usually the case for servers—that's not a problem, but sumrange is clearly all CPU work. So, it'll take 60 seconds to answer both of them.

Not only that, but in Python 2, there were some problems with the GIL that could cause it to take more than twice as long, or even to intermittently block the main thread.

What's the solution?

Use processes instead of threads. There's more overhead to passing information between processes than between threads, but all we're passing here is an integer and some "responder" object that's probably ultimately just a simple wrapper around a socket handle.

There are also restrictions on what you can pass. Depending on how that responder was coded, you may or may not be able to pass it. If you can, the change is as easy as this:

    def __init__(self):
        self.executor = ProcessPoolExecutor(max_workers=8)

If the responder object wasn't coded in a way that lets you pass it between processes, you'll get an error like this:

    _pickle.PicklingError: Can't pickle <function <lambda> at 0x...>: attribute lookup function failed

Futures

So, what do you do when you get that error?

If you can't give the background process the responder object, you need to have it return you the value, so you can call responder on it. But how do you do that?

That "executor.submit" call actually returns something, called a future. We ignored it before, but it's exactly what we need now. A future represents a result that will exist later. In our case, the job is sum(range(count)), so the result is just the sum that the child process will pass back when it finishes all that work.

What good is that?

Well, you can do four basic things with a future:

First, you can wait for it to finish by calling result(), but that brings you right back to the starting point—you sit around for 30 seconds blocking the event loop.

You can wait on a batch of futures at once. That's pretty cool, because it means that a single thread could wait on the results of 1000 jobs run across 8 child processes. But you don't want to have to write that thread if you don't have to, right? Ideally, your event loop framework would know how to wait for futures the same way it waits for sockets. But, unless you're using the PEP 3156 prototype, yours probably doesn't.
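For the record, here's what waiting on a batch looks like with the stdlib's concurrent.futures.wait (a sketch; in a real server this would tie up whatever thread does the waiting):

```python
from concurrent.futures import ThreadPoolExecutor, wait

with ThreadPoolExecutor(max_workers=4) as executor:
    futures = [executor.submit(sum, range(n)) for n in (10, 100, 1000)]
    done, not_done = wait(futures)  # blocks until every future finishes
    results = sorted(f.result() for f in done)
    print(results)  # [45, 4950, 499500]
```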

You can poll a future, or a batch of futures, to see if it's done. Which means you could just poll your whole list of futures each time through the event loop. But what if some poor user is waiting for their sum, and no other events happen for 10 minutes? So you need to add some kind of timeout or on_timer event or equivalent to make sure you keep checking, at least, say, once per second, whether there are any futures to check.
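Polling looks something like this (a sketch; the sleep stands in for one trip through your event loop):

```python
import time
from concurrent.futures import ThreadPoolExecutor

with ThreadPoolExecutor(max_workers=2) as executor:
    future = executor.submit(sum, range(1000))
    while not future.done():  # non-blocking check
        time.sleep(0.01)      # stand-in for handling other events
    print(future.result())    # prints 499500
```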

Finally, you can just attach a callback that will get run whenever the future is finished. Which… doesn't have any downside at all. That's all you want here: to run responder on the result whenever it's ready.

This is a bit tricky to get your head around, but an example shows how simple it actually is:

    def sumrange(self, responder, count):
        # Submit a picklable callable; a lambda can't be pickled
        # to send to a child process.
        future = self.executor.submit(sum, range(count))
        future.add_done_callback(lambda f: responder(f.result()))

It's been more than a decade since Typical Programmer Greg Jorgensen taught the world about Abject-Oriented Programming.

Much of what he said still applies, but other things have changed. Languages in the Abject-Oriented space have been borrowing ideas from another paradigm entirely—and then everyone realized that languages like Python, Ruby, and JavaScript had been doing it for years and just hadn't noticed (because these languages do not require you to declare what you're doing, or even to know what you're doing). Meanwhile, new hybrid languages borrow freely from both paradigms.

This other paradigm—which is actually older, but was largely constrained to university basements until recent years—is called Functional Addiction.

A Functional Addict is someone who regularly gets higher-order—sometimes they may even exhibit dependent types—but still manages to retain a job.

Retaining a job is of course the goal of all programming. This is why some of these new hybrid languages, like Rust, check all borrowing, from both paradigms, so extensively that you can make regular progress for months without ever successfully compiling your code, and your managers will appreciate that progress. After all, once it does compile, it will definitely work.

Closures

It's long been known that Closures are dual to Encapsulation.

As Abject-Oriented Programming explained, Encapsulation involves making all of your variables public, and ideally global, to let the rest of the code decide what should and shouldn't be private.

Closures, by contrast, are a way of referring to variables from outer scopes. And there is no scope more outer than global.

Immutability

One of the reasons Functional Addiction has become popular in recent years is that to truly take advantage of multi-core systems, you need immutable data, sometimes also called persistent data.

Instead of mutating a function to fix a bug, you should always make a new copy of that function. For example:

function getCustName(custID)
{
    custRec = readFromDB("customer", custID);
    fullname = custRec[1] + ' ' + custRec[2];
    return fullname;
}

When you discover that you actually wanted fields 2 and 3 rather than 1 and 2, it might be tempting to mutate the state of this function. But doing so is dangerous. The right answer is to make a copy, and then try to remember to use the copy instead of the original:

function getCustName(custID)
{
    custRec = readFromDB("customer", custID);
    fullname = custRec[1] + ' ' + custRec[2];
    return fullname;
}

function getCustName2(custID)
{
    custRec = readFromDB("customer", custID);
    fullname = custRec[2] + ' ' + custRec[3];
    return fullname;
}

This means anyone still using the original function can continue to reference the old code, but as soon as it's no longer needed, it will be automatically garbage collected. (Automatic garbage collection isn't free, but it can be outsourced cheaply.)

Higher-Order Functions

In traditional Abject-Oriented Programming, you are required to give each function a name. But over time, the name of the function may drift away from what it actually does, making it as misleading as comments. Experience has shown that people will only keep one copy of their information up to date, and the CHANGES.TXT file is the right place for that.

Higher-Order Functions can solve this problem:

function []Functions = [
    lambda(custID) {
        custRec = readFromDB("customer", custID);
        fullname = custRec[1] + ' ' + custRec[2];
        return fullname;
    },
    lambda(custID) {
        custRec = readFromDB("customer", custID);
        fullname = custRec[2] + ' ' + custRec[3];
        return fullname;
    },
]

Now you can refer to these functions by order, so there's no need for names.

Parametric Polymorphism

Traditional languages offer Abject-Oriented Polymorphism and Ad-Hoc Polymorphism (also known as Overloading), but better languages also offer Parametric Polymorphism.

The key to Parametric Polymorphism is that the type of the output can be determined from the type of the inputs via Algebra. For example:

function getCustData(custId, x)
{
    if (x == int(x)) {
        custRec = readFromDB("customer", custId);
        fullname = custRec[1] + ' ' + custRec[2];
        return int(fullname);
    } else if (x.real == 0) {
        custRec = readFromDB("customer", custId);
        fullname = custRec[1] + ' ' + custRec[2];
        return double(fullname);
    } else {
        custRec = readFromDB("customer", custId);
        fullname = custRec[1] + ' ' + custRec[2];
        return complex(fullname);
    }
}

Notice that we've called the variable x. This is how you know you're using Algebraic Data Types. The names y, z, and sometimes w are also Algebraic.

Type Inference

Languages that enable Functional Addiction often feature Type Inference. This means that the compiler can infer your typing without you having to be explicit:


function getCustName(custID)
{
    // WARNING: Make sure the DB is locked here or
    custRec = readFromDB("customer", custID);
    fullname = custRec[1] + ' ' + custRec[2];
    return fullname;
}

We didn't specify what will happen if the DB is not locked. And that's fine, because the compiler will figure it out and insert code that corrupts the data, without us needing to tell it to!

By contrast, most Abject-Oriented languages are either nominally typed—meaning that you give names to all of your types instead of meanings—or dynamically typed—meaning that your variables are all unique individuals that can accomplish anything if they try.

Memoization

Memoization means caching the results of a function call:

function getCustName(custID)
{
    if (custID == 3) { return "John Smith"; }
    custRec = readFromDB("customer", custID);
    fullname = custRec[1] + ' ' + custRec[2];
    return fullname;
}

Non-Strictness

Non-Strictness is often confused with Laziness, but in fact Laziness is just one kind of Non-Strictness. Here's an example that compares two different forms of Non-Strictness:

/****************************************
*
* TO DO:
*
* get tax rate for the customer state
* eventually from some table
*
****************************************/
// function lazyTaxRate(custId) {}

function callByNameTaxRate(custId)
{
    /****************************************
    *
    * TO DO:
    *
    * get tax rate for the customer state
    * eventually from some table
    *
    ****************************************/
}

Both are Non-Strict, but the second one forces the compiler to actually compile the function just so we can Call it By Name. This causes code bloat. The Lazy version will be smaller and faster. Plus, Lazy programming allows us to create infinite recursion without making the program hang:

/****************************************
*
* TO DO:
*
* get tax rate for the customer state
* eventually from some table
*
****************************************/
// function lazyTaxRateRecursive(custId) { lazyTaxRateRecursive(custId); }

Laziness is often combined with Memoization:

function getCustName(custID)
{
    // if (custID == 3) { return "John Smith"; }
    custRec = readFromDB("customer", custID);
    fullname = custRec[1] + ' ' + custRec[2];
    return fullname;
}

Outside the world of Functional Addicts, this same technique is often called Test-Driven Development. If enough tests can be embedded in the code to achieve 100% coverage, or at least a decent amount, your code is guaranteed to be safe. But because the tests are not compiled and executed in the normal run, or indeed ever, they don't affect performance or correctness.

Conclusion

Many people claim that the days of Abject-Oriented Programming are over. But this is pure hype. Functional Addiction and Abject Orientation are not actually at odds with each other, but instead complement each other.
