The first time you try to write a network client or server directly on top of sockets, you do something like this:

for filename in filenames:
    with open(filename, 'rb') as f:
        sock.sendall(f.read())

And then, on the other side:

for i in count(0):
    msg = sock.recv(1<<32)
    if not msg: break
    with open('file{}'.format(i), 'wb') as f:
        f.write(msg)

At first, this seems to work, but it fails on larger files. Or as soon as you try to use it across the internet. Or 1% of the time. Or when the computer is busy.

In reality, it can't possibly work, except in special circumstances. A TCP socket is a stream of bytes. Every time you call send (or sendall), you put more bytes on that stream. Every time you call recv, you get some or all of the bytes on the stream.

Let's say you send 1000 bytes, then send 1000000 bytes, and the other side calls recv. It might get 1000 bytes—but it just as easily might get 1001000 bytes, or 7, or 69102. There is no way to guarantee that it gets just the first send.
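You can watch this happen with a `socketpair` in a few lines. This is a minimal sketch: two sends on one side, and the other side just drains the stream. Nothing in what the receiver sees marks where the first send ended and the second began:

```python
import socket

a, b = socket.socketpair()
a.sendall(b'x' * 1000)
a.sendall(b'y' * 500)
a.close()  # closing signals EOF to the other side

received = b''
while True:
    chunk = b.recv(4096)  # chunk boundaries here are arbitrary
    if not chunk:
        break
    received += chunk
b.close()

print(len(received))  # 1500 bytes total, with no record of the split
```

However `recv` happens to chunk the data, all you can recover is 1500 undifferentiated bytes.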

Why does it seem to work in my initial tests?

If you send 10000 bytes to localhost while the other side is waiting to receive something, and there's enough idle time on your CPUs to run the other side's code, your OS will probably just copy your 10000-byte send buffer over to the other side's receive buffer and give it the data all at once. It's not guaranteed, but it will usually happen. But only if the other side is on the same machine (or at least in the same bridged LAN), and your data fits into a single buffer, and you don't finish two sends before it gets enough CPU time to do its receive, and so forth.

So, what's the solution?

The key is that the other side has to somehow know that the next 1000 bytes are the message it wants. This means that unless you have some out-of-band way of transmitting that information, or your messages are some type that inherently includes that information (e.g., JSON objects, or MIME messages), you have to create a byte stream that has not just your messages, but enough information to tell where one message ends and the next begins.

In other words, you have to design, and then implement, a protocol.

That sounds scary… but it really isn't.

A simple protocol

Assuming your messages can't be more than 4GB long, just send the length, packed into exactly 4 bytes, and then send the data itself. That way, the other side always knows how much to read: read exactly 4 bytes, unpack them into a length, then read exactly that many bytes:

import struct

def send_one_message(sock, data):
    length = len(data)
    sock.sendall(struct.pack('!I', length))  # 4-byte big-endian length prefix
    sock.sendall(data)

def recv_one_message(sock):
    lengthbuf = recvall(sock, 4)
    length, = struct.unpack('!I', lengthbuf)
    return recvall(sock, length)

That's almost a complete protocol. The only problem is that Python doesn't have a recvall counterpart to sendall, but you can write it yourself:

def recvall(sock, count):
    buf = b''
    while count:
        newbuf = sock.recv(count)
        if not newbuf:
            return None  # connection closed before we got everything
        buf += newbuf
        count -= len(newbuf)
    return buf
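A quick way to convince yourself the framing works is a `socketpair` round trip. The three functions are repeated here so the sketch runs on its own:

```python
import socket
import struct

def send_one_message(sock, data):
    sock.sendall(struct.pack('!I', len(data)))  # 4-byte length prefix
    sock.sendall(data)

def recvall(sock, count):
    buf = b''
    while count:
        newbuf = sock.recv(count)
        if not newbuf:
            return None
        buf += newbuf
        count -= len(newbuf)
    return buf

def recv_one_message(sock):
    lengthbuf = recvall(sock, 4)
    if lengthbuf is None:
        return None
    length, = struct.unpack('!I', lengthbuf)
    return recvall(sock, length)

# Two messages go out back to back; the length prefixes let the
# receiver split the stream correctly, however recv chunks it.
a, b = socket.socketpair()
send_one_message(a, b'hello')
send_one_message(a, b'world, this is a longer message')
first = recv_one_message(b)
second = recv_one_message(b)
print(first, second)  # b'hello' b'world, this is a longer message'
a.close()
b.close()
```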

Protocol design

There are a few problems with the protocol described above:
  • Binary headers are hard to read, generate, or debug as a human.
  • Headers with no redundancy are not at all robust. One mistake, and you get out of sync and there's no way to recover. Worse, there's no way to even notice that you've made a mistake. You could read 4 arbitrary bytes from the middle of a file as a header, and think it means there's a 3GB file coming up, at which point you may run out of memory, or wait forever for data that's never coming, etc.
  • You have to pick some arbitrary limit, like 4GB. (Of course 640K ought to be enough for anyone…)
  • There's no way to pass additional information about each file, like the type, or name.
  • There's no way to extend the protocol in the future without making it completely incompatible.
Not all of these problems will be relevant to every use case. You can solve the first 3 by using something like netstrings. Or by using delimiters instead of headers (such as a newline, assuming you're sending text messages that can't contain newlines, or you're willing to escape your data). If you need to solve all 5, consider using something like RFC2822, the "Name: Value" format used by HTTP, email, and many other internet protocols.
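To make the netstring idea concrete, here's a minimal sketch of the format — a human-readable decimal length, a colon, the payload, and a trailing comma, like `5:hello,`. The function names are invented for illustration, and the decoder deliberately raises on anything malformed, which is exactly the redundancy that lets you notice you've gone out of sync:

```python
def netstring_encode(data: bytes) -> bytes:
    # b'hello' -> b'5:hello,'
    return str(len(data)).encode('ascii') + b':' + data + b','

def netstring_decode(stream: bytes):
    # Parse one netstring from the front of a buffer, returning
    # (payload, rest). Raises ValueError on malformed input.
    head, sep, rest = stream.partition(b':')
    if not sep or not head.isdigit():
        raise ValueError('bad netstring header')
    length = int(head)
    if len(rest) < length + 1:
        raise ValueError('incomplete netstring')
    if rest[length:length + 1] != b',':
        raise ValueError('missing trailing comma')
    return rest[:length], rest[length + 1:]

msg, rest = netstring_decode(netstring_encode(b'hello') + b'extra')
print(msg, rest)  # b'hello' b'extra'
```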

Meanwhile, this is purely a data protocol. But often, you want to send commands or requests, and get back responses. For example, you might want to tell the server to "cd foo" before storing the next file. Or you may want to get a filter, and then send files matching that filter, instead of sending all of your files. Or you may want something you can build an interactive application around. This makes things more complicated, and there are many different ways to deal with the issues that arise. Look over HTTP, FTP, IMAP, SMTP, and IRC for some good examples. They're all well-known and well-documented, with both tutorials for learning and rigorous RFCs for reference. They have Python libraries to play with. They have servers and clients pre-installed or readily-available for almost every platform. And they're all relatively easy to talk with by hand, over a simple netcat connection.

Protocol implementation

There are also a few things that are less than ideal about the protocol handler above:
  • It's directly tied to sockets; you can't use it with, say, an SSL transport, or an HTTP tunnel, or a fake transport that feeds in prepared testing data.
  • It can only be used synchronously. Not a big deal for a urllib2-style client where it's appropriate to just block until the whole message is received, or even for a server that's only meant to handle a handful of simultaneous connections (just spawn a thread or child process for each one), but if you want to play a video as you receive it, or handle 10000 simultaneous clients, this is a problem.
  • Receiving 4 bytes is a pretty slow way to use sockets. Also, trying to receive exactly what you want makes it easy to accidentally run into the same bug you started with, and not notice it until you try a larger file or communicating across the internet. So, you generally want to receive some multiple of 4K at a time, append onto a buffer, and pull messages out of the buffer.
Dealing with these problems can be complicated. For learning purposes, it's worth building a transport-independent protocol and a couple of transports from scratch, or an asynchronous server directly on top of select.select, etc. But for practical development, you don't want to do that.
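The buffering approach from the last bullet can be sketched as a small class — the name `MessageBuffer` and its `feed` method are invented for illustration — that accumulates whatever the transport hands you and returns complete length-prefixed messages as they become available; partial data just waits for the next feed:

```python
import struct

class MessageBuffer:
    def __init__(self):
        self._buf = b''

    def feed(self, data: bytes):
        # Append raw bytes, then pull out every complete message.
        self._buf += data
        messages = []
        while len(self._buf) >= 4:
            length, = struct.unpack('!I', self._buf[:4])
            if len(self._buf) < 4 + length:
                break  # header seen, but the body isn't all here yet
            messages.append(self._buf[4:4 + length])
            self._buf = self._buf[4 + length:]
        return messages

mb = MessageBuffer()
wire = struct.pack('!I', 5) + b'hello' + struct.pack('!I', 3) + b'abc'
# Feed the stream in awkward chunks, as a real socket might deliver it:
msgs = mb.feed(wire[:7]) + mb.feed(wire[7:])
print(msgs)  # [b'hello', b'abc']
```

Note that the first feed returns nothing — it has a header but only part of a body — and the second returns both messages at once. That's the byte-stream reality the receiver has to live with.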

That's why there are networking frameworks that do all the hard stuff for you, so you only have to write the interesting parts that are relevant to your application. In Python 3.4 and later, there's a framework built into the standard library called asyncio (and, for 3.3, a backport on PyPI). If you're using an older version, you only have asyncore and asynchat, and you don't want to use those; instead, you probably want to install and use something like Twisted or Monocle. There will be a bit of a learning curve, but it's worth it.
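As a taste of what that looks like, here's the same length-prefixed protocol on top of asyncio streams — using the modern (3.7+) `asyncio.run` API rather than the coroutine style current when asyncio was new, and with a simple echo server purely for illustration. `readexactly` raises `IncompleteReadError` on a short read, so there's no hand-rolled recvall:

```python
import asyncio
import struct

async def read_message(reader):
    lengthbuf = await reader.readexactly(4)
    length, = struct.unpack('!I', lengthbuf)
    return await reader.readexactly(length)

async def handle_client(reader, writer):
    # Echo each framed message back to the client.
    try:
        while True:
            message = await read_message(reader)
            writer.write(struct.pack('!I', len(message)) + message)
            await writer.drain()
    except asyncio.IncompleteReadError:
        pass  # client closed the connection
    finally:
        writer.close()

async def main():
    server = await asyncio.start_server(handle_client, '127.0.0.1', 0)
    host, port = server.sockets[0].getsockname()
    reader, writer = await asyncio.open_connection(host, port)
    writer.write(struct.pack('!I', 5) + b'hello')
    await writer.drain()
    reply = await read_message(reader)
    writer.close()
    server.close()
    await server.wait_closed()
    return reply

result = asyncio.run(main())
print(result)  # b'hello'
```

Notice that the transport details — buffering, partial reads, connection management — are all the framework's problem now; you only wrote the framing.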

If you're building on top of a higher-level protocol like HTTP or JSON-RPC, you can write even less code by using a higher-level framework. The standard library has clients for a number of major protocols, and servers for some. But it doesn't handle everything (e.g., JSON-RPC), and you still may want to reach for third-party libraries in some cases (e.g., for client-side HTTP, Requests is a lot easier to use than urllib2 for anything but the most trivial or most complex cases). And sometimes, the right answer is to go even higher-level and build a web service instead of a protocol, building on WSGI, or a server framework like Django.

It's been more than a decade since Typical Programmer Greg Jorgensen taught the world about Abject-Oriented Programming.

Much of what he said still applies, but other things have changed. Languages in the Abject-Oriented space have been borrowing ideas from another paradigm entirely—and then everyone realized that languages like Python, Ruby, and JavaScript had been doing it for years and just hadn't noticed (because these languages do not require you to declare what you're doing, or even to know what you're doing). Meanwhile, new hybrid languages borrow freely from both paradigms.

This other paradigm—which is actually older, but was largely constrained to university basements until recent years—is called Functional Addiction.

A Functional Addict is someone who regularly gets higher-order—sometimes they may even exhibit dependent types—but still manages to retain a job.

Retaining a job is of course the goal of all programming. This is why some of these new hybrid languages, like Rust, check all borrowing, from both paradigms, so extensively that you can make regular progress for months without ever successfully compiling your code, and your managers will appreciate that progress. After all, once it does compile, it will definitely work.

Closures

It's long been known that Closures are dual to Encapsulation.

As Abject-Oriented Programming explained, Encapsulation involves making all of your variables public, and ideally global, to let the rest of the code decide what should and shouldn't be private.

Closures, by contrast, are a way of referring to variables from outer scopes. And there is no scope more outer than global.

Immutability

One of the reasons Functional Addiction has become popular in recent years is that to truly take advantage of multi-core systems, you need immutable data, sometimes also called persistent data.

Instead of mutating a function to fix a bug, you should always make a new copy of that function. For example:

function getCustName(custID)
{
    custRec = readFromDB("customer", custID);
    fullname = custRec[1] + ' ' + custRec[2];
    return fullname;
}

When you discover that you actually wanted fields 2 and 3 rather than 1 and 2, it might be tempting to mutate the state of this function. But doing so is dangerous. The right answer is to make a copy, and then try to remember to use the copy instead of the original:

function getCustName(custID)
{
    custRec = readFromDB("customer", custID);
    fullname = custRec[1] + ' ' + custRec[2];
    return fullname;
}

function getCustName2(custID)
{
    custRec = readFromDB("customer", custID);
    fullname = custRec[2] + ' ' + custRec[3];
    return fullname;
}

This means anyone still using the original function can continue to reference the old code, but as soon as it's no longer needed, it will be automatically garbage collected. (Automatic garbage collection isn't free, but it can be outsourced cheaply.)

Higher-Order Functions

In traditional Abject-Oriented Programming, you are required to give each function a name. But over time, the name of the function may drift away from what it actually does, making it as misleading as comments. Experience has shown that people will only keep one copy of their information up to date, and the CHANGES.TXT file is the right place for that.

Higher-Order Functions can solve this problem:

function []Functions = [
    lambda(custID) {
        custRec = readFromDB("customer", custID);
        fullname = custRec[1] + ' ' + custRec[2];
        return fullname;
    },
    lambda(custID) {
        custRec = readFromDB("customer", custID);
        fullname = custRec[2] + ' ' + custRec[3];
        return fullname;
    },
]

Now you can refer to these functions by order, so there's no need for names.

Parametric Polymorphism

Traditional languages offer Abject-Oriented Polymorphism and Ad-Hoc Polymorphism (also known as Overloading), but better languages also offer Parametric Polymorphism.

The key to Parametric Polymorphism is that the type of the output can be determined from the type of the inputs via Algebra. For example:

function getCustData(custId, x)
{
    if (x == int(x)) {
        custRec = readFromDB("customer", custId);
        fullname = custRec[1] + ' ' + custRec[2];
        return int(fullname);
    } else if (x.real == 0) {
        custRec = readFromDB("customer", custId);
        fullname = custRec[1] + ' ' + custRec[2];
        return double(fullname);
    } else {
        custRec = readFromDB("customer", custId);
        fullname = custRec[1] + ' ' + custRec[2];
        return complex(fullname);
    }
}

Notice that we've called the variable x. This is how you know you're using Algebraic Data Types. The names y, z, and sometimes w are also Algebraic.

Type Inference

Languages that enable Functional Addiction often feature Type Inference. This means that the compiler can infer your typing without you having to be explicit:


function getCustName(custID)
{
    // WARNING: Make sure the DB is locked here or
    custRec = readFromDB("customer", custID);
    fullname = custRec[1] + ' ' + custRec[2];
    return fullname;
}

We didn't specify what will happen if the DB is not locked. And that's fine, because the compiler will figure it out and insert code that corrupts the data, without us needing to tell it to!

By contrast, most Abject-Oriented languages are either nominally typed—meaning that you give names to all of your types instead of meanings—or dynamically typed—meaning that your variables are all unique individuals that can accomplish anything if they try.

Memoization

Memoization means caching the results of a function call:

function getCustName(custID)
{
    if (custID == 3) { return "John Smith"; }
    custRec = readFromDB("customer", custID);
    fullname = custRec[1] + ' ' + custRec[2];
    return fullname;
}

Non-Strictness

Non-Strictness is often confused with Laziness, but in fact Laziness is just one kind of Non-Strictness. Here's an example that compares two different forms of Non-Strictness:

/****************************************
*
* TO DO:
*
* get tax rate for the customer state
* eventually from some table
*
****************************************/
// function lazyTaxRate(custId) {}

function callByNameTaxRate(custId)
{
    /****************************************
    *
    * TO DO:
    *
    * get tax rate for the customer state
    * eventually from some table
    *
    ****************************************/
}

Both are Non-Strict, but the second one forces the compiler to actually compile the function just so we can Call it By Name. This causes code bloat. The Lazy version will be smaller and faster. Plus, Lazy programming allows us to create infinite recursion without making the program hang:

/****************************************
*
* TO DO:
*
* get tax rate for the customer state
* eventually from some table
*
****************************************/
// function lazyTaxRateRecursive(custId) { lazyTaxRateRecursive(custId); }

Laziness is often combined with Memoization:

function getCustName(custID)
{
    // if (custID == 3) { return "John Smith"; }
    custRec = readFromDB("customer", custID);
    fullname = custRec[1] + ' ' + custRec[2];
    return fullname;
}

Outside the world of Functional Addicts, this same technique is often called Test-Driven Development. If enough tests can be embedded in the code to achieve 100% coverage, or at least a decent amount, your code is guaranteed to be safe. But because the tests are not compiled and executed in the normal run, or indeed ever, they don't affect performance or correctness.

Conclusion

Many people claim that the days of Abject-Oriented Programming are over. But this is pure hype. Functional Addiction and Abject Orientation are not actually at odds with each other, but instead complement each other.