A lot of people have questions like this:
I've got a 100MB CSV file. I read it into a list, populated by a csv.DictReader, and my computer starts swapping. Why?
Let's look at what it takes to store a 100MB file as a list of dicts of strings.

But first, let's look at how to solve this problem, instead of how to understand it.

Very often, you don't actually need the whole thing in memory; you're just, e.g., reading the rows from one CSV file, processing each one independently, and writing out the result to a different CSV file. You can do that by using the csv.DictReader as an iterator—for each row in the reader, write a row to the writer, without storing it anywhere. Then you only need enough memory for one row at a time.
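A minimal sketch of that pattern, using StringIO stand-ins for the real input and output files (the "name" column and the upper-casing step are made up, standing in for whatever per-row processing you actually need):

```python
import csv
import io

# Tiny in-memory stand-ins for real files; with a 100MB CSV you'd
# pass open file objects instead.
src = io.StringIO("name,city\nalice,NYC\nbob,LA\n")
dst = io.StringIO()

reader = csv.DictReader(src)
writer = csv.DictWriter(dst, fieldnames=reader.fieldnames)
writer.writeheader()
for row in reader:              # only one row in memory at a time
    row["name"] = row["name"].upper()
    writer.writerow(row)

print(dst.getvalue())
```

Because the reader is consumed lazily, peak memory stays at one row no matter how big the file is.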

Even if you do more global work (where one row's processing may depend on another row far away), do you really need this all in memory, or just in some accessible form? If you read the CSV and use it to iteratively build a database (whether a SQL database, a dbm, or your favorite kind of NoSQL store), you can get reasonably fast random access with bounded memory usage.
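For example, here's a sketch of the SQL route using the standard-library sqlite3 module (the table and column names are made up, and a real run would point the connection at a file on disk rather than :memory: to keep memory bounded):

```python
import csv
import io
import sqlite3

src = io.StringIO("name,city\nalice,NYC\nbob,LA\n")

# Load the CSV into an indexed table one row at a time; lookups are
# then served by the database, not by a giant in-memory list.
db = sqlite3.connect(":memory:")  # use a filename for real data
db.execute("CREATE TABLE people (name TEXT, city TEXT)")
reader = csv.DictReader(src)
db.executemany("INSERT INTO people VALUES (?, ?)",
               ((row["name"], row["city"]) for row in reader))
db.execute("CREATE INDEX people_name ON people (name)")

city = db.execute("SELECT city FROM people WHERE name = ?",
                  ("bob",)).fetchone()[0]
print(city)
```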

Where does the memory go?

First, let's make it specific. We've got 200K rows of 25 columns of an average of 20 characters. All of the characters are ASCII. We're using 64-bit CPython 3.4. (I'll explain how changing each of these assumptions can change things as I go along.)

Reading the file into memory

If you just open the file and write contents = f.read(), you get a 100MB string. That takes 100MB of memory for the string storage, plus 48 bytes for a str header, plus possibly 1 byte for a NUL terminator. So, still effectively just 100MB.

How do I know this? Well, you can read the source, but it's easier to just call sys.getsizeof on an empty string.
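For example (the exact header size varies slightly between CPython versions, but the one-byte-per-ASCII-character part holds):

```python
import sys

# Fixed per-string overhead: the header plus the NUL terminator.
overhead = sys.getsizeof('')
print(overhead)                                # ~49 on 64-bit CPython

# Each ASCII character adds exactly one byte on top of that.
print(sys.getsizeof('a' * 100) - overhead)     # 100
```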

This only works because we're using CPython 3.3+ with all-ASCII data. Python 3.x strings are Unicode. In CPython 3.0-3.2, that means either 2 or 4 bytes per character, depending on your build, which would mean 200MB or 400MB. In 3.3 and later, strings are one byte per character if they're pure ASCII, two if they're pure BMP, and four otherwise. Of course you can stay with un-decoded bytes objects instead of strings—and in Python 2.7, that's often what you'd do.
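You can watch that flexible representation in action with sys.getsizeof (headers differ a bit between the three kinds, so the comparison below is on large strings where the per-character cost dominates):

```python
import sys

ascii_s = 'x' * 1000             # pure ASCII: 1 byte per character
bmp_s = '\u0100' * 1000          # non-ASCII BMP: 2 bytes per character
astral_s = '\U0001F600' * 1000   # beyond the BMP: 4 bytes per character

print(sys.getsizeof(ascii_s), sys.getsizeof(bmp_s), sys.getsizeof(astral_s))
```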

Splitting into lines

If you read in a list of lines, with contents = list(f), you get 200K strings, each of which has that 48-byte header on top of its 500-byte string data. So, that's an extra 9.2MB.

Plus, the list itself has a 48-byte header, and it also needs to store references to all of those strings. In CPython, each of those references is a pointer, so in 64-bit-land, that's 8 bytes. Also, lists have extra slack on the end. (Why? Expanding a list requires allocating a new chunk of memory, copying all the old values over, and freeing the old chunk. That's expensive. If you did that for every append, lists would be too slow to use. But if you leave room for extra values on each expansion—with the extra room proportional to the current size—the reallocations and copies amortize out to essentially free. The cost is a bit of wasted space.) Let's assume it's 220K pointers; that's an extra 1.7MB.
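You can see the slack directly: getsizeof jumps only occasionally as you append, because each reallocation over-allocates proportionally to leave room for future appends.

```python
import sys

lst = []
sizes = []
for _ in range(32):
    lst.append(None)
    sizes.append(sys.getsizeof(lst))

# Sizes step up in chunks rather than growing by 8 bytes per append.
print(sizes)
```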

Splitting into columns

If you split each line into 25 columns, whether by using contents = [line.split(',') for line in f] or by using contents = list(csv.reader(f)), you get 200K * 25 strings, each of which has that 48-byte header, plus 200K lists, each with a 48-byte header plus a bit over 25 pointers, plus of course the original list of 220K pointers. Now we're talking about 281MB.

Storing in dicts

If you convert each line into a dict instead of a list, e.g., by using contents = list(csv.DictReader(f)), you've got the same strings. (You don't get a bunch of separate copies of the key strings; each dict just references the same strings.)

The only difference is that instead of 200K lists, you have 200K dicts. A dict is bigger than a list for three reasons. First, each slot in a dict has to hold a hash value, a key pointer, and a value pointer, instead of just a value pointer. Second, dicts are more complicated objects, so they have bigger headers. Finally, dicts need more slack than lists: they're more expensive to copy when they expand (you have to reprobe every hash instead of just copying a big buffer of memory), and hash tables work best when their capacity is a specific kind of number.

So, the 5M strings are unchanged, but the 200K * (48 + 27 * 8) for the per-row lists turns into 200K * (96 + 32 * 24) for the dicts. Now we're talking 624MB.

Couldn't Python do better?

One obvious thing Python could do is to store each dict as two pieces: a hash table full of keys and their hashes, which would be shared by all those 200K rows, plus a hash table full of just values, which would be separate for each row. CPython 3.3+ actually has the ability to do this (PEP 412, key-sharing dictionaries), but it only gets triggered in special cases, like the attribute dictionaries of objects of the same type. (And yes, this means that if you replace that list of dicts with a list of instances of some class, you will actually save some memory.)
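A sketch of that class trick (the Row class and its fields are made up; the key-sharing only kicks in for the attribute dicts of same-class instances):

```python
import sys

class Row:
    # Instances of the same class share one key table for their
    # attribute dicts (PEP 412), so each instance mostly pays for
    # its own values rather than a full standalone hash table.
    def __init__(self, name, city, zipcode):
        self.name = name
        self.city = city
        self.zipcode = zipcode

plain = {"name": "Alice", "city": "NYC", "zipcode": "10001"}
shared = Row("Alice", "NYC", "10001")

# The instance's dict typically reports a smaller size than the
# equivalent standalone dict, because the keys live in one shared table.
print(sys.getsizeof(plain))
print(sys.getsizeof(shared.__dict__))
```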

You could take this even further by using a shared hash table full of keys, hashes, and indices, so that each separate dict would just be an array of values, without the need for that extra slack. PyPy 2.0+ can do this, although I'm not sure exactly where it gets triggered (except that I am sure the class trick would work here, and work even better than in CPython).

Beyond that, is there a way we could get rid of the "boxing", the need to store both the strings and the array full of pointers to those strings?

Well, if all of your column values are about the same length, you could store the strings directly in the hash table (or array). See fixedhash for a quick-and-dirty implementation of a hash table with fixed-size keys. But typically, a CSV file has some columns that are only 2 or 3 characters long and others that are 50 characters or longer, so you'd be wasting more space than you'd be saving.

Failing that, you could replace the 8-byte pointers with small integer offsets into one giant string. How small could those be? Well, the string is 100MB long, so an offset takes 27 bits. If your longest column value is 64 characters long, you need another 7 bits for the length. That's 34 bits. Accessing arrays by bits is slow and complicated, but by bytes can be reasonable, which means you can use 5 bytes instead of 8 for each string reference. This cuts 38% out of one of the smaller but still significant parts of your storage.
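A sketch of that packing scheme, rounding the 34 bits up to 5 bytes (the exact offset/length split is arbitrary; here it's 33 bits of offset and 7 bits of length):

```python
def pack_ref(offset, length):
    # 33 bits of offset (enough for a few hundred MB of text) plus
    # 7 bits of length (values up to 127 characters), in 5 bytes.
    assert offset < 2 ** 33 and length < 2 ** 7
    return ((offset << 7) | length).to_bytes(5, "little")

def unpack_ref(packed):
    value = int.from_bytes(packed, "little")
    return value >> 7, value & 0x7F

ref = pack_ref(99999999, 64)
print(len(ref), unpack_ref(ref))
```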

But ultimately, you're talking about having 5 million objects alive. That's going to take a good chunk of memory to store. The only way to avoid that is to not store all those objects. 

It's been more than a decade since Typical Programmer Greg Jorgensen taught the world about Abject-Oriented Programming.

Much of what he said still applies, but other things have changed. Languages in the Abject-Oriented space have been borrowing ideas from another paradigm entirely—and then everyone realized that languages like Python, Ruby, and JavaScript had been doing it for years and just hadn't noticed (because these languages do not require you to declare what you're doing, or even to know what you're doing). Meanwhile, new hybrid languages borrow freely from both paradigms.

This other paradigm—which is actually older, but was largely constrained to university basements until recent years—is called Functional Addiction.

A Functional Addict is someone who regularly gets higher-order—sometimes they may even exhibit dependent types—but still manages to retain a job.

Retaining a job is of course the goal of all programming. This is why some of these new hybrid languages, like Rust, check all borrowing, from both paradigms, so extensively that you can make regular progress for months without ever successfully compiling your code, and your managers will appreciate that progress. After all, once it does compile, it will definitely work.

Closures

It's long been known that Closures are dual to Encapsulation.

As Abject-Oriented Programming explained, Encapsulation involves making all of your variables public, and ideally global, to let the rest of the code decide what should and shouldn't be private.

Closures, by contrast, are a way of referring to variables from outer scopes. And there is no scope more outer than global.

Immutability

One of the reasons Functional Addiction has become popular in recent years is that to truly take advantage of multi-core systems, you need immutable data, sometimes also called persistent data.

Instead of mutating a function to fix a bug, you should always make a new copy of that function. For example:

function getCustName(custID)
{
    custRec = readFromDB("customer", custID);
    fullname = custRec[1] + ' ' + custRec[2];
    return fullname;
}

When you discover that you actually wanted fields 2 and 3 rather than 1 and 2, it might be tempting to mutate the state of this function. But doing so is dangerous. The right answer is to make a copy, and then try to remember to use the copy instead of the original:

function getCustName(custID)
{
    custRec = readFromDB("customer", custID);
    fullname = custRec[1] + ' ' + custRec[2];
    return fullname;
}

function getCustName2(custID)
{
    custRec = readFromDB("customer", custID);
    fullname = custRec[2] + ' ' + custRec[3];
    return fullname;
}

This means anyone still using the original function can continue to reference the old code, but as soon as it's no longer needed, it will be automatically garbage collected. (Automatic garbage collection isn't free, but it can be outsourced cheaply.)

Higher-Order Functions

In traditional Abject-Oriented Programming, you are required to give each function a name. But over time, the name of the function may drift away from what it actually does, making it as misleading as comments. Experience has shown that people will only keep one copy of their information up to date, and the CHANGES.TXT file is the right place for that.

Higher-Order Functions can solve this problem:

function []Functions = [
    lambda(custID) {
        custRec = readFromDB("customer", custID);
        fullname = custRec[1] + ' ' + custRec[2];
        return fullname;
    },
    lambda(custID) {
        custRec = readFromDB("customer", custID);
        fullname = custRec[2] + ' ' + custRec[3];
        return fullname;
    },
]

Now you can refer to these functions by order, so there's no need for names.

Parametric Polymorphism

Traditional languages offer Abject-Oriented Polymorphism and Ad-Hoc Polymorphism (also known as Overloading), but better languages also offer Parametric Polymorphism.

The key to Parametric Polymorphism is that the type of the output can be determined from the type of the inputs via Algebra. For example:

function getCustData(custId, x)
{
    if (x == int(x)) {
        custRec = readFromDB("customer", custId);
        fullname = custRec[1] + ' ' + custRec[2];
        return int(fullname);
    } else if (x.real == 0) {
        custRec = readFromDB("customer", custId);
        fullname = custRec[1] + ' ' + custRec[2];
        return double(fullname);
    } else {
        custRec = readFromDB("customer", custId);
        fullname = custRec[1] + ' ' + custRec[2];
        return complex(fullname);
    }
}

Notice that we've called the variable x. This is how you know you're using Algebraic Data Types. The names y, z, and sometimes w are also Algebraic.

Type Inference

Languages that enable Functional Addiction often feature Type Inference. This means that the compiler can infer your typing without you having to be explicit:


function getCustName(custID)
{
    // WARNING: Make sure the DB is locked here or
    custRec = readFromDB("customer", custID);
    fullname = custRec[1] + ' ' + custRec[2];
    return fullname;
}

We didn't specify what will happen if the DB is not locked. And that's fine, because the compiler will figure it out and insert code that corrupts the data, without us needing to tell it to!

By contrast, most Abject-Oriented languages are either nominally typed—meaning that you give names to all of your types instead of meanings—or dynamically typed—meaning that your variables are all unique individuals that can accomplish anything if they try.

Memoization

Memoization means caching the results of a function call:

function getCustName(custID)
{
    if (custID == 3) { return "John Smith"; }
    custRec = readFromDB("customer", custID);
    fullname = custRec[1] + ' ' + custRec[2];
    return fullname;
}

Non-Strictness

Non-Strictness is often confused with Laziness, but in fact Laziness is just one kind of Non-Strictness. Here's an example that compares two different forms of Non-Strictness:

/****************************************
*
* TO DO:
*
* get tax rate for the customer state
* eventually from some table
*
****************************************/
// function lazyTaxRate(custId) {}

function callByNameTaxRate(custId)
{
    /****************************************
    *
    * TO DO:
    *
    * get tax rate for the customer state
    * eventually from some table
    *
    ****************************************/
}

Both are Non-Strict, but the second one forces the compiler to actually compile the function just so we can Call it By Name. This causes code bloat. The Lazy version will be smaller and faster. Plus, Lazy programming allows us to create infinite recursion without making the program hang:

/****************************************
*
* TO DO:
*
* get tax rate for the customer state
* eventually from some table
*
****************************************/
// function lazyTaxRateRecursive(custId) { lazyTaxRateRecursive(custId); }

Laziness is often combined with Memoization:

function getCustName(custID)
{
    // if (custID == 3) { return "John Smith"; }
    custRec = readFromDB("customer", custID);
    fullname = custRec[1] + ' ' + custRec[2];
    return fullname;
}

Outside the world of Functional Addicts, this same technique is often called Test-Driven Development. If enough tests can be embedded in the code to achieve 100% coverage, or at least a decent amount, your code is guaranteed to be safe. But because the tests are not compiled and executed in the normal run, or indeed ever, they don't affect performance or correctness.

Conclusion

Many people claim that the days of Abject-Oriented Programming are over. But this is pure hype. Functional Addiction and Abject Orientation are not actually at odds with each other, but instead complement each other.
