Never call readlines() on a file

Calling readlines() makes your code slower, less explicit, and less concise, all for absolutely no benefit.

There are hundreds of questions on places like StackOverflow about the readlines method, and in every case, the answer is the same.
"My code is takes forever before it even gets started, but it's pretty fast once it gets going."
That's because you're calling readlines.
"My code seems to be worse than linear on the size of the input, even though it's just a simple loop."
That's because you're calling readlines.
"My code can't handle giant files because it runs out of memory."
That's because you're calling readlines.
"My second call to readlines returns nothing."
That's not directly because you're calling readlines; it's because you're trying to read from a file whose read pointer is at the end. But the reason it's not obvious why this can't work is that you're using readlines.
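
To illustrate that last one (a minimal sketch; foo.txt stands in for whatever file you're reading):

    with open('foo.txt') as f:
        first = f.readlines()   # reads everything; the pointer is now at EOF
        second = f.readlines()  # returns [], because nothing is left to read
        f.seek(0)               # rewind the pointer to the start
        third = f.readlines()   # now you get all the lines again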

In fact, even if you don't have any of these problems, you should not use readlines, because it never gives you any advantage.

What's wrong with readlines()?

The whole point of readlines() is that it reads the entire file into memory at once and parses it into a list.

So, you can't do anything else until it's read and parsed the whole file. This is why your program takes a while to start: reading files is slow. If you let Python and your OS interleave the "waiting for the disk" part with the "running your code" part, it will get started almost immediately, and often go a lot faster overall.

And meanwhile, you're using up memory to store the whole list at once. In fact, you need enough memory to hold the original data, the strings built out of it, the list built out of those strings, and various bits of temporary storage. (Although the temporary storage goes away once readlines is done, the giant list of strings doesn't.) That's why you run out of memory.

Also, all that memory allocation takes time. If you only use a bit of memory at a time, Python can keep reusing it over and over; if you use a bunch of memory at the same time, Python has to find room for all of it, causing it to call malloc more often. You're also making Python fight with your OS's disk cache. And if you allocate too much, you can cause your system to start swapping. That's why your time seems superlinear—it's actually linear, except for a few cliffs that you fall off along the way: needing to malloc, needing to swap, etc. And those transitions completely swamp everything else and make it hard (and pointless) to measure the linear part.

It can get even worse if you're calling readlines() on a file-like object that has to do some processing. For example, if you call it on the result of a gzip.open, it has to read and decompress the entire file, which means even more startup delay, even more temporary memory wasted, and even more opportunity for interleaving lost.
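
The lazy approach, by contrast, works just as well on compressed files (a minimal sketch; foo.txt.gz and dostuff are stand-ins):

    import gzip

    with gzip.open('foo.txt.gz') as f:
        for line in f:  # decompresses a buffer at a time, as needed
            dostuff(line)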

So what should I use?

99% of the time, the answer is to just use the file itself. As the documentation says:
Note that it’s already possible to iterate on file objects using for line in file: ... without calling file.readlines().
The reason you're calling readlines is to get an iterable full of lines, right? A file is already an iterable full of lines. And it's a smart iterable, reading lines as you need them, with some clever buffering under the covers.

The following two blocks of code do almost the same thing:

    with open('foo.txt') as f:
        for line in f.readlines():
            dostuff(line)

    with open('foo.txt') as f:
        for line in f:
            dostuff(line)

Both of them call dostuff on each line in foo.txt. The only difference is that the first one reads all of foo.txt into memory before starting to loop, while the second one just reads a buffer at a time, automatically, while looping.

What if I actually need a list rather than just some arbitrary iterable?

Make a list out of it:

    with open('foo.txt') as f:
        lines = list(f)

This has exactly the same effect as calling f.readlines(), but it makes it explicit that you wanted a list, in exactly the same way you make that explicit anywhere else (e.g., calling an itertools function, or Python 3.x's map or filter).

What about calling readlines with a sizehint?

There's nothing wrong with that. It's often a useful optimization or simplification.

For example, consider this code using a multiprocessing.Pool:

    with open('foo.txt') as f:
        pool.map(func, f, chunksize=104)

It's a bit silly to break the file down into lines just to chunk them back up together. Also, this won't give you chunks of about-equal size unless your lines are of about-equal length. So, this may turn out to be a lot better:

    from functools import partial

    with open('foo.txt') as f:
        pool.map(func, iter(partial(f.readlines, 8192), []), chunksize=1)

Of course I'd probably wrap that iterable up to make it more readable, as in the sketch below—many Python programmers have to stop and think to understand partial or two-argument iter on their own, much less the two combined. But the idea is that, instead of reading line by line and building chunks of 104 lines in hopes that they'll often come out around 8K, we just read 8K worth of lines at a time.
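
Something like this, for example (an illustrative sketch; the chunked_lines name is mine, and pool and func are assumed to be defined as above):

    from functools import partial

    def chunked_lines(f, sizehint=8192):
        """Return an iterator of batches of lines, each totaling roughly sizehint bytes."""
        return iter(partial(f.readlines, sizehint), [])

    with open('foo.txt') as f:
        pool.map(func, chunked_lines(f), chunksize=1)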

What if I need to be compatible with older Python?

Files have been iterable since 2.3. That's over a decade old. That's the version that RHEL 4 came with.

If you really have to work with all of Python 2.1-2.7 (and don't mind breaking 3.x), you can use f.xreadlines instead. (Note that in 2.3+, f.xreadlines() just returns f, so there's no real harm in calling it—it's just silly to do so if you don't need to.) If you have to work with 2.0 or 1.x, you'll need to write your own custom buffer-and-splitlines code.

But you really don't. Nobody's going to be running your new script on a system from the last century.

What about that other 1% of the time?

There are various other possibilities that come up sometimes (besides the ones described above), where using the file as an iterator is not the answer:

Often you're trying to parse a CSV file or XML or something, and the right fix is to use csv or ElementTree or whatever in the first place instead of trying to do it yourself.
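
For instance, with csv (a minimal sketch; data.csv and dostuff are stand-ins, and newline='' is the Python 3 convention from the csv docs):

    import csv

    with open('data.csv', newline='') as f:
        for row in csv.reader(f):  # each row is already split into fields
            dostuff(row)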

Sometimes, you need to call readline() in a loop (or, equivalently, use something like iter(f.readline, '')). However, this most often comes up with sys.stdin, in which case you're probably doing it wrong in the first place—maybe input is what you wanted here?
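
For example, to process each line of stdin as soon as it arrives (a minimal sketch; in Python 2, iterating sys.stdin directly reads ahead a block at a time, which is exactly why this idiom exists):

    import sys

    for line in iter(sys.stdin.readline, ''):  # stops at EOF, when readline returns ''
        dostuff(line)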

Sometimes, you really need to mmap the whole file (or, worse, use a sliding mmap window because you've got a 32-bit Python) and find the line breaks explicitly.
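
For example, the simple (non-sliding) case might look like this (an untested sketch; it skips any final line without a trailing newline):

    import mmap

    with open('foo.txt', 'rb') as f:
        m = mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ)
        start = 0
        while True:
            end = m.find(b'\n', start)  # next line break, or -1 at the end
            if end == -1:
                break
            dostuff(m[start:end])       # a bytes slice, not a str
            start = end + 1
        m.close()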

But in none of these cases is f.readlines() going to be any better than f. It's going to have the exact same problems, plus another problem on top.

It's been more than a decade since Typical Programmer Greg Jorgensen taught the world about Abject-Oriented Programming.

Much of what he said still applies, but other things have changed. Languages in the Abject-Oriented space have been borrowing ideas from another paradigm entirely—and then everyone realized that languages like Python, Ruby, and JavaScript had been doing it for years and just hadn't noticed (because these languages do not require you to declare what you're doing, or even to know what you're doing). Meanwhile, new hybrid languages borrow freely from both paradigms.

This other paradigm—which is actually older, but was largely constrained to university basements until recent years—is called Functional Addiction.

A Functional Addict is someone who regularly gets higher-order—sometimes they may even exhibit dependent types—but still manages to retain a job.

Retaining a job is of course the goal of all programming. This is why some of these new hybrid languages, like Rust, check all borrowing, from both paradigms, so extensively that you can make regular progress for months without ever successfully compiling your code, and your managers will appreciate that progress. After all, once it does compile, it will definitely work.

Closures

It's long been known that Closures are dual to Encapsulation.

As Abject-Oriented Programming explained, Encapsulation involves making all of your variables public, and ideally global, to let the rest of the code decide what should and shouldn't be private.

Closures, by contrast, are a way of referring to variables from outer scopes. And there is no scope more outer than global.

Immutability

One of the reasons Functional Addiction has become popular in recent years is that to truly take advantage of multi-core systems, you need immutable data, sometimes also called persistent data.

Instead of mutating a function to fix a bug, you should always make a new copy of that function. For example:

function getCustName(custID)
{
    custRec = readFromDB("customer", custID);
    fullname = custRec[1] + ' ' + custRec[2];
    return fullname;
}

When you discover that you actually wanted fields 2 and 3 rather than 1 and 2, it might be tempting to mutate the state of this function. But doing so is dangerous. The right answer is to make a copy, and then try to remember to use the copy instead of the original:

function getCustName(custID)
{
    custRec = readFromDB("customer", custID);
    fullname = custRec[1] + ' ' + custRec[2];
    return fullname;
}

function getCustName2(custID)
{
    custRec = readFromDB("customer", custID);
    fullname = custRec[2] + ' ' + custRec[3];
    return fullname;
}

This means anyone still using the original function can continue to reference the old code, but as soon as it's no longer needed, it will be automatically garbage collected. (Automatic garbage collection isn't free, but it can be outsourced cheaply.)

Higher-Order Functions

In traditional Abject-Oriented Programming, you are required to give each function a name. But over time, the name of the function may drift away from what it actually does, making it as misleading as comments. Experience has shown that people will only keep one copy of their information up to date, and the CHANGES.TXT file is the right place for that.

Higher-Order Functions can solve this problem:

function[] functions = [
    lambda(custID) {
        custRec = readFromDB("customer", custID);
        fullname = custRec[1] + ' ' + custRec[2];
        return fullname;
    },
    lambda(custID) {
        custRec = readFromDB("customer", custID);
        fullname = custRec[2] + ' ' + custRec[3];
        return fullname;
    },
]

Now you can refer to these functions by order, so there's no need for names.

Parametric Polymorphism

Traditional languages offer Abject-Oriented Polymorphism and Ad-Hoc Polymorphism (also known as Overloading), but better languages also offer Parametric Polymorphism.

The key to Parametric Polymorphism is that the type of the output can be determined from the type of the inputs via Algebra. For example:

function getCustData(custId, x)
{
    if (x == int(x)) {
        custRec = readFromDB("customer", custId);
        fullname = custRec[1] + ' ' + custRec[2];
        return int(fullname);
    } else if (x.real == 0) {
        custRec = readFromDB("customer", custId);
        fullname = custRec[1] + ' ' + custRec[2];
        return double(fullname);
    } else {
        custRec = readFromDB("customer", custId);
        fullname = custRec[1] + ' ' + custRec[2];
        return complex(fullname);
    }
}

Notice that we've called the variable x. This is how you know you're using Algebraic Data Types. The names y, z, and sometimes w are also Algebraic.

Type Inference

Languages that enable Functional Addiction often feature Type Inference. This means that the compiler can infer your typing without you having to be explicit:

function getCustName(custID)
{
    // WARNING: Make sure the DB is locked here or
    custRec = readFromDB("customer", custID);
    fullname = custRec[1] + ' ' + custRec[2];
    return fullname;
}

We didn't specify what will happen if the DB is not locked. And that's fine, because the compiler will figure it out and insert code that corrupts the data, without us needing to tell it to!

By contrast, most Abject-Oriented languages are either nominally typed—meaning that you give names to all of your types instead of meanings—or dynamically typed—meaning that your variables are all unique individuals that can accomplish anything if they try.

Memoization

Memoization means caching the results of a function call:

function getCustName(custID)
{
    if (custID == 3) { return "John Smith"; }
    custRec = readFromDB("customer", custID);
    fullname = custRec[1] + ' ' + custRec[2];
    return fullname;
}

Non-Strictness

Non-Strictness is often confused with Laziness, but in fact Laziness is just one kind of Non-Strictness. Here's an example that compares two different forms of Non-Strictness:

/****************************************
*
* TO DO:
*
* get tax rate for the customer state
* eventually from some table
*
****************************************/
// function lazyTaxRate(custId) {}

function callByNameTaxRate(custId)
{
    /****************************************
    *
    * TO DO:
    *
    * get tax rate for the customer state
    * eventually from some table
    *
    ****************************************/
}

Both are Non-Strict, but the second one forces the compiler to actually compile the function just so we can Call it By Name. This causes code bloat. The Lazy version will be smaller and faster. Plus, Lazy programming allows us to create infinite recursion without making the program hang:

/****************************************
*
* TO DO:
*
* get tax rate for the customer state
* eventually from some table
*
****************************************/
// function lazyTaxRateRecursive(custId) { lazyTaxRateRecursive(custId); }

Laziness is often combined with Memoization:

function getCustName(custID)
{
    // if (custID == 3) { return "John Smith"; }
    custRec = readFromDB("customer", custID);
    fullname = custRec[1] + ' ' + custRec[2];
    return fullname;
}

Outside the world of Functional Addicts, this same technique is often called Test-Driven Development. If enough tests can be embedded in the code to achieve 100% coverage, or at least a decent amount, your code is guaranteed to be safe. But because the tests are not compiled and executed in the normal run, or indeed ever, they don't affect performance or correctness.

Conclusion

Many people claim that the days of Abject-Oriented Programming are over. But this is pure hype. Functional Addiction and Abject Orientation are not actually at odds with each other, but instead complement each other.