There are three commonly useful ways to read a file: read the whole thing into memory, iterate it element by element (usually meaning line by line), or iterate it in chunks.

Python makes the first one dead simple, the second one dead simple if you mean lines but hard otherwise, and the third one maybe unnecessarily hard.

How to do each one

Read the whole thing

You can't get much simpler than this:
with open(path) as f:
    contents = f.read()
dostuff(contents)
Or, if you're dealing with a binary file:
with open(path, 'rb') as f:
    contents = f.read()
dostuff(contents)
From here on out, I won't distinguish between binary and text files, because the difference is always the same: open in 'rb' mode instead of the default 'r', and you get bytes instead of str.

Either way, you get the entire contents of the file as a single str or bytes in just two lines.

Iterate lines

This is almost as simple:
with open(path) as f:
    for line in f:
        dostuff(line)
Lines don't make as much sense for most binary files—but when they do, it's just as simple; pass the 'rb' mode, and each line is a bytes instead of a str.

Iterate chunks

This one is simple for Python experts, but not exactly easy to explain to novices:
with open(path, 'rb') as f:
    for chunk in iter(lambda: f.read(4096), b''):
        dostuff(chunk)
In order to understand this, you have to know the two-argument form of the iter function, how to wrap an expression in a lambda, and that file objects' read method returns an empty bytes (or str, for non-binary files) at EOF. If you didn't know all that, you'd have to write something like this:
with open(path, 'rb') as f:
    while True:
        chunk = f.read(4096)
        if not chunk:
            break
        dostuff(chunk)
Fortunately, you can wrap things up in a function, so at least its uses are easy to understand, even if its implementation isn't:
def chunk_file(f, chunksize=4096):
    return iter(lambda: f.read(chunksize), b'')
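With that helper, the chunk loop reads the way you'd hope:
with open(path, 'rb') as f:
    for chunk in chunk_file(f):
        dostuff(chunk)
(As written, the b'' sentinel means this is for binary files; for a text file you'd use '' instead.)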

Iterate non-line elements

This one is even trickier. To do the same thing file objects magically do to split on lines, you have to write most of the magic manually: accumulate a buffer between read calls, and split that buffer yourself. Like this:
def resplit(buffers, separator):
    # Start with an empty bytes or str, matching the separator's type.
    buf = type(separator)()
    for buffer in buffers:
        buf += buffer
        chunks = buf.split(separator)
        # Everything but the last piece is a complete element.
        yield from chunks[:-1]
        # The last piece may be incomplete; carry it over to the next buffer.
        buf = chunks[-1]
    if buf:
        yield buf

with open(path, 'rb') as f:
    buffers = chunk_file(f)
    for element in resplit(buffers, b'\0'):
        dostuff(element)
At least resplit is reusable. But no novice is going to be able to write that. Instead, they'd copy and paste code like this:
with open(path, 'rb') as f:
    buf = b''
    while True:
        chunk = f.read(4096)
        if not chunk:
            break
        buf += chunk
        elements = buf.split(b'\0')
        for element in elements[:-1]:
            dostuff(element)
        buf = elements[-1]
    if buf:
        dostuff(buf)
Except, of course, that they'll get all kinds of things wrong, and then have to go back and edit all the copy-pasted copies when they find the bug.

Also, notice that most file-like objects (including actual files) already have their own buffer. Throwing another buffer in front of them obviously hurts performance. Less obviously, it also means the file position is ahead of what you've iterated so far (because you've got extra stuff sitting around in your local buf, or the one hidden inside the generator state), so if you wanted to, say, use f.tell() for a progress bar, it wouldn't be accurate.

But what alternative do you have? Obviously you can read one byte at a time; then you can be sure that each time you've read an element, the file pointer is right at the end of that element. But that's likely to be slow (and especially so with unbuffered raw files).

Binary file objects have a peek method that can help here—instead of read(4096), you just do peek(). If you get anything back, you search for the last separator in the peeked buffer, and, if found, you read that much and split; if not found, you read the length of the peeked buffer and stash it. That's how readline works under the covers on binary files, at least as of CPython 3.4. But that doesn't work for text files, or file-like objects that aren't actual binary files, etc.
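
Just to make that concrete, here's a sketch of a readuntil-style helper along those lines. The name and details are mine, not anything in the stdlib; it searches for the first separator rather than the last, and it falls back to reading a byte at a time when there's no peek:
def readuntil(f, separator=b'\0'):
    # Sketch only: return one element, up to and including the separator
    # (or up to EOF), roughly the way readline does for b'\n'.
    chunks = []
    peek = getattr(f, 'peek', None)
    while True:
        if peek is not None:
            buffered = peek()
            if not buffered:
                # EOF: return whatever we've accumulated (possibly empty).
                return b''.join(chunks)
            index = buffered.find(separator)
            if index != -1:
                # The separator is in view: consume through it and stop.
                chunks.append(f.read(index + len(separator)))
                return b''.join(chunks)
            # No separator in the buffered data: consume it all and loop.
            chunks.append(f.read(len(buffered)))
        else:
            byte = f.read(1)
            if not byte:
                return b''.join(chunks)
            chunks.append(byte)
            # Re-joining every time is wasteful, but it keeps the sketch short.
            if b''.join(chunks).endswith(separator):
                return b''.join(chunks)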

Could this be improved?

Novices have to write code like this all the time, and they aren't going to know how to do it. Is there a way Python could make it easier?

New file methods

One possibility is to add new methods to file objects. The most important one is a readuntil method (or, alternatively, a new sep parameter for readline, or maybe even a sep parameter for the __init__ method or open function), because you'll need that to get around the otherwise-insurmountable problems with iterating non-line-based elements. But methods like iterchunks and iterlines would also be helpful (and, with some implementations, could be more efficient than what you'd write yourself).
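
None of this exists, so just to show the shape of the proposal, the hypothetical call sites might look something like this (commented out because it isn't real Python):
# Hypothetical: a sep parameter on open(), so plain iteration splits on it.
# with open(path, 'rb', sep=b'\0') as f:
#     for element in f:
#         dostuff(element)
#
# Hypothetical: readuntil as readline with a configurable separator.
# element = f.readuntil(b'\0')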

For a brand-new language, that would almost certainly be the answer. But for Python? There are thousands of existing file-like classes, many of which do not use the io module to do their job.

If it were easier to wrap file-like objects in io objects, that would be a different story. But it's not. If you've got an object with a read method that returns bytes (what "file-like object" usually means for binary files), that's not something you can wrap in a BufferedReader; you'll need to first create a RawIOBase subclass that delegates to your file-like object and implements readinto, so you can wrap that wrapper in a BufferedReader. If you've got an object with a readline method (or __iter__) that returns str (what "file-like object" often means for text files), there's no way at all to wrap it in a TextIOWrapper, except by first wrapping it in a BufferedReader that fakes an encoding you can "decode" cheaply, just so you can wrap that up.
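
To make the binary half of that concrete, the shim looks roughly like this. The class name is made up, it only handles the read side, and binary_filelike below is just a placeholder; it's a sketch of the delegation, not a complete io implementation:
import io

class RawAdapter(io.RawIOBase):
    # Hypothetical adapter: wraps any object whose read(n) returns bytes so
    # it can be handed to io.BufferedReader.
    def __init__(self, fileobj):
        self.fileobj = fileobj

    def readable(self):
        return True

    def readinto(self, b):
        data = self.fileobj.read(len(b))
        b[:len(data)] = data
        return len(data)

# buffered = io.BufferedReader(RawAdapter(binary_filelike))
# buffered.peek(), buffered.readline(), and friends now work as usual.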

Meanwhile, people don't usually want to modify key parts of the standard library until there's significant experience with a third-party module on PyPI. But, at least as of 3.4, there's really no way to write such a module. A readuntil function for binary files is easy (it can call peek if present, go byte by byte if not, just like readline already does), but there's nothing equivalent for text files. (If you're using the _pyio implementation, you can call _get_decoded_chars and _set_decoded_chars to access the internal buffer, but needless to say the builtin C implementation in CPython that you're actually using doesn't expose those internal private methods.) Also, it's not easy to wrap or monkeypatch classes built in C that are returned directly by builtin functions all over the stdlib.

New generic functions

Another option would be to include the chunk_file and resplit functions (possibly with better names that I can't think of without more coffee…) in the stdlib, possibly even as builtins. This might not be a bad idea.

Files as an iterable of bytes

If you could treat files as iterables of individual characters (or bytes), these functions would become a lot easier to write… but then they'd also become a lot less efficient if it means going back to the file object (or, worse, the actual file) for each character. It might be worth writing an iterbytes()/iterchars() method or helper, although you can already get most of the way there with chunk_file(f, 1) or, better, chain.from_iterable(chunk_file(f, 4096)) (though for a binary file the latter gives you ints rather than length-1 bytes), which seems too trivial for the stdlib, and it would have all the same problems as the two ideas above. Either way, it's not a solution.
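
If you wanted the helper anyway, it's a one-liner (the name iterbytes is hypothetical):
from itertools import chain

def iterbytes(f, chunksize=4096):
    # Hypothetical helper: flatten big reads into per-byte iteration. In
    # Python 3 this yields ints; chunk_file(f, 1) yields length-1 bytes.
    return chain.from_iterable(iter(lambda: f.read(chunksize), b''))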

One way to improve on that would be to provide some way for files to act like a sliceable sequence of characters or bytes—or, even better, as a buffer—rather than just an iterable. The mmap module in the stdlib already provides a (not-quite-novice-friendly, but not too bad) way to do this for binary files, but it's not much help for text files, or file-like objects that aren't OS files, or OS files that aren't regular files (like sockets), or files that are too big for your VM space (a problem for anyone on 32-bit systems), etc.
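
For what it's worth, here's a sketch of what the mmap version of the NUL-separated example looks like when it does apply (a real OS file, small enough to map):
import mmap

def mmap_split(path, separator=b'\0'):
    # Sketch: map the whole file and slice elements out of it directly, so
    # there's no read loop and no extra buffer of our own. (An empty file
    # would need special-casing; mmap refuses to map zero bytes.)
    with open(path, 'rb') as f, mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ) as m:
        start = 0
        while True:
            end = m.find(separator, start)
            if end == -1:
                if start < len(m):
                    yield m[start:]
                return
            yield m[start:end]
            start = end + len(separator)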

It's been more than a decade since Typical Programmer Greg Jorgensen taught the world about Abject-Oriented Programming.

Much of what he said still applies, but other things have changed. Languages in the Abject-Oriented space have been borrowing ideas from another paradigm entirely—and then everyone realized that languages like Python, Ruby, and JavaScript had been doing it for years and just hadn't noticed (because these languages do not require you to declare what you're doing, or even to know what you're doing). Meanwhile, new hybrid languages borrow freely from both paradigms.

This other paradigm—which is actually older, but was largely constrained to university basements until recent years—is called Functional Addiction.

A Functional Addict is someone who regularly gets higher-order—sometimes they may even exhibit dependent types—but still manages to retain a job.

Retaining a job is of course the goal of all programming. This is why some of these new hybrid languages, like Rust, check all borrowing, from both paradigms, so extensively that you can make regular progress for months without ever successfully compiling your code, and your managers will appreciate that progress. After all, once it does compile, it will definitely work.

Closures

It's long been known that Closures are dual to Encapsulation.

As Abject-Oriented Programming explained, Encapsulation involves making all of your variables public, and ideally global, to let the rest of the code decide what should and shouldn't be private.

Closures, by contrast, are a way of referring to variables from outer scopes. And there is no scope more outer than global.

Immutability

One of the reasons Functional Addiction has become popular in recent years is that to truly take advantage of multi-core systems, you need immutable data, sometimes also called persistent data.

Instead of mutating a function to fix a bug, you should always make a new copy of that function. For example:

function getCustName(custID)
{
    custRec = readFromDB("customer", custID);
    fullname = custRec[1] + ' ' + custRec[2];
    return fullname;
}

When you discover that you actually wanted fields 2 and 3 rather than 1 and 2, it might be tempting to mutate the state of this function. But doing so is dangerous. The right answer is to make a copy, and then try to remember to use the copy instead of the original:

function getCustName(custID)
{
    custRec = readFromDB("customer", custID);
    fullname = custRec[1] + ' ' + custRec[2];
    return fullname;
}

function getCustName2(custID)
{
    custRec = readFromDB("customer", custID);
    fullname = custRec[2] + ' ' + custRec[3];
    return fullname;
}

This means anyone still using the original function can continue to reference the old code, but as soon as it's no longer needed, it will be automatically garbage collected. (Automatic garbage collection isn't free, but it can be outsourced cheaply.)

Higher-Order Functions

In traditional Abject-Oriented Programming, you are required to give each function a name. But over time, the name of the function may drift away from what it actually does, making it as misleading as comments. Experience has shown that people will only keep one copy of their information up to date, and the CHANGES.TXT file is the right place for that.

Higher-Order Functions can solve this problem:

function []Functions = [
    lambda(custID) {
        custRec = readFromDB("customer", custID);
        fullname = custRec[1] + ' ' + custRec[2];
        return fullname;
    },
    lambda(custID) {
        custRec = readFromDB("customer", custID);
        fullname = custRec[2] + ' ' + custRec[3];
        return fullname;
    },
]

Now you can refer to these functions by order, so there's no need for names.

Parametric Polymorphism

Traditional languages offer Abject-Oriented Polymorphism and Ad-Hoc Polymorphism (also known as Overloading), but better languages also offer Parametric Polymorphism.

The key to Parametric Polymorphism is that the type of the output can be determined from the type of the inputs via Algebra. For example:

function getCustData(custId, x)
{
    if (x == int(x)) {
        custRec = readFromDB("customer", custId);
        fullname = custRec[1] + ' ' + custRec[2];
        return int(fullname);
    } else if (x.real == 0) {
        custRec = readFromDB("customer", custId);
        fullname = custRec[1] + ' ' + custRec[2];
        return double(fullname);
    } else {
        custRec = readFromDB("customer", custId);
        fullname = custRec[1] + ' ' + custRec[2];
        return complex(fullname);
    }
}

Notice that we've called the variable x. This is how you know you're using Algebraic Data Types. The names y, z, and sometimes w are also Algebraic.

Type Inference

Languages that enable Functional Addiction often feature Type Inference. This means that the compiler can infer your typing without you having to be explicit:


function getCustName(custID)
{
    // WARNING: Make sure the DB is locked here or
    custRec = readFromDB("customer", custID);
    fullname = custRec[1] + ' ' + custRec[2];
    return fullname;
}

We didn't specify what will happen if the DB is not locked. And that's fine, because the compiler will figure it out and insert code that corrupts the data, without us needing to tell it to!

By contrast, most Abject-Oriented languages are either nominally typed—meaning that you give names to all of your types instead of meanings—or dynamically typed—meaning that your variables are all unique individuals that can accomplish anything if they try.

Memoization

Memoization means caching the results of a function call:

function getCustName(custID)
{
    if (custID == 3) { return "John Smith"; }
    custRec = readFromDB("customer", custID);
    fullname = custRec[1] + ' ' + custRec[2];
    return fullname;
}

Non-Strictness

Non-Strictness is often confused with Laziness, but in fact Laziness is just one kind of Non-Strictness. Here's an example that compares two different forms of Non-Strictness:

/****************************************
*
* TO DO:
*
* get tax rate for the customer state
* eventually from some table
*
****************************************/
// function lazyTaxRate(custId) {}

function callByNameTaxRate(custId)
{
    /****************************************
    *
    * TO DO:
    *
    * get tax rate for the customer state
    * eventually from some table
    *
    ****************************************/
}

Both are Non-Strict, but the second one forces the compiler to actually compile the function just so we can Call it By Name. This causes code bloat. The Lazy version will be smaller and faster. Plus, Lazy programming allows us to create infinite recursion without making the program hang:

/****************************************
*
* TO DO:
*
* get tax rate for the customer state
* eventually from some table
*
****************************************/
// function lazyTaxRateRecursive(custId) { lazyTaxRateRecursive(custId); }

Laziness is often combined with Memoization:

function getCustName(custID)
{
    // if (custID == 3) { return "John Smith"; }
    custRec = readFromDB("customer", custID);
    fullname = custRec[1] + ' ' + custRec[2];
    return fullname;
}

Outside the world of Functional Addicts, this same technique is often called Test-Driven Development. If enough tests can be embedded in the code to achieve 100% coverage, or at least a decent amount, your code is guaranteed to be safe. But because the tests are not compiled and executed in the normal run, or indeed ever, they don't affect performance or correctness.

Conclusion

Many people claim that the days of Abject-Oriented Programming are over. But this is pure hype. Functional Addiction and Abject Orientation are not actually at odds with each other, but instead complement each other.