A lot of people have questions like this:
I've got a 100MB CSV file. I read it into a list, populated with a csv.DictReader, and my computer starts swapping. Why?
Let's look at what it takes to store a 100MB file as a list of dicts of strings.

But first, let's look at how to solve this problem, instead of how to understand it.

Very often, you don't actually need the whole thing in memory; you're just, e.g., reading the rows from one CSV file, processing each one independently, and writing out the result to a different CSV file. You can do that by using the csv.DictReader as an iterator—for each row in the reader, write a row to the writer, without storing it anywhere. Then you only need enough memory for one row at a time.
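
For example, here's a minimal sketch of that pattern (input.csv, output.csv, and the process_row transform are hypothetical stand-ins for your own files and logic):

    import csv

    def process_row(row):
        # Hypothetical per-row transform; replace with your real logic.
        row['name'] = row['name'].strip().title()
        return row

    with open('input.csv', newline='') as fin, \
         open('output.csv', 'w', newline='') as fout:
        reader = csv.DictReader(fin)
        writer = csv.DictWriter(fout, fieldnames=reader.fieldnames)
        writer.writeheader()
        for row in reader:  # only one row lives in memory at a time
            writer.writerow(process_row(row))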

Even if you do more global work (where one row's processing may depend on another row far away), do you really need this all in memory, or just in some accessible form? If you read the CSV and use it to iteratively build a database (whether a SQL database, a dbm, or your favorite kind of NoSQL store), you can get reasonably fast random access with bounded memory usage.
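
As a sketch, using the stdlib's sqlite3 (the table layout and column names here are made up for illustration):

    import csv
    import sqlite3

    conn = sqlite3.connect('rows.db')  # on-disk, so memory stays bounded
    conn.execute('CREATE TABLE IF NOT EXISTS rows (name TEXT, price REAL)')
    with open('input.csv', newline='') as f:
        reader = csv.DictReader(f)
        conn.executemany('INSERT INTO rows VALUES (?, ?)',
                         ((row['name'], float(row['price'])) for row in reader))
    conn.commit()

    # Random access later, without holding everything in memory:
    for name, price in conn.execute('SELECT name, price FROM rows WHERE price > 10'):
        print(name, price)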

Where does the memory go?

First, let's make it specific. We've got 200K rows of 25 columns, averaging 20 characters per value. All of the characters are ASCII. We're using 64-bit CPython 3.4. (I'll explain how changing each of these assumptions can change things as I go along.)

Reading the file into memory

If you just open the file and write contents = f.read(), you get a 100MB string. That takes 100MB of memory for the string storage, plus 48 bytes for a str header, plus possibly 1 byte for a NUL terminator. So, still effectively just 100MB.

How do I know this? Well, you can read the source, but it's easier to just call sys.getsizeof on an empty string.
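
For example, on 64-bit CPython 3.4:

    >>> import sys
    >>> sys.getsizeof('')
    49
    >>> sys.getsizeof('a' * 1000)
    1049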

This only works because we're using CPython 3.3+ with all-ASCII data. Python 3.x strings are Unicode. In CPython 3.0-3.2, that means either 2 or 4 bytes per character, depending on your build, which would mean 200MB or 400MB. In 3.3 and later, strings are one byte per character if they're pure ASCII, two if they're pure BMP, and four otherwise. Of course you can stay with un-decoded bytes objects instead of strings—and in Python 2.7, that's often what you'd do.
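
You can see all three widths directly (again on 64-bit CPython 3.4; the headers differ slightly between the one-, two-, and four-byte layouts):

    >>> sys.getsizeof('a' * 100)           # pure ASCII: 1 byte/char
    149
    >>> sys.getsizeof('\u0394' * 100)      # pure BMP: 2 bytes/char
    274
    >>> sys.getsizeof('\U0001F600' * 100)  # astral: 4 bytes/char
    476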

Splitting into lines

If you read in a list of lines, with contents = list(f), you get 200K strings, each of which has that 48-byte header on top of its 500-byte string data. So, that's an extra 9.2MB.

Plus, the list itself has a 48-byte header, and it also needs to store references to all of those strings. In CPython, each of those references is a pointer, so in 64-bit-land, that's 8 bytes. Also, lists have extra slack on the end. (Why? Expanding a list requires allocating a new chunk of memory, copying all the old values over, and freeing the old chunk. That's expensive. If you did that for every append, lists would be too slow to use. But if you leave room for extra values on each expansion—with the extra room proportional to the current size—the reallocations and copies amortize out to essentially free. The cost is a bit of wasted space.) Let's assume it's 220K pointers; that's an extra 1.7MB.
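
You can watch that slack appear by appending one element at a time and checking sys.getsizeof; the size only jumps occasionally, and each jump leaves room to grow (the exact numbers vary by version):

    import sys

    lst = []
    last = sys.getsizeof(lst)
    print(len(lst), last)
    for i in range(32):
        lst.append(i)
        size = sys.getsizeof(lst)
        if size != last:  # a reallocation happened here
            print(len(lst), size)
            last = size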

Splitting into columns

If you split each line into 25 columns, whether by using contents = [line.split(',') for line in f] or by using contents = list(csv.reader(f)), you get 200K * 25 strings, each of which has that 48-byte header, plus 200K lists, each with a 48-byte header plus a bit over 25 pointers, plus of course the original list of 220K pointers. Now we're talking about 281MB of overhead on top of the string data itself.
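
Here's the arithmetic, under the same assumptions as above (48-byte headers, about 27 pointer slots per row list, about 220K slots in the outer list):

    rows, cols = 200000, 25
    string_headers = rows * cols * 48    # 5M string objects
    row_lists = rows * (48 + 27 * 8)     # per-row list: header + ~27 pointers
    outer_list = 48 + 220000 * 8         # outer list: header + ~220K pointers
    total = string_headers + row_lists + outer_list
    print(total / 2**20)                 # ~281 (MiB), beyond the 100MB of text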

Storing in dicts

If you convert each line into a dict instead of a list, e.g., by using contents = list(csv.DictReader(f)), you've got the same strings. (You don't get a bunch of separate copies of the key strings; each dict just references the same strings.)

The only difference is that instead of 200K lists, you have 200K dicts. A dict is bigger than a list for three reasons. First, each slot in a dict has to hold a hash value, a key pointer, and a value pointer, instead of just a value pointer. Second, dicts are more complicated objects, so they have bigger headers. Finally, dicts need more slack than lists, because they're more expensive to copy when they expand (you have to reprobe every hash instead of just copying one big buffer of memory), and because hash tables work best with specific kinds of numbers for their capacity (powers of 2, in CPython's case).

So, the strings themselves (and the outer list) are unchanged, but the 200K * (48 + 27 * 8) for the per-row lists turns into 200K * (96 + 32 * 24) for the per-row dicts. Now we're talking 624MB.
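
You can compare the two containers directly; the exact numbers vary by version, but the dict is several times bigger than the list:

    import sys

    keys = ['col%d' % i for i in range(25)]
    values = ['x' * 20 for _ in range(25)]
    print(sys.getsizeof(values))                   # list: header + 25 pointers
    print(sys.getsizeof(dict(zip(keys, values))))  # dict: header + slot table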

Couldn't Python do better?

One obvious thing Python could do is to store each dict as two pieces: a hash table full of keys and their hashes, which would be shared by all 200K rows, plus a hash table full of just values, which would be separate for each row. CPython 3.4+ actually has the ability to do this, but it only gets triggered in special cases, like the attribute dictionaries of objects of the same type. (And yes, this means that if you replace that list of dicts with a list of instances of some class, you will actually save some memory.)
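
Here's a sketch of that class trick (the Row class is made up, and how much it saves depends on the implementation, so measure before relying on it):

    import csv

    class Row:
        # All instances of one class share a single key table for their
        # attribute dicts (PEP 412), so each instance stores only values.
        def __init__(self, d):
            for key, value in d.items():
                setattr(self, key, value)

    with open('input.csv', newline='') as f:
        contents = [Row(d) for d in csv.DictReader(f)]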

You could take this even further by using a shared hash table full of keys, hashes, and indices, and then each separate dict would just have an array of values, without the need for that extra slack. PyPy 2.0+ can do this, although I'm not sure exactly where it gets triggered (except that I am sure that the class trick would work here, and work even better than in CPython).

Beyond that, is there a way we could get rid of the "boxing", the need to store both the strings and the array full of pointers to those strings?

Well, if all of your column values are about the same length, then you could store the strings directly in the hash table (or array). See fixedhash for a quick-and-dirty implementation of a hash table with fixed-size keys. But typically, a CSV file has some columns that are only 2 or 3 characters long, and others that are 50 characters or longer, so you'd be wasting more space than you'd be saving.

Failing that, you could replace each 8-byte pointer with a small integer index into one giant string. How small could that be? Well, your string is 100MB long, so an offset into it takes 27 bits. If your longest column value is 64 characters long, you need another 7 bits for the length. That's 34 bits. Accessing an array at bit granularity is slow and complicated, but byte granularity is reasonable, which means you can use 5 bytes instead of 8 for each string reference. That cuts 38% out of one of the smaller, but still significant, parts of your storage.
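
A sketch of that packing, with hypothetical pack/unpack helpers (27 bits of offset plus 7 bits of length, stored in 5 little-endian bytes):

    def pack(offset, length):
        # 27-bit offset + 7-bit length = 34 bits; round up to 5 bytes.
        return ((length << 27) | offset).to_bytes(5, 'little')

    def unpack(b):
        n = int.from_bytes(b, 'little')
        return n & ((1 << 27) - 1), n >> 27

    big = 'one,two,three'   # stand-in for the one giant 100MB string
    ref = pack(4, 3)        # refers to the slice 'two'
    off, length = unpack(ref)
    assert big[off:off+length] == 'two'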

But ultimately, you're talking about having 5 million objects alive. That's going to take a good chunk of memory to store. The only way to avoid that is to not store all those objects. 