I've seen a number of people ask why, if you can have arbitrary-sized integers that do everything exactly, you can't do the same thing with floats, avoiding all the rounding problems that they keep running into.

If we compare the two, the reason should become pretty obvious along the way.

Integers


Let's start with two integers, a=42 and b=2323, and add them. How many digits do I need? Think about how you add numbers: you line up the columns, and at worst carry one extra column. So, the answer can be as long as the bigger one, plus one more digit for carry. In other words, max(len(a), len(b)) + 1.

What if I multiply them? Again, think about how you multiply numbers: you multiply the bigger number by each digit of the smaller one, shift the partial results over, and add them up, with that extra digit of carry again. So, the answer never needs more digits than the two lengths added together, plus one more to be safe. In other words, len(a) + len(b) + 1.
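
You can sanity-check both of those bounds in Python; len(str(n)) is just a lazy way of counting digits:

    a, b = 42, 2323
    print(len(str(a + b)))  # 4 digits, within max(2, 4) + 1 = 5
    print(len(str(a * b)))  # 5 digits, within 2 + 4 + 1 = 7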

What if I exponentiate them? Here, things get a bit tricky, but there's still a well-defined, easy-to-compute answer if you think about it: asking how many digits are in a**b is just asking for the smallest x such that 10**x > a**b. So, log10 both sides, take the floor, and add one: x = floor(log10(a**b)) + 1 = floor(log10(a) * b) + 1. Plug in your values, and it's log10(42) * 2323 ~= 3770.808, which floors to 3770, plus one is 3771. Try len(str(42**2323)) and you'll get 3771.
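
If you want to double-check that without printing out all 3771 digits, here's a quick sketch using the standard math module:

    import math

    a, b = 42, 2323
    predicted = math.floor(b * math.log10(a)) + 1  # floor(3770.808...) + 1
    actual = len(str(a ** b))                      # count the digits the slow way
    print(predicted, actual)                       # 3771 3771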

You can come up with other fancy operations to apply to integers--factorials, gcds, whatever--but the number of digits required for the answer is always a simple, easy-to-compute function of the operands.*

* Except when the answer is infinite, of course. In that case, you can easily work out that the answer can't be stored in any finite number of digits, and handle that fact appropriately--raise an exception, return a special infinity value, whatever.

Decimals


Now, let's start with two decimals, a=42 and b=.2323, and add them. How many digits do I need? Well, how many digits do the originals have? It kind of depends on how you count, but the naive way of counting says 2 and 4, and the result, 42.2323, has 6 digits. As you'd suspect, len(a) + len(b) + 1 is enough here.

What if I multiply them? This is still easy: our example gives us 9.7566, which has 5 digits; multiplying a by itself works the same as for integers, and multiplying b by itself gives 0.05396329, which is just adding 4 decimal digits to 4 decimal digits. So it's still len(a) + len(b) + 1.
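
Python's decimal.Decimal type (more on it below) keeps exactly these digits, so you can see it for yourself--a quick sketch, relying on the default context's 28 significant digits being more than enough here:

    from decimal import Decimal

    a, b = Decimal("42"), Decimal("0.2323")
    print(a + b)  # 42.2323
    print(a * b)  # 9.7566
    print(b * b)  # 0.05396329 -- 4 fractional digits plus 4 fractional digits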

What if I exponentiate them? Well, now things get not just tricky, but impossible. 42**.2323 is an irrational number. That means it takes an infinite number of digits (in binary, or decimal, or any other integer base) to store. (It also takes an infinite amount of time to compute, unless you have an infinite-sized lookup table to help you.) In fact, most fractional powers of most numbers are irrational--2**0.5, the square root of 2, is the most famous irrational number of all.

And it's not just exponentiation; most of the things you want to do with real numbers--take the sine, multiply by pi, etc.--give you irrational answers. Unless you stick to nothing but addition, subtraction, multiplication, and division, you can't have exact math.

Even if all you want is addition, subtraction, multiplication, and division: a=1, b=3. How many digits do I need to divide them? Start doing some long division: 1 is smaller than 3, so that's 0. 10 has three 3s in it, so that's 0.3, with 1 left over. 10 has three 3s in it again, so that's 0.33, with 1 left over. That's obviously going to continue on forever: there is no way to represent 1/3 in decimal without an infinite number of digits. Of course you could switch bases. For example, in base 9, 1/3 is 0.3. But then you need infinite digits for all kinds of things that are simple in base 10.
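
You can watch Python hit that wall--a quick sketch with decimal.Decimal, which has to cut the result off at whatever precision you've configured:

    from decimal import Decimal, getcontext

    getcontext().prec = 20              # carry 20 significant digits
    print(Decimal(1) / Decimal(3))      # 0.33333333333333333333 -- cut off, not exact
    print(Decimal(1) / Decimal(3) * 3)  # 0.99999999999999999999 -- and the error shows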

Fractions


If all you actually want is addition, subtraction, multiplication, and division, you're dealing with fractions, not decimals. Python's fractions.Fraction type does all of these operations with infinite precision. Of course when you go to print out the results as decimals, they may have to get truncated (otherwise, 1/3 or 1/7 would take forever to print), but that's the only limitation.
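
For example, a minimal sketch:

    from fractions import Fraction

    a, b = Fraction(1, 3), Fraction(1, 7)
    print(a + b)         # 10/21 -- exact
    print(a * b)         # 1/21 -- exact
    print(float(a + b))  # ~0.47619 -- the only rounding happens at conversion time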

Of course if you try to throw exponentiation or sine at a Fraction, or multiply it by a float, you lose that exactness and just end up with a float.

Aren't decimals just a kind of fraction, where the denominator is 10**d, where d is the number of digits after the decimal point? Yes, they are. But as soon as you, say, divide by 3, the result is a fraction whose denominator isn't a power of 10 at all--which, as we saw above, means it would take an infinite number of digits to write as a decimal--so that doesn't do you any good. If you need exact rational arithmetic, you need fractions with arbitrary denominators.
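
You can see that directly with Fraction--a quick sketch:

    from fractions import Fraction

    print(Fraction("0.2323"))      # 2323/10000 -- a decimal is just a fraction over 10**4
    print(Fraction("0.2323") / 3)  # 2323/30000 -- no power-of-10 denominator can express this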

Accuracy


In real life, very few values are exact in the first place.* Your table isn't exactly 2 meters long; it's 2.00 +/- 0.005 meters.** Doing "exact" math on that 2 isn't going to do you any good. Doing error-propagating math on that 2.00, however, might.

Also, notice that a bigger number isn't necessarily more accurate than a smaller one (in fact, it's usually the opposite), but plain decimal notation forces it to carry more precision than its accuracy justifies: 1300000000 has 10 digits in it, and if we want to let people know that only the first 3 are accurate, we have to write something like 1300000000 +/- 5000000. Even with commas, like 1,300,000,000 +/- 5,000,000, it's still pretty hard to see how many digits are accurate. In words, we solve that by decoupling the precision from the magnitude: 1300 million, plus or minus 5 million, puts most of the magnitude into the word "million", and lets us see the precision reasonably clearly in "1300 +/- 5". Of course at 13 billion plus or minus 5 million it falls down a bit, but it's still better than staring at the decimal representation and counting up commas and zeroes.

Scientific notation is an even better way of decoupling the precision from the magnitude. 1.3*10**10 +/- 5*10**6 obviously has magnitude around 10**10, and precision of 3-4 digits.*** And going to 1.3*10**11 +/- 5*10**6 is just as readable. And floating-point numbers give us the same benefit.

In fact, when the measurement or rounding error is exactly half a digit, it gets even simpler: just write 1.30*10**10, and it's clear that we have 3 digits of precision, and the same for 1.30*10**11. And, while the float type doesn't give us this simplification, the decimal.Decimal type does. In addition to being a decimal fraction rather than a binary fraction, so you can think in powers of 10 instead of 2, it also lets you store 1.3e10 and 1.30e10 differently, to directly keep track of how much precision you want to store. It can also give you the most digits you can get out of the operation when possible--so 2*2 is 4, but 2.00*2.00 is 4.0000. That's almost always more than you want (depending on why you were multiplying 2.00 by 2.00, you probably want either 4.0 or 4.00), but you can keep the 4.0000 around as an intermediate value, which guarantees that you aren't adding any further rounding error from intermediate storage. When you perform an operation that doesn't allow that, like 2.00 ** 0.5, you have to work out for yourself how much precision you want to carry around in the intermediate value, which means you need to know how to do error propagation--but if you can work it out, decimal can let you store it.
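
To make that concrete, a minimal sketch (the exact exponent formatting in the output is up to the decimal module):

    from decimal import Decimal

    print(Decimal("2") * Decimal("2"))        # 4
    print(Decimal("2.00") * Decimal("2.00"))  # 4.0000 -- all the precision is carried along
    print(Decimal("1.3e10"))                  # 1.3E+10
    print(Decimal("1.30e10"))                 # 1.30E+10 -- the trailing zero is remembered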

* Actually, there are values that can be defined exactly: the counting numbers, e, pi, etc. But notice that most of the ones that aren't integers are irrational, so that doesn't help us here. But look at the symbols section for more...

** If you're going to suggest that maybe it's exactly 2.0001790308123812082 meters long: which molecule is the last molecule of the table? How do you account for the fact that even within a solid, molecules move around slowly? And what's the edge of a molecule? And, given that molecules' arms vibrate, the edge at what point in time? And how do you even pick a specific time that's exactly the same across the entire table, when relativity makes that impossible? And, even if you could pick a specific molecule at a specific time, its edge is a fuzzy cloud of electron position potential that fades out to 0.

*** The powers are 10 and 6, so it's at worst off by 4 digits. But the smaller one has a 5, while the bigger one has a 1, so it's obviously a lot less than 4 digits. To work out exactly how many digits it's off, do the logarithm-and-round trick again.

Money


Some values inherently have a precision cutoff. For example, with money, you can't have less than one cent.* In other words, they're fixed-point, rather than floating-point, values.

The decimal module can handle these for you as well. In fact, money is a major reason there's a decimal standard, and implementations of that standard in many languages' standard libraries.***
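
For example, here's a sketch of rounding a bill to whole cents--the price and tax rate are made up, but the quantize call is the important part:

    from decimal import Decimal, ROUND_HALF_UP

    price = Decimal("19.99")
    tax = price * Decimal("0.0825")  # 1.649175 -- intermediates can carry extra digits
    total = (price + tax).quantize(Decimal("0.01"), rounding=ROUND_HALF_UP)
    print(total)                     # 21.64 -- back to fixed-point cents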

* Yes, American gas stations give prices in tenths-of-a-cent per gallon, and banks transact money in fractional cents, but unless you want to end up in Superman 3,** you can ignore that.

** And yes, I referenced Superman 3 instead of Office Space. If you're working in software in 2015 and haven't seen Office Space, I don't know what I could say that can help.

*** For some reason, people are willing to invest money in solving problems that help deal with money.

Symbols


So, how do mathematicians deal with all of this in real life? They don't. They do math symbolically, rather than numerically. The square root of 2 is just the square root of 2. And you carry it around that way throughout the entire operation. Multiply 3 * sqrt(2) and the answer is 3 * sqrt(2). But multiply sqrt(2) * sqrt(2) and you get 2, and multiply sqrt(2) * sqrt(3) and you get sqrt(6), and so on. There are simplification rules that give you exact equivalents, and you can apply these as you go along, and/or at the end, to try to get things as simple as possible. But in the end, the answer may end up being irrational, and you're just stuck with sqrt(6).

Sometimes you need a rough idea of how big that sqrt(6) is. When that happens, you know how rough you want it, and you can calculate it to that precision. To three digits, more than enough to give you a sense of scale, it's 2.45. If you need a pixel-precise graph, you can calculate it to +/- half a pixel. But the actual answer is sqrt(6), and that's what you're going to keep around (and use for further calculation).

In fact, let's think about that graph in more detail. For something simple and concrete, let's say you're graphing radii vs. circumferences of circles, measured in inches, on a 1:1 scale, to display on a 96 pixels-per-inch screen. So, a circle with radius 3" has a circumference of 18.850" +/- half a pixel. Or, if you prefer, 1810 pixels. But now, let's say your graph is interactive, and the user can zoom in on it. If you just scale that 1810 pixels up at 10:1, you get 18100 pixels. But if you stored 6*pi and recalculate it at the new zoom level, you get 18096 pixels. A difference of 4 pixels may not sound like much, but it's enough to make things look blocky and jagged. Zoom in too much more, and you're looking at the equivalent of face-censored video from Cops.
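
Here's a rough sketch of that arithmetic (using the float value of pi, which is plenty close for counting pixels):

    import math

    circumference = 6 * math.pi         # radius 3", so 6*pi ~= 18.8496"
    pixels = round(circumference * 96)  # 1810 pixels at 96 ppi

    scaled_pixels = pixels * 10                    # 18100 -- scaling the already-rounded value
    recalculated = round(circumference * 96 * 10)  # 18096 -- recalculating at the new zoom
    print(scaled_pixels - recalculated)            # 4 pixels of error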

Python doesn't have anything built-in for symbolic math, but there are some great third-party libraries like SymPy that you can use.
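
For example, with SymPy--a quick sketch; these particular simplifications happen automatically:

    import sympy

    print(sympy.sqrt(2) * sympy.sqrt(2))  # 2
    x = sympy.sqrt(2) * sympy.sqrt(3)
    print(x)                              # sqrt(6) -- still exact and symbolic
    print(x.evalf(3))                     # 2.45 -- approximate only when you ask for digits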