Weak typing

A language with weak typing is one where programs can escape the type system. Another way to describe it is that values can change types.

For example, in C, I can create a pointer to characters, then tell the compiler I want to use it as a pointer to integers:
    char sz[] = "abcdefg";
    int *i = (int *)sz;
On a little-endian platform with 32-bit integers, this makes i point to an array holding the numbers 0x64636261 and 0x00676665. In fact, you can even cast pointers themselves to integers (of the appropriate size), or vice-versa:
    char *spam = (char *)0x12345678;
    spam[0] = 0;
And this means I can overwrite memory anywhere in the system.*

* Of course modern OS's use virtual memory and page protection so I can only overwrite my own process's memory, but there's nothing about C itself that offers such protection, as anyone who ever coded on, say, Classic Mac OS or Win16 can tell you.

Traditional Lisp allowed similar kinds of hackery; on some platforms, double-word floats and cons cells were the same type, and you could just pass one to a function expecting the other and it would "work".

Most languages today aren't quite as weak as C and Lisp were, but many of them are still somewhat leaky. For example, any OO language that has an unchecked "downcast"* has a type leak: you're essentially telling the compiler "I know I didn't give you enough information to know this is safe, but I'm pretty sure it is," when the whole point of a type system is that the compiler always has enough information to know what's safe.

* A checked downcast doesn't make the language's type system any weaker just because it moves the check to runtime. If it did, then subtype polymorphism (aka virtual or fully-dynamic function calls) would be the same violation of the type system, and I don't think anyone wants to say that.

Very few "scripting" languages are weak in this sense. Even in Perl or Tcl, you can't take a string and just interpret its bytes as an integer.* But it's worth noting that in CPython (and similarly for many other interpreters for many languages), if you're really persistent, you can use `ctypes` to load up `libpython`, cast an object's `id` to a `POINTER(Py_Object)`, and force the type system to leak. Whether this makes the type system weak or not depends on your use cases—if you're trying to implement an in-language restricted execution sandbox to ensure security, you do have to deal with these kinds of escapes…

* You can use a function like struct.unpack to read the bytes and build a new int out of "how C would represent these bytes", but that's obviously not leaky; even Haskell allows that.
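
To make both of those concrete, here's a minimal sketch, assuming CPython, where `id` happens to be an object's address (an implementation detail, not a language guarantee):

    import ctypes
    import struct

    class Spam:
        pass

    obj = Spam()

    # The leak: in CPython, id(obj) is the object's address, and ctypes lets
    # us reinterpret that raw address as a pointer to a Python object:
    ptr = ctypes.cast(id(obj), ctypes.POINTER(ctypes.py_object))
    print(ptr.contents.value is obj)  # True: round-tripped through a raw address

    # Once you have raw addresses, you can read (or scribble on) an object's
    # header directly, e.g. its refcount, the first field of a PyObject:
    print(ctypes.c_ssize_t.from_address(id(obj)).value)

    # The non-leak: struct.unpack builds a brand-new int out of "how C would
    # represent these bytes". Little-endian, 32 bits, so b'abcd' becomes
    # 0x64636261, just like the C example above, but nothing is reinterpreted
    # in place:
    n, = struct.unpack('<i', b'abcd')
    print(hex(n))  # 0x64636261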

Implicit conversion

Implicit conversion is really a different thing from a weak or leaky type system.

Every language, even Haskell, has functions to, say, convert an integer to a string or a float. But some languages will do some of those conversions for you automatically—e.g., in C, if you call a function that wants a float, and you pass it an int, it gets converted for you. This can definitely lead to bugs with, e.g., unexpected overflows, but they're not the same kinds of bugs you get from a weak type system.

C isn't really being any weaker here. You can add an int and a float in Haskell, or even concatenate a float to a string; you just have to do it more explicitly.

Or, using the alternate definition: the value's type isn't changing from int to float; the program is creating a new float by calling a conversion function on the int, while the original int is still sitting around in whatever variable (or temporary location) it was always in.
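
A quick sanity check in Python terms:

    x = 1
    y = 1.0 + x        # x is converted, producing a brand-new float
    print(type(x), x)  # <class 'int'> 1: the original int is untouched
    print(type(y), y)  # <class 'float'> 2.0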

With dynamic languages, the whole idea of implicit conversion is pretty murky. There's no such thing as "a function that wants a float" in Python or Perl. But there are overloaded functions that do different things with different types, and there's a strong intuitive sense that, e.g., adding a string to something else is "a function that wants a string".

In that sense, Perl, Tcl, and JavaScript appear to do a lot of implicit conversions (in JavaScript, "a" + 1 gives you "a1"), while Python does a lot fewer ("a" + 1 raises an exception, but 1.0 + 1 does give you 2.0*).

* Actually, in modern Python, that can be explained in terms of OO subtyping, since `isinstance(2, numbers.Real)` is true. I don't think there's any sense in which `2` is an instance of the string type in Perl or JavaScript… although in Tcl, it actually is, since _everything_ is an instance of string.
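
Concretely, that footnote looks like this in code:

    import numbers

    # An int really is a Real in Python's numeric tower, so 1.0 + 1 can be
    # read as subtyping rather than implicit conversion:
    print(isinstance(2, numbers.Real))  # True
    try:
        "a" + 1
    except TypeError as e:
        print(e)  # but there's no implicit conversion to str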

But it's pretty hard to put that into anything beyond a loose, intuitive sense. Why shouldn't there be a + function that takes a string and an int? Obviously (string, int)->string is a perfectly valid type for a function to have, since it's the type of, e.g., the index function.
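
Python's own indexing is one example of that shape, since Python has no separate character type:

    # (str, int) -> str: indexing a string with an int gives back a string
    print("abcdefg"[3])  # d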

Another meaning for strength

There's another, completely orthogonal, definition of "strong" vs. "weak" typing, where "strong" means powerful/flexible/expressive.

For example, Haskell lets you define a type that's a number, a string, a list of this type, or a map from strings to this type, which is a perfect way to represent anything that can be decoded from JSON. There's no way to define such a type in Java. But at least Java has parametric (generic) types, so you can write a function that takes a List of T and know that the elements are of type T; other languages, like early Java, forced you to use a List of Object and downcast. But at least Java lets you create new types with their own methods; C only lets you create structures. And BCPL didn't even have that. And so on down to assembly, where the only types are different bit lengths.

So, in that sense, Haskell's type system is stronger than modern Java's, which is stronger than earlier Java's, which is stronger than C's, which is stronger than BCPL's.

So, where does Python fit into that spectrum? Again, dynamic types make this murky.

In many cases, duck typing allows you to simulate everything you can do in Haskell, and even some things you can't; sure, errors are caught at runtime instead of compile time, but they're still caught. However, there are cases where duck typing isn't sufficient.

For example, in Haskell, if you have a doubly-optional value (e.g., from calling zip_longest* on a list of optional values), you can distinguish None (filled in by zip_longest) from Just(None) (an actual None value from the original list). In Python, they're both None—that's why zip_longest needs a fillvalue parameter, so you can pass an alternate sentinel.

* I'm using Python terminology and syntax instead of Haskell here, so people who don't know Haskell can understand. I assume people who do know both languages can figure out what I mean.
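
Here's a quick sketch of the Python side, including the sentinel trick:

    from itertools import zip_longest

    a = [1, None, 3]
    b = [10, 20]

    # Without a sentinel, a None filled in by zip_longest is indistinguishable
    # from the real None that was already in a:
    print(list(zip_longest(a, b)))  # [(1, 10), (None, 20), (3, None)]

    # With a unique sentinel, the two cases can be told apart:
    MISSING = object()
    for x, y in zip_longest(a, b, fillvalue=MISSING):
        print(x is MISSING, y is MISSING)  # False False / False False / False True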

For another example, in Haskell, the concept of an empty list of integers makes perfect sense; in Python, an empty list is an empty list. So, in Haskell, you could decide that reducing + over that list should return 0*; in Python, you can't.

* In fact, Haskell doesn't let you do this; if you call the reduce function with no start value on an empty list, you get an error. But its type system is powerful enough that you _could_ make this work, and Python's isn't.
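
In Python terms, that difference looks like this:

    from functools import reduce
    from operator import add

    print(reduce(add, [], 0))  # 0, but only because we supplied the start value
    try:
        reduce(add, [])
    except TypeError as e:
        # an empty list carries no element type, so there's no way for
        # Python to know this "should" be 0
        print(e)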

On top of that, in many cases it's arguable whether Python really is simulating a stronger type system. For example, in Haskell, if you have a function that returns either a string or nothing, it returns Maybe String; to access the string, you have to unwrap the Just, even after testing for Nothing. Normally, you do this with pattern matching:
    case spam:
        of None:
            print('I got nothing')
        of Just(s):
            print('I got {}'.format(s))
In Python, the same function would return either None or a str; to access the str, you just use it directly after testing for None, with no need to pull it out of a Just. So, does this mean Python is simulating disjunctive types, and its `is not None` test is doing implicit lowering? On the one hand, you get effectively the same error in Python calling, e.g., len(spam) when spam is None as you'd get in Haskell. On the other hand, if you call, say, __sizeof__ or __lt__ without testing for None, it will work in Python, and the equivalent would definitely not work in Haskell.
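
For comparison, here's a quick sketch of the Python side (the helper name show is mine):

    def show(spam):
        if spam is None:
            print('I got nothing')
        else:
            # no Just to unwrap: after the test, spam is just the str itself
            print('I got {}'.format(spam))

    show(None)    # I got nothing
    show('eggs')  # I got eggs

    # And the part Haskell would reject: None is a real object with methods,
    # so this works with no test at all (16 on a typical 64-bit CPython):
    print(None.__sizeof__())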

I think this is a matter of degrees; the cases where Python gets it "wrong" according to Haskell are just more examples of where duck typing isn't as powerful as Haskell's type system. (And, conversely, the places where Python gets it "right" are examples of where Python is more concise and readable without giving up any power.)

It's been more than a decade since Typical Programmer Greg Jorgensen taught the world about Abject-Oriented Programming.

Much of what he said still applies, but other things have changed. Languages in the Abject-Oriented space have been borrowing ideas from another paradigm entirely—and then everyone realized that languages like Python, Ruby, and JavaScript had been doing it for years and just hadn't noticed (because these languages do not require you to declare what you're doing, or even to know what you're doing). Meanwhile, new hybrid languages borrow freely from both paradigms.

This other paradigm—which is actually older, but was largely constrained to university basements until recent years—is called Functional Addiction.

A Functional Addict is someone who regularly gets higher-order—sometimes they may even exhibit dependent types—but still manages to retain a job.

Retaining a job is of course the goal of all programming. This is why some of these new hybrid languages, like Rust, check all borrowing, from both paradigms, so extensively that you can make regular progress for months without ever successfully compiling your code, and your managers will appreciate that progress. After all, once it does compile, it will definitely work.

Closures

It's long been known that Closures are dual to Encapsulation.

As Abject-Oriented Programming explained, Encapsulation involves making all of your variables public, and ideally global, to let the rest of the code decide what should and shouldn't be private.

Closures, by contrast, are a way of referring to variables from outer scopes. And there is no scope more outer than global.

Immutability

One of the reasons Functional Addiction has become popular in recent years is that to truly take advantage of multi-core systems, you need immutable data, sometimes also called persistent data.

Instead of mutating a function to fix a bug, you should always make a new copy of that function. For example:

function getCustName(custID)
{
    custRec = readFromDB("customer", custID);
    fullname = custRec[1] + ' ' + custRec[2];
    return fullname;
}

When you discover that you actually wanted fields 2 and 3 rather than 1 and 2, it might be tempting to mutate the state of this function. But doing so is dangerous. The right answer is to make a copy, and then try to remember to use the copy instead of the original:

function getCustName(custID)
{
    custRec = readFromDB("customer", custID);
    fullname = custRec[1] + ' ' + custRec[2];
    return fullname;
}

function getCustName2(custID)
{
    custRec = readFromDB("customer", custID);
    fullname = custRec[2] + ' ' + custRec[3];
    return fullname;
}

This means anyone still using the original function can continue to reference the old code, but as soon as it's no longer needed, it will be automatically garbage collected. (Automatic garbage collection isn't free, but it can be outsourced cheaply.)

Higher-Order Functions

In traditional Abject-Oriented Programming, you are required to give each function a name. But over time, the name of the function may drift away from what it actually does, making it as misleading as comments. Experience has shown that people will only keep one copy of their information up to date, and the CHANGES.TXT file is the right place for that.

Higher-Order Functions can solve this problem:

function []Functions = [
    lambda(custID) {
        custRec = readFromDB("customer", custID);
        fullname = custRec[1] + ' ' + custRec[2];
        return fullname;
    },
    lambda(custID) {
        custRec = readFromDB("customer", custID);
        fullname = custRec[2] + ' ' + custRec[3];
        return fullname;
    },
]

Now you can refer to these functions by order, so there's no need for names.

Parametric Polymorphism

Traditional languages offer Abject-Oriented Polymorphism and Ad-Hoc Polymorphism (also known as Overloading), but better languages also offer Parametric Polymorphism.

The key to Parametric Polymorphism is that the type of the output can be determined from the type of the inputs via Algebra. For example:

function getCustData(custId, x)
{
    if (x == int(x)) {
        custRec = readFromDB("customer", custId);
        fullname = custRec[1] + ' ' + custRec[2];
        return int(fullname);
    } else if (x.real == 0) {
        custRec = readFromDB("customer", custId);
        fullname = custRec[1] + ' ' + custRec[2];
        return double(fullname);
    } else {
        custRec = readFromDB("customer", custId);
        fullname = custRec[1] + ' ' + custRec[2];
        return complex(fullname);
    }
}

Notice that we've called the variable x. This is how you know you're using Algebraic Data Types. The names y, z, and sometimes w are also Algebraic.

Type Inference

Languages that enable Functional Addiction often feature Type Inference. This means that the compiler can infer your typing without you having to be explicit:


function getCustName(custID)
{
    // WARNING: Make sure the DB is locked here or
    custRec = readFromDB("customer", custID);
    fullname = custRec[1] + ' ' + custRec[2];
    return fullname;
}

We didn't specify what will happen if the DB is not locked. And that's fine, because the compiler will figure it out and insert code that corrupts the data, without us needing to tell it to!

By contrast, most Abject-Oriented languages are either nominally typed—meaning that you give names to all of your types instead of meanings—or dynamically typed—meaning that your variables are all unique individuals that can accomplish anything if they try.

Memoization

Memoization means caching the results of a function call:

function getCustName(custID)
{
    if (custID == 3) { return "John Smith"; }
    custRec = readFromDB("customer", custID);
    fullname = custRec[1] + ' ' + custRec[2];
    return fullname;
}

Non-Strictness

Non-Strictness is often confused with Laziness, but in fact Laziness is just one kind of Non-Strictness. Here's an example that compares two different forms of Non-Strictness:

/****************************************
*
* TO DO:
*
* get tax rate for the customer state
* eventually from some table
*
****************************************/
// function lazyTaxRate(custId) {}

function callByNameTaxRate(custId)
{
    /****************************************
    *
    * TO DO:
    *
    * get tax rate for the customer state
    * eventually from some table
    *
    ****************************************/
}

Both are Non-Strict, but the second one forces the compiler to actually compile the function just so we can Call it By Name. This causes code bloat. The Lazy version will be smaller and faster. Plus, Lazy programming allows us to create infinite recursion without making the program hang:

/****************************************
*
* TO DO:
*
* get tax rate for the customer state
* eventually from some table
*
****************************************/
// function lazyTaxRateRecursive(custId) { lazyTaxRateRecursive(custId); }

Laziness is often combined with Memoization:

function getCustName(custID)
{
    // if (custID == 3) { return "John Smith"; }
    custRec = readFromDB("customer", custID);
    fullname = custRec[1] + ' ' + custRec[2];
    return fullname;
}

Outside the world of Functional Addicts, this same technique is often called Test-Driven Development. If enough tests can be embedded in the code to achieve 100% coverage, or at least a decent amount, your code is guaranteed to be safe. But because the tests are not compiled and executed in the normal run, or indeed ever, they don't affect performance or correctness.

Conclusion

Many people claim that the days of Abject-Oriented Programming are over. But this is pure hype. Functional Addiction and Abject Orientation are not actually at odds with each other, but instead complement each other.