At the edge of the world: Ruby NilClass versus Eiffel NONE
This post is an analysis of the difference in philosophy between statically and dynamically typed languages, and of the pragmatic ramifications of these philosophical differences. I will use Nil/Null/None-types as the basis for the example.
The Nil value is used in OO programming languages to designate an unassigned value. Any object reference can be nil, which gives it a special role. In effect, Nil is an instance of all types. This is a concept that can only be expressed in a language that supports multiple inheritance. As far as I know, the only programming language that expresses this “correctly” is Eiffel, where the type NONE inherits from all other types in the system. In a language that checks that no object may receive a message that is not defined by its type, this is the only logical conclusion. A variable of any type can be assigned the value NONE, because NONE is an instance of every type. Other static languages treat Nil as a special exception to the type hierarchy. In Java, for example, most of the klunky special-case code I encounter is code for dealing with nulls.
In contrast, Ruby defines NilClass independently of all other classes. As Ruby variables do not have types, any variable can hold any object, regardless of its class. There is no reason you could not make your own NilClass if you wanted to. This works through the principle of Duck Typing: if it responds to the message walk() like a duck and it responds to the message quack() like a duck, it’s an instance of type Duck. The class declarations and inheritance hierarchy have nothing to do with it. This means that the analytical concept of type is separated from the implementation of types. It also means that there’s nothing really special about nil.
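To make this concrete, here is a small Ruby sketch (Duck, RobotDuck and MyNil are made-up names for illustration, not anything from the standard library): nil is an ordinary instance of NilClass, a method happily accepts anything that walks and quacks, and you can roll your own nil-like object in the spirit of the Null Object pattern.

    # nil is an ordinary object: an instance of the class NilClass.
    p nil.class              # => NilClass
    p nil.respond_to?(:to_s) # => true
    p nil.to_a               # => []

    # Duck typing: this method only cares that its argument responds to
    # #walk and #quack; the class hierarchy is irrelevant.
    def exercise(duckish)
      duckish.walk
      duckish.quack
    end

    class Duck
      def walk;  puts "waddle"; end
      def quack; puts "quack!"; end
    end

    class RobotDuck # unrelated to Duck in the inheritance hierarchy
      def walk;  puts "clank clank"; end
      def quack; puts "QUACK.WAV";   end
    end

    exercise(Duck.new)      # works
    exercise(RobotDuck.new) # also works: it walks and quacks like a duck

    # A home-made nil-like object. Nothing in the object model singles out
    # the built-in nil, so you can define your own. (Only the real nil and
    # false are falsy in conditionals, but NilClass itself is not privileged.)
    class MyNil
      def nil?;    true;     end
      def to_s;    "";       end
      def inspect; "my_nil"; end
    end

    maybe_value = MyNil.new
    puts "empty" if maybe_value.nil? # prints "empty"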
At first, Duck Typing seems like a philosophically sloppy but pragmatic solution to the problem of typing: “If we weren’t so lazy, we could categorize the world neatly and correctly into structures of types and subtypes.” However, the world is not like that.
When I first graduated from college, I imagined the world could theoretically be described mathematically. Since then, I’ve learned a few things. One of these things is that categorization is artificial. Categories like color, species, grammatical classes, and even gender are human simplifications: they do not exist in the external world as well-defined things (the same can be said about the distinction between static and dynamic languages, incidentally). When we talk about them, it is always as a pragmatic shortcut, much in the same way as with duck typing in dynamic languages.
The problem with strict categorization is not just that it is time-consuming; it’s that it does not accurately describe the real world. Philosophically, static languages build on a flawed understanding of the world.
PS: This does not mean that I’d always go for a dynamic language. Static languages have some technical advantages over dynamic languages. Most importantly, tools for working with static languages are currently more advanced.