I thought it was about time I wrote about topics where I’m an amateur. This time: Experimental philosophy.

As a computer programmer, I often entertain myself by writing computer programs. Last Easter I stayed up a few nights playing with an insignificant but entertaining program. During a discussion with my philosopher uncle, I discovered that this program might provide some insight as to why determinism is, if not dead, then at least lame.

The game of Blokus is a strategy game for four players. Each player in turn places Tetris-like pieces on a shared board according to a set of restrictions. The winner is whoever has the fewest pieces left when no player can place any more pieces.

This last Easter, I enjoyed some computer programming fun with the game of Blokus. First, I made a program that let players place pieces on a computer board in turn (so-called hot-seat multiplayer, since all players use the same computer and, presumably, the same chair). Once I had gotten this to work, I started looking into whether I could make the computer “solve” the game. That is: Is there a configuration that allows all players to place all their pieces on the board?

My attempts at “solving” the game didn’t go too well, but they illustrate the limits of philosophical determinism. My crude approach was basically to have player 1 place a random piece, then have player 2 place another random piece, and so on. When a player can no longer place a piece, the program undoes the last move and tries another random move. Eventually, it will have tried all possible games.
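The loop above is a classic randomized backtracking search. Here is a minimal sketch of the idea, with the Blokus-specific parts (board, legal moves) abstracted away as functions the caller supplies; none of this is my actual program, just the shape of it:

```python
import random

def solve(state, legal_moves, apply_move, undo_move, is_solved):
    """Randomized backtracking: try random moves; when stuck, undo and retry."""
    if is_solved(state):
        return True
    moves = legal_moves(state)
    random.shuffle(moves)              # try the possible moves in random order
    for move in moves:
        apply_move(state, move)
        if solve(state, legal_moves, apply_move, undo_move, is_solved):
            return True
        undo_move(state, move)         # dead end: undo the last move, try another
    return False                       # every move from here leads to a dead end
```

For Blokus, `state` would be the board plus each player's remaining pieces, and `is_solved` would check that all pieces have been placed. The same skeleton solves any puzzle you can phrase this way.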

The problem with this approach is that it takes a very long time. If each round has, say, 20 possible moves, and each game takes over 80 moves, that comes out to 20^80, which is more than 10^104 possible games. My program would in all likelihood still be searching for a solution when the sun explodes about five billion years from now.
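The arithmetic is easy to check. Assuming the loose figures above (20 moves per round, 80 rounds) and granting the program a generous billion games per second until the sun gives out:

```python
# Rough count of possible games: ~20 options per move, ~80 moves per game.
possible_games = 20 ** 80                         # about 1.2e104

# Generously assume a billion games checked per second for five billion years.
seconds_until_sun_dies = 5e9 * 365 * 24 * 3600    # about 1.6e17 seconds
games_checked = 1e9 * seconds_until_sun_dies      # about 1.6e26 games

print(f"possible games: {possible_games:.1e}")
print(f"games checked:  {games_checked:.1e}")
```

Even with those flattering assumptions, the program gets through roughly a 10^78th of the search space before the deadline.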

But if I was extremely lucky with my random numbers, I could have found a solution on the first try.

At this point, a brief detour into the world of random numbers on a computer is needed. For most purposes, there is no such thing as a truly random number on a computer. Instead, the computer generates a sequence of numbers using a mathematical formula that is complex enough to be practically unpredictable. But if you know the starting condition of the random number generator, all subsequent numbers are fully determined.
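Python's standard library makes this easy to see: seed the generator with the same value twice and it repeats itself exactly. (Python's generator is a far more sophisticated algorithm than the toy one discussed next, but it is just as deterministic.)

```python
import random

random.seed(12345)                # fix the starting condition
first_run = [random.randint(0, 99) for _ in range(5)]

random.seed(12345)                # same starting condition ...
second_run = [random.randint(0, 99) for _ in range(5)]

assert first_run == second_run    # ... same "random" numbers, every time
```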

To see why this is so, let’s examine one algorithm for generating random numbers: the linear congruential generator. In a very simplified version of this algorithm, we generate a sequence of random numbers as follows: Start with the number of seconds since midnight. For each new number, multiply the previous number by 51, add 67 (these numbers are chosen arbitrarily for this thought experiment), and remove everything except the last four digits. If we need to pick one move out of a hundred options, we can look at the last two digits of each number in this sequence.
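This toy generator fits in a few lines (the constants 51 and 67 and the four-digit state are the arbitrary choices from the thought experiment, not anything used in practice):

```python
def lcg_moves(seconds_since_midnight, count):
    """Toy linear congruential generator:
    state -> (state * 51 + 67) mod 10000; each move is the last two digits."""
    state = seconds_since_midnight
    moves = []
    for _ in range(count):
        state = (state * 51 + 67) % 10000   # keep only the last four digits
        moves.append(state % 100)           # the last two digits pick the move
    return moves
```

Seeded with 12:03:00 (43 380 seconds past midnight), it produces the moves 47, 64, 31, 48 and 15, and exactly the same five moves on every run.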

So when I run the program, the starting state of the number generator is determined by the computer clock. If I start it when I get back from lunch at exactly 12:00:00, the first five moves would be numbers 67, 84, 51, 68 and 35. If I spent three minutes getting a cup of coffee before running the program at 12:03:00, the moves would instead be 47, 64, 31, 48 and 15. And if I took a sip of coffee first and started the program at 12:03:01, the moves would instead be 98, 65, 82, 49 and 66.

If the clock was used like this to set the initial state of the program, the whole game would from that point on be perfectly determined. Start the program again at the same exact moment, and the same exact thing happens. If there is a solution to the problem, some starting points will result in that solution being produced.

If you were able to read the electrical state of the computer at the starting time, that information would be enough to calculate whether a solution to the game would be found.

This property of random number generators can actually have financial consequences. In his book “Exploiting Software”, computer security expert Gary McGraw uses as an example an online poker site that had less than perfect understanding of the ramifications of the computer being deterministic.

In order to show its users that it was fair, this particular gambling site had published the algorithm the computer used to randomly “shuffle” the “deck” in the game. Using this information and a clock, an intruder was able to narrow the shuffle down to about 10 000 possible random number sequences. By looking at the initial five cards visible to him as a player, the intruder was able to narrow it down further to a single sequence. This meant that he knew every card that had been dealt in the game. In the game of poker, knowing your opponents’ cards is a big advantage.
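The general shape of the attack is simple to sketch. This is not the actual exploit from the book; it uses Python's own generator as a stand-in for the site's published shuffler, and a made-up seed standing in for the clock reading, but the narrowing-down step is the same idea:

```python
import random

def shuffle_for_seed(seed, deck_size=52):
    """Deterministic 'shuffle': the entire deck order follows from the seed."""
    rng = random.Random(seed)
    deck = list(range(deck_size))
    rng.shuffle(deck)
    return deck

def narrow_down(candidate_seeds, visible_cards):
    """Keep only the seeds whose shuffle begins with the cards we can see."""
    n = len(visible_cards)
    return [s for s in candidate_seeds
            if shuffle_for_seed(s)[:n] == visible_cards]
```

If the clock-based seed is known to within 10 000 possibilities, checking each candidate against just five visible cards almost always leaves a single match, and with it the entire deck.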

So we see that computer programs are deterministic. Given the same starting conditions, the computer will play the same sequence of games every time. For a computer version of a game like poker to be meaningful, it is critically important that this causality runs one way only. If you can determine the starting conditions from the observed state, the game loses its point.

Furthermore, a computer program like this is susceptible to the butterfly effect. If the state of the random number generator in my Blokus program were changed just slightly, the program would play a totally different sequence of games. Not only would the sequence of games be totally different, it would be unpredictably different. The game played one second will bear no resemblance to the game played the next second.
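You can see the effect by moving the starting time one second, here again with Python's built-in generator standing in for the one in my program:

```python
import random

def moves_from_seed(seed, count=20):
    """Draw `count` move numbers from a generator seeded with `seed`."""
    rng = random.Random(seed)
    return [rng.randint(0, 99) for _ in range(count)]

# Two starting states one second apart (43 380 and 43 381 seconds past
# midnight) produce sequences with no recognizable relationship.
noon_coffee = moves_from_seed(43380)
one_sip_later = moves_from_seed(43381)
```

The two sequences diverge from the very first draw; nothing about one tells you anything useful about the other.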

If the starting condition of the computer were known to you, you could of course find out what sequence of games would be played just by running the program. But it is quite unlikely that anything short of running the program would suffice. In this case, since the program itself is the thing that determines the outcome, there is no simpler way to calculate the outcome than to observe it. And even a slight error in the starting conditions translates into a completely different end result. Fans of Douglas Adams will appreciate the similarity with the computer built to find the question to which 42 was the answer.

And here is the conundrum for any would-be supporter of material determinism: Given that my little computer program is absurdly simpler than the world, is it meaningful to talk about determinism? No system simpler than the world itself would be able to simulate the outcome of a given set of starting conditions, and a small change in those conditions has a butterfly effect on the outcome. If determinism isn’t dead, it’s at least lame.