The Nature of Computation


If you were building an AI for a game of chess, it would be strange to do anything other than model the game. But look at the universe we live in and the best equations we have to explain it. Some physicists will tell you that time does not exist, that time and space are really made of the same thing. My conception is that if we put all the greatest minds together working on a chess AI, they would come to the conclusion that there is something more fundamental than the pieces and the squares: some more fundamental substance that they are both constructed from, which would serve us better when creating an AI. Just as space-time is named, we’ll call this fundamental matter square-piece.

This square-piece would allow a much more effective AI. An argument against this might be that we don’t know what reality is defined as, but chess *is* defined as pieces on a board obeying various rules, therefore it can’t be anything else. This is wrong, however. Just because a problem can be defined in a certain way does not mean it cannot be reduced to a simpler problem. Reducing the game so that the AI can make computations on the fundamental matter would be more effective.

I enjoy making wild statements that are difficult to verify (this seems to ruffle some feathers at times, but I don’t care), so here’s another: our brains are actually operating on this fundamental square-piece when playing chess. This is how we can still compete with current-day chess AI that just iterates through potential scenarios on a weak processor that has a ‘mere’ 100 million transistors on it.

I think that to exploit this square-piece, a cellular automaton is needed. The space of algorithms that can be designed by humans is infinite in size, but that does not mean it covers the entirety of the space of algorithms. There are many algorithms our brains could never comprehend that are essential to building intelligent systems. Instead of designing a CA, we need to search for one that does what we need: play chess well.


Elementary CA? Not so fast.


I was exploring the computational universe when I suddenly decided that instead of becoming more complex by adding more colours to my CAs, I’d go backwards. I’d stick with 2 colours and also allow only 2 cells to determine the next cell. The image above shows the result.

This CA is equivalent to rule 60 with a shift to the left. This is because rule 60 essentially ignores the third cell: in all cases, if X Y Z produces A, then X Y (not Z) also produces A. This allows it to be used as a 2-cell CA. The Wolfram Alpha page confirms that the rule has no dependency on the third cell of its neighborhood.
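The independence claim is easy to check mechanically. Here’s a minimal Python sketch (my own, not from the Wolfram Alpha page) that enumerates all eight neighborhoods of rule 60 and confirms that the third cell never matters, so the rule collapses to: new cell = left XOR centre.

```python
# Verify that elementary rule 60 ignores the rightmost cell of its
# neighborhood, so it can be expressed as a 2-cell rule.
RULE = 60

def rule60(left, center, right):
    # Each of the 8 neighborhoods indexes into the binary digits of the
    # rule number, per the standard elementary-CA numbering scheme.
    index = (left << 2) | (center << 1) | right
    return (RULE >> index) & 1

for x in (0, 1):
    for y in (0, 1):
        # Flipping the third cell never changes the output...
        assert rule60(x, y, 0) == rule60(x, y, 1)
        # ...and the 2-cell rule it collapses to is left XOR center.
        assert rule60(x, y, 0) == x ^ y
```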

This rule shows class 3 behavior with random initial conditions. With a single white cell there is still complexity:
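For anyone who wants to reproduce this, evolving the 2-cell rule from a single seeded cell takes only a few lines of Python. The width, step count, and display characters here are arbitrary choices of mine, not anything from the image above:

```python
# Evolve the 2-cell rule (new cell = left XOR centre) from a single
# seeded cell, printing each row as ASCII art.
WIDTH, STEPS = 32, 16
row = [0] * WIDTH
row[0] = 1  # a single non-background cell
for _ in range(STEPS):
    print("".join("#" if c else "." for c in row))
    # Index i - 1 wraps around at the left edge (cyclic boundary).
    row = [row[i - 1] ^ row[i] for i in range(WIDTH)]
```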


Wikipedia states “In mathematics and computability theory, an elementary cellular automaton is a one-dimensional cellular automaton where there are two possible states (labeled 0 and 1) and the rule to determine the state of a cell in the next generation depends only on the current state of the cell and its two immediate neighbors”.

I completely abhor this assumption that there exists a current cell. There is only input and output; no need to make it any more complicated than that.

This links back to my ship of Theseus argument. People assume that a ship exists through time, that it’s the same ship and it changes state. But really that’s just a more complicated definition than it needs to be, though it’s useful in everyday language. You can define things how you like, but I will criticize you if those definitions limit your thoughts (more than another definition would).

It also relates to my Haskell post: in Haskell, there is no state, only input and output. That’s what makes it so mathematically rigorous.

Artificial Intelligence and NKS


For those who aren’t familiar with cellular automata, a quick explanation. The panel on the right side of the image shows the keys; the left side shows the cells generated from them. The first key (all red) says that if the three cells above a given cell are red, red, red, then the cell below them is red. The next key, green, red, red, says that if a cell has green, red, red directly above it, then that cell must be green. To generate some sort of image you need to input an initial condition; I used random initial conditions. The picture goes on for much longer than this.
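A minimal sketch of that scheme in Python. The two keys spelled out above are encoded literally; since the image isn’t reproduced here, the remaining table entries are placeholders I’ve filled in arbitrarily:

```python
# Apply a "keys" rule table: each 3-cell neighborhood above a cell
# determines that cell's colour in the next row.
import random

COLOURS = ["R", "G"]

# The two keys described in the text, encoded literally.
rule = {("R", "R", "R"): "R", ("G", "R", "R"): "G"}
# Placeholder entries for the six keys not spelled out in the text.
for a in COLOURS:
    for b in COLOURS:
        for c in COLOURS:
            rule.setdefault((a, b, c), random.choice(COLOURS))

def next_row(row):
    n = len(row)
    # Wrap at the edges so every cell has three cells above it.
    return [rule[(row[(i - 1) % n], row[i], row[(i + 1) % n])] for i in range(n)]

# Random initial condition, as in the image described above.
row = [random.choice(COLOURS) for _ in range(40)]
for _ in range(10):
    row = next_row(row)
```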

This particular cellular automaton is reversible (this is rare) – you can always deduce the previous cells given a row.
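Reversibility can at least be screened for by brute force. The sketch below is my own heuristic, not a full reversibility proof: it checks a necessary condition for an elementary rule, namely that the global map is injective on cyclic rows of a few small widths.

```python
# Screen elementary CA rules for reversibility: a reversible rule's
# global map must be injective, so no two distinct rows may share an image.
from itertools import product

def step(cells, rule):
    # One update of an elementary CA on a cyclic row of cells.
    n = len(cells)
    return tuple(
        (rule >> ((cells[(i - 1) % n] << 2) | (cells[i] << 1) | cells[(i + 1) % n])) & 1
        for i in range(n)
    )

def looks_reversible(rule, widths=(4, 5, 6)):
    # Necessary condition only: passing this screen does not by itself
    # prove the rule is reversible on all configurations.
    for n in widths:
        images = {step(c, rule) for c in product((0, 1), repeat=n)}
        if len(images) != 2 ** n:
            return False
    return True
```

For instance, rule 204 (the identity) passes the screen, while rule 90 fails immediately: under rule 90 a row and its complement always map to the same image, so previous cells cannot be deduced.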

I believe the computations taking place in the brain are like this cellular automaton. It looks very natural; there is no real structure to it. There are simple rules which create extremely complex behavior. It’s effectively impossible for humans to design an algorithm which would look like this when drawn; this stuff is beyond human comprehension.

I believe that every single cellular automaton does some specific useful computation; it’s just quite difficult to figure out what computation it is doing. The strategy I will take is creating an artificial intelligence by finding a cellular automaton that does the job I ask it to do, rather than designing one from scratch. For example, if I wanted a cellular automaton to add numbers together, I would input, say, R R R W R R R W to represent 3 + 3 and find a cellular automaton whose evolution, given that input, contained R R R R R R W. This would represent 6.
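As a toy version of this search-rather-than-design strategy, here is a Python sketch that brute-forces all 256 elementary rules, recording any whose evolution of the R R R W R R R W input ever contains the target R R R R R R W. The encoding (R as 1, W as 0), the padding, and the step limit are illustrative choices of mine, and I make no claim about which rules, if any, the search turns up:

```python
# Brute-force search over the 256 elementary CA rules for any whose
# evolution of an input pattern ever contains a target pattern.
def step(cells, rule):
    n = len(cells)
    return [
        (rule >> ((cells[(i - 1) % n] << 2) | (cells[i] << 1) | cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

def contains(row, pattern):
    # True if `pattern` occurs as a contiguous run inside `row`.
    s = "".join(map(str, row))
    return "".join(map(str, pattern)) in s

def search(initial, target, max_steps=20):
    hits = []
    for rule in range(256):
        cells = list(initial)
        for _ in range(max_steps):
            cells = step(cells, rule)
            if contains(cells, target):
                hits.append(rule)
                break
    return hits

initial = [1, 1, 1, 0, 1, 1, 1, 0] + [0] * 8  # R R R W R R R W, padded
target = [1, 1, 1, 1, 1, 1, 0]                # R R R R R R W, i.e. 6
candidates = search(initial, target)
```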