Creating an app to investigate a theory about how humans process data

I suspect the human brain is capable of combining data in a way that allows it to arrive at correct conclusions with a higher degree of accuracy than at first seems possible, given the limitations of the data used to reach those conclusions. By writing an app, I can precisely control the data given to the user and measure these effects.

In the app there will be three types of test, each repeated many times. The first presents the user with a circle, then a second circle of a different size, and asks which was larger. The second is the same except with sounds: the user is asked which was lower pitched. The third combines the two into one: a circle appears with a beep and disappears, then another circle appears with another beep. The user is then asked which was both lower pitched and larger (the lower pitch will always be paired with the larger circle).
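To make the setup concrete, here is a minimal sketch in Python of how a combined trial might be generated. The size and pitch ranges are invented for illustration; they are not taken from the actual app:

```python
import random

def make_trial(difference=0.10):
    """One hypothetical combined trial: two circle/beep pairs whose sizes and
    pitches differ by `difference`, the lower pitch always on the larger circle."""
    base_radius = random.uniform(40, 80)    # pixels; arbitrary range for this sketch
    base_pitch = random.uniform(300, 600)   # Hz; arbitrary range for this sketch
    larger = {"radius": base_radius * (1 + difference), "pitch": base_pitch}
    smaller = {"radius": base_radius, "pitch": base_pitch * (1 + difference)}
    # Randomise the presentation order; the correct answer is always `larger`.
    return random.sample([larger, smaller], 2)
```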

From this data, graphs can be plotted of % difference in size against the probability of it being correctly selected, and so on. But what’s most interesting is trying to predict the results of the third test using only the data from the first two. Suppose a user has 66% accuracy in the first two tests when the stimuli differ in intensity by 10%. What is the best accuracy he could be expected to reach in the third test when both stimuli differ by 10%? Keep in mind that users can guess.

It’s tempting to say 66%, but that only holds if the user is 66% confident in his answer 100% of the time. What’s interesting is that it could instead be the case that a third of the time the user is 100% certain of his answer and the other two thirds of the time he makes a blind guess, which also works out to 66.7% on a single test (1/3 + 2/3 × 0.5). To calculate his expected accuracy on the combined test, we first work out the probability that he encounters at least one stimulus he is certain about, which is 1 − (2/3) × (2/3) = 5/9 ≈ 55.6%. Then we calculate the probability that he is certain about neither stimulus and halve it (as in that case he is guessing, so has a 50/50 shot), which is (2/3) × (2/3) × 0.5 = 2/9 ≈ 22.2%. Adding these together gives 7/9 ≈ 77.8% accuracy.
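A few lines of Python confirm the arithmetic of this certain-or-guessing model (the 1/3 certainty rate is the assumption described above, not measured data):

```python
p_certain = 1 / 3                      # certain about a single stimulus
p_unsure = 1 - p_certain               # blind guess on a single stimulus

# Single test: certain, or else a lucky 50/50 guess.
single = p_certain + p_unsure * 0.5    # = 2/3, about 66.7%

# Combined test: certain about at least one stimulus, or else a lucky guess.
at_least_one = 1 - p_unsure ** 2       # = 5/9, about 55.6%
lucky_guess = p_unsure ** 2 * 0.5      # = 2/9, about 22.2%
combined = at_least_one + lucky_guess  # = 7/9, about 77.8%

print(f"single: {single:.3f}, combined: {combined:.3f}")
```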

If the human mind analyses the sound and size stimuli independently, it is impossible to achieve an accuracy above 77.8%. But if some process in the human brain combines the stimuli, a higher degree of accuracy becomes possible. My suspicion is that the brain may do exactly this, and that it may be possible for such a user to surpass the 77.8% limit.

An example of how this could be possible is to think of a neuron as a bucket that fires when water overflows the top. A test of size or pitch alone might fill it to 80% of capacity, but both at the same time might push it to 160%, making it overflow and fire away, leading the person to reach a conclusion. This isn’t meant to be an explanation of how I think the mind works; it is merely an example showing that mechanisms could exist which make accuracy above 77.8% not totally impossible.
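Here is a toy Monte Carlo version of that bucket picture, purely illustrative: the uniform “fill” distribution and the threshold are made up so that a single stimulus produces certainty exactly a third of the time, matching the 66.7% user above.

```python
import random

random.seed(0)
TRIALS = 100_000
THRESHOLD = 1.0  # the bucket "fires" (a confident, correct answer) above this level

def fill():
    # Evidence from one stimulus: uniform on [0, 1.5], so a lone stimulus
    # overflows the bucket 1/3 of the time (the certain-or-guessing user).
    return random.uniform(0, 1.5)

def accuracy(combined):
    correct = 0
    for _ in range(TRIALS):
        level = fill() + (fill() if combined else 0.0)  # combined trials sum evidence
        if level > THRESHOLD:
            correct += 1          # overflow: the user simply knows the answer
        elif random.random() < 0.5:
            correct += 1          # no overflow: a blind 50/50 guess
    return correct / TRIALS

print(f"single-modality accuracy: {accuracy(False):.3f}")  # about 0.667
print(f"combined accuracy:        {accuracy(True):.3f}")   # about 0.889
```

In this toy model, the summed evidence pushes the bucket over the threshold on trials where neither stimulus alone would have, which is exactly how the 77.8% ceiling gets broken.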

In summary, I set out to investigate whether the human brain is capable of using data in a way that makes the utility of the data greater than the sum of its parts.

The Nature of Computation


If you were building an AI for a game of chess, it would be strange to do anything other than model the game. But look at the universe we live in and the best equations we have to explain it. Some physicists will tell you that time does not exist, that time and space are really made of the same thing. My conception is that if we put all the greatest minds together to work on a chess AI, they would conclude that there is something more fundamental than the pieces and the squares: some more fundamental substance from which both are constructed, and which would serve us better when creating an AI. In the same spirit as space-time, we’ll call this fundamental matter square-piece.

This square-piece would allow a much more effective AI. One objection might be that while we don’t know what reality is defined as, chess *is* defined as pieces on a board obeying various rules, and therefore it can’t be anything else. This is wrong, however. Just because a problem can be defined in a certain way does not mean it cannot be reduced to a simpler problem, and reducing the game so that the AI can compute on the fundamental matter would be more effective.

I enjoy making wild statements that are difficult to verify (this seems to ruffle some feathers at times, but I don’t care), so here’s another: our brains actually operate on this fundamental square-piece when playing chess. This is how we can still compete with present-day chess AI that just iterates through potential scenarios on a weak processor with a ‘mere’ 100 million transistors.

I think that to exploit this square-piece, a cellular automaton is needed. The space of algorithms that humans can design is infinite in size, but that does not mean it covers the entirety of the space of algorithms. There are many algorithms our brains could never comprehend that are essential to building intelligent systems. So instead of designing a CA, we need to search for one that does what we need: play chess well.
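As a gesture at what “searching rather than designing” could look like, here is a minimal sketch of a random search over the 256 elementary cellular automaton rules. The fitness function is a placeholder that just rewards non-trivial dynamics; actually scoring a CA on chess play is the hard, unsolved part.

```python
import random

def step(cells, rule):
    """One update of an elementary (binary, radius-1) CA on a ring."""
    n = len(cells)
    return [
        (rule >> (4 * cells[(i - 1) % n] + 2 * cells[i] + cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

def fitness(rule):
    # Placeholder objective: reward rules whose evolution isn't static.
    # A real search would somehow score the CA's play on chess positions.
    cells = [random.randint(0, 1) for _ in range(64)]
    later = cells
    for _ in range(32):
        later = step(later, rule)
    return sum(a != b for a, b in zip(cells, later))

best = max(random.sample(range(256), 50), key=fitness)  # crude random search
print(f"best rule found: {best}")
```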

Why Open Source Inevitably Prevails

Software companies can’t keep adding useful features to software forever; eventually they will run out of ideas for useful features. But the pressure to innovate and differentiate themselves from their rivals is immense. Their solution is to continue adding features and tell themselves that those new features are useful when they are actually useless. Those features will become known as bloat: the plague that haunts us all.

FOSS (Free and Open Source Software) will eventually catch up to the closed-source solution. It’s inevitable, because there is a limit to how many useful features software can have.

An analogy: suppose the common ruler has just been invented. A company sells it for $100 and makes large sums of money. Another company comes along, figures out how it is produced, and sells it for $0. The original company is then pressured to innovate, so they add a compass to the ruler: it can now tell you which direction north is. But this is just bloat; it only makes the item harder to use.

We’re finally beginning to see that our desktop computers have enough RAM for the vast majority of software. The number of programs that would improve from the user having more RAM is shrinking rapidly, and this is why we’re seeing a decline in desktops. Desktops used to be the only machines powerful enough to run most programs, but now the smartphone and the laptop have begun to dominate.