I’ve been focused more on proving directly that P = NP by tackling an NP-complete problem. For the longest time I thought the solution revolved around starting at the final state and reversing backwards to reach the initial state, but the real solution is even more profound than that. The key is to start at the beginning and then reverse to the end. This is because in a reversible computer with a finite number of states, any given state is guaranteed to loop. Irreversible computers will also inevitably loop, but the state you input, unlike in reversible computers, is not necessarily part of that loop.

My main focus is on the boolean satisfiability problem. My idea is basically as follows: use a traditional algorithm with exponential time complexity to iterate through all potential inputs, checking whether each one satisfies the formula, then once a solution is found return the bits to their original state. It’s the final part that’s tricky. As a many-to-one function is not possible in a reversible computer, returning the final correct output to the original state is difficult. There can be no simple sorting of the bits so that they’re ordered as they were in the beginning, as that would be a many-to-one function – it fundamentally cannot work. Therefore an algorithm must be used that is one-to-one and returns them, and my suspicion is that the boolean test itself is that algorithm. In other words, I need to reuse the boolean test in some way in order to sort the bits back to their original state. I have yet to figure out precisely how to do this, though. It may involve things like using it in its inverse form, or in its reflected form (so that whatever was input can be returned directly), or some combination of these things.
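
To make the forward, exponential-time half of this concrete, here is a minimal brute-force satisfiability checker in Python. It is purely a sketch of the search step – nothing about it is reversible – and the clause encoding (lists of signed integers) is my own choice for illustration.

```python
from itertools import product

def brute_force_sat(num_vars, clauses):
    """Try every assignment; a clause is a list of signed ints, e.g. 3 means x3, -3 means NOT x3."""
    for bits in product([False, True], repeat=num_vars):
        # an assignment satisfies the formula if every clause contains at least one true literal
        if all(any(bits[abs(lit) - 1] == (lit > 0) for lit in clause) for clause in clauses):
            return bits   # the satisfying assignment that would later have to be "uncomputed"
    return None

# (x1 OR x2) AND (NOT x1 OR x2)  ->  satisfied by x1=False, x2=True
print(brute_force_sat(2, [[1, 2], [-1, 2]]))
```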


Cellular automata are worlds with simple rules that can result in chaotic and complex behaviour. For example, rule 30. This pyramid is produced by following the 8 keys: the 3 cells above determine the colour of the cell below them, and the pattern is generated procedurally starting from the top and working downwards.
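
As a concrete illustration (my own sketch, not something from the original post), here is how Rule 30’s 8 keys generate each row from the row above it:

```python
def rule30(width=63, steps=31):
    # the 8 keys: neighbourhood (left, centre, right) -> new cell, as defined by Rule 30
    rule = {(1,1,1): 0, (1,1,0): 0, (1,0,1): 0, (1,0,0): 1,
            (0,1,1): 1, (0,1,0): 1, (0,0,1): 1, (0,0,0): 0}
    row = [0] * width
    row[width // 2] = 1                       # a single black cell at the tip of the pyramid
    for _ in range(steps):
        print("".join("#" if c else " " for c in row))
        row = [rule[(row[i - 1], row[i], row[(i + 1) % width])] for i in range(width)]

rule30()
```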

What’s even more interesting is that some of these rule sets are Turing complete. This means that given the right input (the right set of starting cells), they can compute anything. What’s more, some cellular automata are reversible, in that if you know the current state, you can compute the previous state. An example of this is rule 51, in which every bit is simply flipped. Unfortunately this particular rule isn’t both Turing complete and reversible, which is what is desired. But what’s so important about combining these two things? Consider the following thought experiment.

Person A multiplies two large primes to get a semiprime (a number which has exactly 2 prime factors), tells person B what the semiprime is, and asks him to figure out which two numbers he multiplied (i.e. to factor the semiprime). Person B is aware of the location and velocity of every atom in the universe. Trying to figure out what the 2 primes were in the traditional way would take exponential time, but were person B simply to compute the past by simulating the universe going backwards in time, it would only take as long as it took A to multiply those two numbers (in terms of the complexity class, given that reversing time for x seconds takes an amount of time proportional to x). In other words, by focusing on reversing time we can transform an exponentially difficult problem into a merely polynomially difficult one.

Taking this idea and applying it in a practical way can be done as follows: design a computer inside a Turing complete, reversible cellular automaton that takes 2 numbers as input, tests that they are prime using the AKS primality test, sorts the numbers in ascending order, then multiplies them and outputs the result. To factor a semiprime, you need only set the semiprime as the output and set the automaton to run backwards in time, and the bits would flow back through the multiplying machine and 2 primes would form as if by magic.
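
A plain (and entirely irreversible) Python sketch of that forward computation; trial division stands in for the AKS test purely to keep the example short, and the function names are mine:

```python
def is_prime(n):
    # stand-in for the AKS primality test
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def forward(p, q):
    assert is_prime(p) and is_prime(q), "both inputs must be prime"
    p, q = sorted((p, q))      # sorting makes the map one-to-one on unordered pairs
    return p * q

print(forward(61, 53))         # 3233 -- running this step "backwards" is the hard part
```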

Were this algorithm to work, it would not on its own prove that P = NP. Only a polynomial-time algorithm for a problem classed as NP-complete could do that. However it would be an important step forward, as it would be the first algorithm to efficiently solve a problem thought to be NP-intermediate. An interesting fact is that if P != NP there must be at least one problem that belongs in the class NP-intermediate (this is Ladner’s theorem), so revealing that one of these problems actually belongs in P would lower confidence that P != NP.

It would still be a significantly powerful algorithm to invent, though. If the totient function were vulnerable to being solved in the same way (and I suspect it is), it would become trivial to break RSA encryption, meaning that you could hack into virtually any system online with ease. You would be able to easily acquire the login details of any user, such as their Gmail account or their online banking account. Online banking systems would be totally insecure.

One seeming contradiction in all of this is that the logic gates in computers are normally irreversible. NAND, XOR and OR gates all take 2 inputs and return 1 output. In an OR gate, the inputs (1, 1) and (1, 0) both produce 1, so how can a reversible machine magically know which input produced it? It can’t. This is why traditional logic gates cannot be used. Instead, a more advanced kind of logic gate is used, known as the Fredkin gate. Its special property is that the number of 1s and 0s in the input exactly matches the number in the output – everything is conserved. These gates are Turing complete, so they can be used to perform any arbitrary calculation.
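
Since a Fredkin gate is just a controlled swap, the conservation property is easy to demonstrate in a few lines (my sketch, not from the post):

```python
def fredkin(c, a, b):
    # controlled swap: if the control bit c is 1, swap a and b; otherwise pass everything through
    return (c, b, a) if c else (c, a, b)

for bits in [(0, 0, 1), (1, 0, 1), (1, 1, 0)]:
    out = fredkin(*bits)
    assert sorted(out) == sorted(bits)   # the numbers of 1s and 0s are conserved
    assert fredkin(*out) == bits         # and the gate is its own inverse
```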


In the app there will be 3 types of test, each of which will be repeated many times. The first test presents the user with a circle, then a second circle of a different size appears, and the user is asked which was larger. The second test is the same except with sounds, and the user is asked which was lower pitched. The third test combines the two: a circle appears with a beep and then disappears, then another circle appears with another beep. The user is then asked which one was lower pitched and larger (the lower pitch will always be paired with the larger circle).

From this data, graphs can be plotted of % difference in size vs probability it is correctly selected, and so on. But what’s most interesting is trying to predict the results of the third test using only the data from the first two tests. Suppose a user has a 66% accuracy in the first two tests when the stimuli differ in intensity by 10%. What is the best accuracy he could be expected to reach in the third test when both stimuli differ by 10%? Keep in mind users can guess.

It’s tempting to say 66%, but this is not true. If in the first two tests the user were 66% confident his answer was correct 100% of the time, then this would be true. But it could instead be the case that 33% of the time the user is 100% certain of his answer and 66% of the time he makes a blind guess. To calculate his expected accuracy we first work out the probability that he encounters at least one stimulus he is certain about, which is 1 – (2/3) * (2/3) = 55.5%, then we calculate the probability that he doesn’t encounter a stimulus he is certain about and halve it (as in this case he is guessing, so has a 50/50 shot), which is (2/3) * (2/3) * 0.5 = 22.2%. Adding these together gives 77.7% accuracy.
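
A quick sanity check of that arithmetic, under the all-or-nothing model above (the user is either completely certain or guessing blindly); the function names are my own:

```python
def certainty_rate(accuracy):
    # accuracy = p_certain + (1 - p_certain) * 0.5  =>  p_certain = 2 * accuracy - 1
    return 2 * accuracy - 1

def combined_accuracy(acc_size, acc_pitch):
    p_guess_both = (1 - certainty_rate(acc_size)) * (1 - certainty_rate(acc_pitch))
    return (1 - p_guess_both) + p_guess_both * 0.5

print(combined_accuracy(2/3, 2/3))   # 0.777..., i.e. 7/9
```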

If the human mind is independently analysing the sound and size stimuli, it is impossible to achieve an accuracy above 77.7%. But if there is some process in the human brain that combines stimuli, it becomes possible to achieve a higher degree of accuracy. My suspicion is that the human brain may do exactly this, and that it may be possible for such a user to surpass the 77.7% limit.

An example of how this could be possible is to think of a neuron as a bucket that fires when the water overflows the top. If you give it a test of size or pitch alone, the neuron may be filled to 80% capacity, but if you present both at the same time it may overflow at 160% and fire away, leading the person to reach a conclusion. This isn’t meant to be an explanation of how I think the mind works; it is merely meant as an example showing that mechanisms could exist which make it not totally impossible to achieve above 77.7% accuracy.

In summary, I set out to investigate whether the human brain is capable of using data in a way that makes the utility of the data greater than the sum of its parts.


Allowing the possibility of states that have infinite value (such as heaven) implies that any decisions that could potentially lead to reaching these infinitely valuable states necessarily have equal value. For example, exercising for 50 minutes and punching one’s self in the face are of equal value to someone who believes that the probability of reaching heaven after doing either of these things is not exactly 0.

Assigning a future state the value of infinity has a very destructive effect on a decision system. Because a future state of infinite value has a non-zero probability of being reached, all of its parent nodes have infinite value too, and that ultimately leads to the current node also having infinite value, meaning that there will be indifference between all current decisions. That makes it not much of a decision system anymore, because it has no way to discern between any choices.

Those who believe in states of infinite value, but don’t believe that all decisions which leave open the possibility of reaching such states have equal value, are forced to resolve the contradiction. Either they must accept that essentially all decisions have equal value to them, or they must deny the possibility of states that have infinite value – it’s not consistent to hold both beliefs. And it’s my belief that denying the existence of states of infinite value is the more intuitive position, and one most people already agree with. I don’t have any proof that this is the correct assumption, but I think most people would agree with me that anything that results in almost all decisions having exactly the same value is not valid.

While adding a new rule to decision systems is a heavy-handed approach to solving what seems to be a small problem in Pascal’s Wager, it actually solves the problem of the destructive effect infinities have on decision systems.

At this point you may be thinking, “But wait, if heaven lasts infinitely long and each year has a value greater than 0, its value must be infinite.” In that case I would like to outline how, even though heaven can last infinitely long, it does not necessarily have infinite value even though each year has a positive value. Consider the series 1/2 + 1/4 + 1/8 + … Does it add up to infinity? It doesn’t; here’s a visual proof.
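
In symbols, this is just the geometric series

$$\frac{1}{2} + \frac{1}{4} + \frac{1}{8} + \dots = \sum_{n=1}^{\infty} \frac{1}{2^{n}} = 1,$$

so if year $n$ of heaven is worth $1/2^{n}$, heaven’s total value is exactly 1 despite lasting forever.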

Things that have an infinitely long existence in the temporal realm do not necessarily have infinite value, even though each instant of that time has a value greater than 0.

Similarly, you can assign a probability distribution over the value of an unknown entity such that the expected value never adds up to infinity. E.g. the probability it is worth 1 is 1/2, the probability it is worth 2 is 1/4, and so on. This is another way to assign a positive probability to every possible value of an unknown entity without requiring you to be absolutely certain that it is not worth more than a given amount (i.e. without needing to assign a probability of exactly 0 to it having a value above x).
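
Even the expected value under this distribution stays finite:

$$\mathbb{E}[V] = \sum_{n=1}^{\infty} \frac{n}{2^{n}} = 2.$$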

In summary, this post is meant to show that you must either accept that almost all decisions have the same value, or accept that states of infinite value do not exist.


Goals are self-fulfilling prophecies. As you achieve more goals, the power of goals as self-fulfilling prophecies increases. This is because your confidence in them increases, and in the realm of the mind, your expectation of achieving a goal increases the probability that you will succeed at it. That’s the nature of it.

It would therefore be a bad idea to routinely set yourself goals that you can never achieve, such as losing 100 pounds in a month, because the moment you set such a goal you already have a strong belief that you won’t complete it, having never completed your previous goals.

This is why setting realistic goals is beneficial. Setting goals which are too easy is a waste of time. You want to be at the point where setting the goal makes the difference between achieving it and not. You would have lost 1 pound in a month without setting yourself that goal, making the process of setting such a goal wasteful. You need to set goals that you wouldn’t have achieved without setting them.

With this new understanding of goals, it surprisingly does make sense to be happy that you completed a goal, because it gives you more power over your decisions. Your belief in goals increases as a result of finishing one, so goals you set in the future will have a greater probability of success.

Optimizing the goals you set for yourself is key. I recommend setting goals which you think have a 50% chance of success to maximize your growth. The further you stray from 50% the less influence you have over whether it is completed or not. If you think your chance of success is 99% then it doesn’t really matter what you do, you’re going to succeed anyway. If your chance of success is 1% then it doesn’t really matter what you do, you’re going to fail anyway.

With this new understanding of goals, I began to value them a lot more, and I started to build a program to visualize them. The basic idea is for it to visualize the probability that you succeed at a range of goals by analyzing goals you have input in the past, along with a record of whether each goal was completed. It uses this information in a very strict way, and it can only work with goals that are time based, such as “jog for 30 hours in the next 2 weeks” or “spend 40 hours per week learning to program”.

Here is an image of the program.

This graph visualizes 2 goals: one was successful and the other was not. The goals shown were the following:

8 hours work in 24 hours: Success

12 hours work in 24 hours: Failure

Before I go further in explaining what is shown, I need to explain the logical deductions that can be made about this person’s ability to work using the information from a single goal. Here are the simple deductions:

If a person does x hours of work in y hours, he must therefore be able to do x hours of work in z hours where z >= y. This is fairly intuitive – it wouldn’t make any sense for a person not to be able to do the same amount of work given more time.

If a person does x hours of work in y hours, he must therefore be able to do Maximum(0, x-z) hours of work in y-z hours where z <= y. This one’s slightly harder to grasp. The basic idea is to think about the worst-case scenario of how a worker might complete his work and plan for that. Perhaps there is a worker who always waits 16 hours before starting his goal. If he set a goal to do 8 hours of work in 24 hours, he would succeed. This deduction then means he can also do 7 hours of work in 23 hours, 6 hours of work in 22 hours, and so on.
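
Putting the two deductions together (a minimal sketch, with names of my own choosing): given one completed goal of x hours of work within a y-hour window, the amount of work we can safely infer is doable in any other window of t hours is:

```python
def implied_hours(x, y, t):
    """Hours of work provably doable in a t-hour window,
    given a completed goal of x hours of work in y hours."""
    if t >= y:
        return x                    # deduction 1: more time never hurts
    return max(0, x - (y - t))      # deduction 2: worst case, every lost hour came off the work

# "8 hours work in 24 hours: Success" implies 7 hours in 23, 6 in 22, and so on
print([implied_hours(8, 24, t) for t in (24, 23, 22, 17, 16)])   # [8, 7, 6, 1, 0]
```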

These logical deductions can basically never be wrong without weird assumptions (such as a person being able to do more work with a smaller time limit). It could be true that a person is more motivated when there’s less time left, but if you’re given more time then you’re inevitably going to end up with a small amount of time left at some point anyway.

Back to the graph. Here it is again with the goals annotated.

It could be that a worker does do more work given less time, but that isn’t assumed by the algorithm. It simply assigns an unknown value for those amounts of time.

If the areas overlap, they mix colors. If green overlaps red it becomes yellow.

This is how the graph looks once I’ve added the goal “14 hours work in 20 hours: Success”. The old success goal has been totally eclipsed (because the new goal does more work in less time) and there has been some overlap, creating a yellow area that represents a 50% chance of success. In reality the edge would never go straight from 50% to 100%; it would be a gradient, but that will only show up once the user has completed many goals. Trying to produce the gradient using an algorithm would result in incorrect values – it’s impossible to know exactly how the transition from 0% success to 100% success will behave. There’s not enough data to do this (and there never will be) without potentially creating bias. By bias I mean that some users could look better than other users, even though they’re just as capable overall, purely as a result of how the algorithm changes their results.

I’m still working on this; it’s a growing project. The hardest part is getting goals to work together. Setting goals of different types is quite tricky, since one goal may be more important than another. The basic problem is that if you set two different goal types in the same period, it looks worse than if you had set just one of those goals alone, since you would have had more time for it (you wouldn’t spend any time on a goal you didn’t set), yet it shows up on the graph the same. This is why I haven’t yet fully figured out how to use multiple goals together.

I suspect the solution is for the user to input their ideal ratio of goals: for every 2 hours of programming, do 1 hour of jogging, and so on. I also suspect a person’s ideal ratio will change according to how much time they have. If there are only 20 hours in the next week, all of that should be sleep (sleeping is a goal – you aim to sleep when you go to bed); it would be crazy to try to have less sleep than that.


In this blog post I will be referring only to hardware pixels and how the software ought to calculate them.

The RGB system that is used universally is fundamentally flawed for one reason: it doesn’t allow you to state brightness. Suppose a new monitor is invented that can be arbitrarily bright – it could be as bright as the sun if you told it to be. With the RGB system there would be no reasonable way to control such a monitor. How should one tell this monitor to show red? Giving it 255,0,0 and having it show a normal red colour would mean that it could show no reds brighter than this, and hence the sun could never be displayed as brighter than red paint, since 255 is the limit of the RGB system.

Here’s how it should work: specify the wavelength of the colour you want and then the intensity (in lumens) of the light for the pixel to emit. You can now emit any colour at any intensity level. The monitor’s drivers would calculate how to turn a wavelength into an intensity for each of the red, green and blue sub-pixels. All visible wavelengths of light can be approximated by combining red, green and blue.
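
As a rough sketch of what such a driver might do (the piecewise wavelength-to-RGB approximation and all the names here are my own assumptions, not any real driver API):

```python
def wavelength_to_rgb(nm):
    # very rough piecewise approximation of spectral colours;
    # a real driver would use the panel's measured colour primaries
    if 380 <= nm < 440:
        return (440 - nm) / 60, 0.0, 1.0
    if 440 <= nm < 490:
        return 0.0, (nm - 440) / 50, 1.0
    if 490 <= nm < 510:
        return 0.0, 1.0, (510 - nm) / 20
    if 510 <= nm < 580:
        return (nm - 510) / 70, 1.0, 0.0
    if 580 <= nm < 645:
        return 1.0, (645 - nm) / 65, 0.0
    if 645 <= nm <= 780:
        return 1.0, 0.0, 0.0
    return 0.0, 0.0, 0.0

def drive_pixel(nm, lumens, panel_max_lumens):
    # scale the sub-pixel mix by the requested intensity, clamped to what the panel can emit
    scale = min(lumens, panel_max_lumens) / panel_max_lumens
    return tuple(round(255 * c * scale) for c in wavelength_to_rgb(nm))

print(drive_pixel(650, 5.0, 10.0))   # a red pixel at half of this panel's maximum output
```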

Mathematically speaking, the RGB system would be the same as the wavelength system except that there is a cap on how bright a pixel can be.

Most monitors allow you to change their brightness and contrast, amongst other things. I think this is bad; it’s a symptom of a larger problem. It should never be necessary. The reason it’s like this is that we haven’t yet come up with a good way of dealing with how the human eye changes in different scenarios.

If you have just been in a pitch-black room and the monitor lights up, it will appear very bright. If you’ve just been outside on a very sunny day, the monitor will appear very dim and you may even struggle to read it. As a GUI designer I can’t possibly account for this effect, because there is no input that tells me about the state of the user’s eyes. The reason the monitor would appear dark isn’t that the brain hasn’t yet adapted to the image; it’s that the iris has constricted the pupil, restricting the amount of light hitting the retina. If there were a camera pointing at the user’s eyes, this problem could be accounted for and solved with some nifty mathematical calculations to keep the rate of photons hitting the retina constant regardless of pupil size.
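
Those calculations amount to holding retinal illuminance constant: as the pupil shrinks, the screen has to emit proportionally more light. A toy sketch, with the target constant and names assumed purely for illustration:

```python
import math

def compensated_luminance(target_retinal_flux, pupil_diameter_mm):
    # light reaching the retina ~ screen luminance * pupil area, so luminance = target / area
    pupil_area = math.pi * (pupil_diameter_mm / 2) ** 2
    return target_retinal_flux / pupil_area

# a dilated 7 mm pupil needs far less screen light than a constricted 2 mm one
print(compensated_luminance(100.0, 7.0), compensated_luminance(100.0, 2.0))
```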

One problem with my proposed system is that some monitors wouldn’t be able to display certain brightnesses, like that of the sun. Resolving this issue would not be simple; mapping the brightnesses in the data to the range the monitor can display isn’t trivial. Should it just render every brightness beyond what can be displayed at the maximum it can manage, or should it also affect how the other brightnesses are displayed? For example, if the monitor can only display 10 lumens and the brightest value in the video about to be displayed is 20 lumens, it could divide all brightnesses by 2 to preserve the relative levels.
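
The two candidate strategies look roughly like this (a sketch only):

```python
def clamp(brightnesses, display_max):
    # strategy 1: everything above the panel's limit simply hits the ceiling
    return [min(b, display_max) for b in brightnesses]

def rescale(brightnesses, display_max):
    # strategy 2: divide everything down so the relative levels are preserved
    factor = min(1.0, display_max / max(brightnesses))
    return [b * factor for b in brightnesses]

frame = [1.0, 5.0, 20.0]        # lumens, with a 20-lumen highlight
print(clamp(frame, 10.0))       # [1.0, 5.0, 10.0]
print(rescale(frame, 10.0))     # [0.5, 2.5, 10.0]
```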

But the biggest problem with systems like this is latency. It would depend on the camera detecting the size of the eye’s pupil very quickly and acting on it very quickly. It’s not often that you see systems that take input from the real world and analyse dynamic data quickly. It takes programming talent to make systems like this work, not processor power.

So in summary, I have proposed a new system for how monitors could work.


There is a set of sets of characters. Your task is to produce an algorithm which outputs a set of characters by taking exactly one character from each of the sets, with each possible output having probability 1/(total possible outputs). There is one rule: once you select a character from a set, that character can’t be selected from any of the remaining sets. For example, with

{{A, B, C}, {A,B}}, if you choose A from the first set, the legal choices from the second set become {B} – choosing A would result in a collision, and this is not legal. It is assumed that there is always a solution – you never have to worry about there being no possible solution (so you will not encounter sets such as {{A},{A}}).

The potential combinations are {A,B}, {B,A}, {C,A}, {C,B}. (Note that {B,C} and similar are not possible since C is not in the 2nd set; the output is in the same order as the sets.)

You must make an algorithm which can pick these potential outputs with equal probability.

A naive and incorrect solution works as follows (a code sketch of these steps follows the list):

0. Create an empty set for the output.

1. Start at the first set of characters.

2. Generate a random number from 0 (inclusive) to the cardinality of the current set of characters (exclusive).

3. Select the character at that index and append it to the output set.

4. Remove that character from the remaining sets which have not yet had a character picked from them.

5. If the cardinality of the output set equals the cardinality of the set of sets, terminate. Otherwise, move to the next set of characters and go to step 2.
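
Here is that naive procedure as runnable Python (a sketch of the steps above; the frequency count at the end reproduces the skewed probabilities worked out below):

```python
import random
from collections import Counter

def naive_pick(sets):
    # WARNING: this is the flawed algorithm described above --
    # the order of the sets skews the output probabilities
    remaining = [set(s) for s in sets]
    output = []
    for i, s in enumerate(remaining):
        choice = random.choice(sorted(s))       # steps 2-3: uniform pick from the current set
        output.append(choice)
        for later in remaining[i + 1:]:         # step 4: remove it from the sets still to come
            later.discard(choice)
    return tuple(output)

print(Counter(naive_pick([{'A', 'B', 'C'}, {'A', 'B'}]) for _ in range(60000)))
```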

The problem with this is obvious: the order of the sets affects the probability of producing certain outputs, which makes it wrong. If the order is {A,B}, {A,B,C}, the probabilities of each permutation are as follows:

A B: 1/2 * 1/2 = 1/4

A C: 1/2 * 1/2 = 1/4

B A: 1/2 * 1/2 = 1/4

B C: 1/2 * 1/2 = 1/4

But if the order is {A,B,C}, {A,B}, the probabilities of output are:

A B: 1/3 * 1 = 1/3

B A: 1/3 * 1 = 1/3

C A: 1/3 * 1/2 = 1/6

C B: 1/3 * 1/2 = 1/6

As you can see, they don’t all have the same probability of being output. This is an error. What is the solution to this problem?


To assert that an entity has general intelligence would require you to know everything. It’s sort of like how you need to know the Earth’s entire surface area before you can say that someone has explored most of it.

Artificial intelligence has many people working together towards a “general intelligence”. What they are really doing is moving towards human-like intelligence, which is a much more accurate description of what they are working towards.

The strange thing about intelligence is that you cannot understand all the thoughts of an entity smarter than yourself. This means that a human could never intentionally create an AI greater than themselves; it would require some kind of randomness. If you keep generating AIs by producing random code, eventually you will produce something more intelligent than yourself.

Ants are smarter than humans when it comes to working in large crowds. Humans often crush themselves to death in large crowds. It’s very surprising, to say the least, that ants behave more intelligently than humans here, given that they have almost no brain compared to us. We can build fusion reactors and smash particles into each other at nearly the speed of light, but tasks that ants accomplish every day are too much for us.

Swarm intelligence is an intelligence we lack. When we’re being crushed to death by people running away from a fire in a crowded building, people scream, which only makes things harder. I’ve never really understood this response; screaming achieves nothing other than making it more difficult for those around you to communicate. (If you were in isolation, screaming would make sense.) Anyway, the simple rules that ants follow triumph over our neocortex. It reminds me of cellular automata, in which simple rules can create complex, beautiful behavior.

I strongly advise against using the phrase “general intelligence”; use “human-like intelligence” instead. Perhaps it will turn out that there’s a huge space of problems that we cannot comprehend, problems that are part of everyday life yet that we’re not even aware of. I sometimes get the idea that, just as we can only view a small part of the light spectrum (and the non-visible spectrum still has profound consequences for our bodies), perhaps our minds can only ‘see’ a very small space of problems.


The problem here is that when the view count is 0 the colour is grey – it should just be a lighter shade of yellow. The sudden shift from yellow at 1 to grey at 0 suggests there is a fundamental difference between 1 and 0. There isn’t, as I explained in my post on zero: https://emphatious.wordpress.com/2012/10/12/zero/. Wikipedia, in the following graph, correctly uses a colour to show a semantic difference: white is used to represent the fact that no information is available.

However, this graph isn’t without its own problems. Can you figure out what the problem is? It’s incredibly difficult to realize the problem as it is a very deep one. Think about colours.

The most obvious problem is that the graph does not use a gradient. Groups of GDP are lumped together for no reason. There should be no groups: instead of $800–$1600 being one colour, $900 should be a little more red than $850, $950 a bit more red than $900, and so on. They are essentially rounding figures for no reason. If you had a graph showing $11.53 million profit in Q1 and $12.42 million profit in Q2, would you round them? Rounding them serves no purpose. All it does is reduce accuracy… nothing is improved.
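
Roughly what a continuous mapping would look like (a toy linear interpolation between two colours; the endpoints and GDP range are arbitrary choices of mine, not Wikipedia’s actual palette):

```python
def gdp_to_colour(gdp, lo=500, hi=50000,
                  c_lo=(255, 245, 200), c_hi=(180, 0, 0)):
    # continuous interpolation from pale yellow to deep red -- no buckets, no rounding
    t = max(0.0, min(1.0, (gdp - lo) / (hi - lo)))
    return tuple(round(a + (b - a) * t) for a, b in zip(c_lo, c_hi))

print(gdp_to_colour(850), gdp_to_colour(900), gdp_to_colour(950))   # each a little redder than the last
```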

There is a much more subtle assumption made by this graph. It might not be wrong, but it isn’t explained. The assumption is that the utility of money increases logarithmically. Look at the groups: they increase logarithmically.

The designers of the graph took it upon themselves to change how the information is shown. Instead of something that’s 10x the size quantitatively being 10x brighter or bigger (10x more intense in some visually represented way) on the graphic, they’ve made it log_10 times brighter. That’s an arbitrary operation done without justification. I expect that if I were to ask the designer of the graphic why he did this, he would reply with the unforgivable line

“because if it weren’t like that, it’d be hard to tell the difference between countries with low income. For example, almost all of Africa would be very similar in colour.”

**GRAPHICS SHOULD NOT BE USED AS A WAY TO SHOW A LIST OF FACTS.**

If your goal is to let everyone know the exact GDP of countries, use a table not a graphic.

It is not a dilemma that people would see only a small difference in the colours of African countries *because there is only a small difference in the GDP of African countries*.

One last puzzle. Can you spot what’s wrong with this graph?

The problem is that, again, they’ve grouped things together unnecessarily. But more importantly, look at the way “Barely Dem” and “Barely GOP” are coloured. They differ from the others in that their centers are not coloured. This destroys the semantic meaning of the colours in the graph. Originally the colour alone represented the strength of the likelihood, but now both the colour AND how much white is in the center represent it, adding complexity without adding any information.

One last error in this graph is that they have changed the size of Alaska. This is only confusing. The graph is being warped by a designer who felt that the shape of Alaska was awkward to work with. This makes the graph entirely subjective – the designer could just as well have changed the shape of all the other states – and the integrity of the graphic is destroyed by this decision.


“To predict where the ball will go, and where the fielder should also go, would appear to require the solution of a rather overwhelming set of complex simultaneous equations. These equations need to be constantly recomputed as new visual data streams in. How does a ten-year-old Little Leaguer accomplish this, with no computer, no calculator, no pen and paper, having taken no calculus classes, and having only a few seconds of time? The answer is, she doesn’t. She uses her neural nets’ pattern-recognition abilities, which provide the foundation for much of skill formation.”

This assertion that she is not solving the equations is wrong. She is solving them, just in a way that doesn’t lend itself to being expressed in math. The solution to the simultaneous equations is in her brain, unable to be expressed by her. Some people struggle with expressing emotions; I argue that we also struggle to express the solutions of the complex equations we solve.

To call it pattern recognition is absurd. No two balls ever take the same path, so how can it be pattern recognition? Transforming the ball’s path into a previously seen pattern would be more computationally expensive than solving the simultaneous equations, and it wouldn’t allow us to predict the path of a ball in windy conditions, or when the ball has spin, or anything of the sort.

The conscious mind does not give us unrestricted access to the computations our neurons are doing. Why would anyone believe it could? You can’t access the part of your brain that’s telling your heart to beat or your stomach to digest food. We can only observe a tiny fraction of our minds and how they function. This makes many people uncomfortable, because they identify with “their” conscious mind and enjoy living in a fantasy world in which they have absolute conscious control over their decisions, fueled by the illusion that is free will. If I ask, “Why did you choose to play football instead of doing your homework?”, answering “I don’t know” leaves many unsatisfied – as though not knowing were a reason not to have done it. There is a lack of consistency here: if people were asked how they walk (and to express it in computer language or math) they would be unable to, yet one wouldn’t expect that person to then stop walking.

I don’t fear doing something without the reason for it becoming consciously apparent to me, and neither should you. Otherwise you’re blocking your own intuition and your greatest weapon – the unconscious (subconscious) mind. A recent article I read on this subject was about crows solving problems without planning their actions. I’m going to interpret the results differently. Doing things on the fly is called improvisation, and I think that what we call planning is actually just improvisation in a virtual world that we have constructed by imagining it. In other words, the same unconscious processes that solve the problem in the real situation are at work in the imagined one. It’s just that we store the improvisations we made in the virtual world in our memory, so that once we encounter the real scenario we already have a head start, having spent time improvising the answers. It is not that when we plan things our conscious mind is overseeing the whole thing.

Now one might say that my point is wrong because the conscious mind is the one that solves improvisation problems, but this is not true. I see it as the unconscious mind solving problems and then that solution rising up (“falling down” is perhaps the language I should have used) to the level of consciousness. The conscious mind then swiftly takes credit for solving the problem. It reminds me somewhat of the quote “good artists copy, great artists steal”. This quote was brought to my mind for unknown reasons, but I still feel it’s relevant. I don’t need to be able to express a justification for everything I do in order to be satisfied with it.

I’m guilty of separating the brain into conscious and unconscious parts. This isn’t quite right. Our brain is just one big mush of neurons, and somehow it works; splitting it up in this way isn’t going to be very successful.
