The Inadequacy of the RGB system

Pixel is a poorly defined term. It can mean a hardware pixel, one of the physical elements your monitor is made up of, or a software pixel, one of the elements a bitmap is made up of. Wikipedia admits as much in its first sentence: “In digital imaging, a pixel, or pel, is a physical point in a raster image, or the smallest addressable element in a display device”. The article should be split into software pixel and hardware pixel to avoid confusion.

In this blog post I will be referring only to hardware pixels and how the software ought to calculate them.

The RGB system in universal use is fundamentally flawed for one reason: it doesn’t allow you to state brightness. Suppose a new monitor is invented that can be arbitrarily bright – as bright as the sun if you told it to be. With the RGB system there would be no reasonable way to control such a monitor. How should one tell it to show red? If giving it 255,0,0 produced a normal red, then it could show no red brighter than that, and hence the sun could never be displayed as brighter than red paint, because 255 is the limit of the RGB system.

Here’s how it should work: specify the wavelength of the colour you want and the intensity (in lumens) of the light for the pixel to emit. You can then emit any colour at any intensity level. The monitor’s drivers would calculate how to turn that wavelength into an intensity for each of the red, green and blue sub-pixels, since all visible wavelengths of light can be composed from red, green and blue.
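To make that concrete, here is a minimal sketch of what the driver might do, assuming a commonly used piecewise-linear approximation for turning a wavelength into relative red, green and blue levels. The breakpoints are rough, and `pixel_command` with its straight multiplication by lumens is my own illustrative simplification, not a colorimetrically exact conversion:

```python
def wavelength_to_rgb(wavelength_nm):
    """Rough relative drive levels (0.0-1.0) for the red, green and blue
    sub-pixels, from a piecewise-linear approximation of the visible spectrum."""
    w = wavelength_nm
    if 380 <= w < 440:
        return (440 - w) / (440 - 380), 0.0, 1.0   # violet: blue with a little red
    if 440 <= w < 490:
        return 0.0, (w - 440) / (490 - 440), 1.0   # blue to cyan
    if 490 <= w < 510:
        return 0.0, 1.0, (510 - w) / (510 - 490)   # cyan to green
    if 510 <= w < 580:
        return (w - 510) / (580 - 510), 1.0, 0.0   # green to yellow
    if 580 <= w < 645:
        return 1.0, (645 - w) / (645 - 580), 0.0   # yellow to red
    if 645 <= w <= 780:
        return 1.0, 0.0, 0.0                       # red
    return 0.0, 0.0, 0.0                           # outside the visible range


def pixel_command(wavelength_nm, intensity_lumens):
    """Combine hue (wavelength) and brightness (lumens) into per-sub-pixel
    output; a real driver would map these onto actual emitter behaviour."""
    r, g, b = wavelength_to_rgb(wavelength_nm)
    return r * intensity_lumens, g * intensity_lumens, b * intensity_lumens


print(pixel_command(650, 20.0))   # a red pixel at 20 lumens -> (20.0, 0.0, 0.0)
```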

Mathematically speaking, the RGB system is the same as the wavelength system except that it puts a cap on how bright a pixel can be.

Most monitors allow you to change their brightness and contrast, amongst other things. I think this is bad: it’s a symptom of a larger problem, and it should never be necessary. Things are like this because we haven’t yet come up with a good way of dealing with how the human eye changes in different scenarios.

If you have just been in a pitch-black room and the monitor lights up, it will appear very bright. If you’ve just been outside on a very sunny day, the monitor will appear very dim and you may even struggle to read it. As a GUI designer I can’t possibly account for this effect, because there is no input that tells me what state the user’s eyes are in. The reason the monitor appears dark isn’t that the brain hasn’t yet adapted to the image; it’s that the eye’s iris sphincter restricts the amount of light hitting the retina. If there were a camera pointing at the user’s eyes, this problem could be accounted for and solved with some nifty mathematical calculations that keep the rate of photons hitting the retina constant regardless of the size of the pupil.
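As an illustrative sketch of those calculations, the compensation could keep the product of screen luminance and pupil area constant. The baseline pupil diameter and reference luminance below are assumed numbers, not measured ones:

```python
import math

REFERENCE_PUPIL_DIAMETER_MM = 4.0   # assumed "comfortable" baseline pupil size
REFERENCE_LUMINANCE = 1.0           # luminance that looks right at that baseline (arbitrary units)


def pupil_area(diameter_mm):
    """Area of the pupil, treated as a circle."""
    return math.pi * (diameter_mm / 2.0) ** 2


def compensated_luminance(measured_pupil_diameter_mm):
    """Scale the display so that luminance x pupil area stays constant:
    a wide, dark-adapted pupil gets a dimmer screen, and a constricted,
    sun-adapted pupil gets a brighter one."""
    ratio = pupil_area(REFERENCE_PUPIL_DIAMETER_MM) / pupil_area(measured_pupil_diameter_mm)
    return REFERENCE_LUMINANCE * ratio


print(compensated_luminance(2.0))   # just in from a sunny day: drive the screen 4x brighter
print(compensated_luminance(8.0))   # just out of a dark room: dim it to a quarter
```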

One problem with my proposed system is that some monitors wouldn’t be able to display some brightnesses, such as that of the sun. Resolving this issue would not be simple: mapping the brightnesses in the data onto the range the monitor can display isn’t trivial. Should it simply show every brightness beyond its maximum at that maximum, or should the excess affect how the other brightnesses are displayed? For example, if the monitor can only display 10 lumens and the brightest value in the video about to be displayed is 20 lumens, it could divide all brightnesses by 2 to preserve their relative levels.
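Here is a small sketch contrasting the two approaches, clamping versus scaling by a common factor; the function names and the example frame are made up for illustration:

```python
def clamp_to_display(brightnesses, display_max):
    """Clipping: anything above the display's maximum is shown at the maximum.
    Differences among the brightest values are lost."""
    return [min(b, display_max) for b in brightnesses]


def scale_to_display(brightnesses, display_max):
    """Scaling: divide everything by one factor so the brightest value just
    fits. Relative levels are preserved, but the whole image gets dimmer."""
    peak = max(brightnesses)
    if peak <= display_max:
        return list(brightnesses)
    return [b * display_max / peak for b in brightnesses]


frame = [1, 5, 10, 20]                # lumens requested by the content
print(clamp_to_display(frame, 10))    # [1, 5, 10, 10]
print(scale_to_display(frame, 10))    # [0.5, 2.5, 5.0, 10.0]
```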

But the biggest problem with systems like this is latency. It would depend on the camera detecting the size of the eye’s aperture very quickly and on the compensation being calculated very quickly. It’s not often that you see systems that take input from the real world and analyse that dynamic data quickly. It takes programming talent to make systems like this work, not processor power.

So in summary, I have proposed a new system for how monitors could work.
