About making pictures - RGB vs CMYG spectrum - by Waeshael

 

Sensors

The silicon semiconductor used in digital cameras can capture wavelengths beyond the range of human vision. Here you see a chart showing that the eye responds to wavelengths from about 450 nm to 650 nm, while silicon responds from 200 nm to 1100 nm, i.e. from ultraviolet to infrared.

A good lens passes these UV and IR wavelengths on to the camera sensor.

But in front of the sensor is a glass filter that absorbs the UV and the IR. In RGB Bayer-filter cameras (most cameras today) there is a color filter array that absorbs everything but the red, green, and blue wavelengths (i.e. yellow, magenta, and cyan are stopped by this filter array). Each photosite behind the array records only a grayscale intensity; full color is then interpolated (demosaiced) from this grayscale mosaic in the camera.
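As a rough sketch of what that interpolation does (an illustrative simplification, not any camera maker's actual pipeline), the following simulates an RGGB Bayer mosaic and recovers green at the red/blue photosites by averaging the neighbouring green sites:

```python
# Simulate Bayer capture: the sensor stores ONE grayscale value per pixel,
# then green is estimated at red/blue sites from its four neighbours.
import numpy as np

def make_bayer_mosaic(rgb):
    """Simulate an RGGB Bayer filter: keep one channel per photosite."""
    h, w, _ = rgb.shape
    mosaic = np.zeros((h, w))
    mosaic[0::2, 0::2] = rgb[0::2, 0::2, 0]  # R sites
    mosaic[0::2, 1::2] = rgb[0::2, 1::2, 1]  # G sites
    mosaic[1::2, 0::2] = rgb[1::2, 0::2, 1]  # G sites
    mosaic[1::2, 1::2] = rgb[1::2, 1::2, 2]  # B sites
    return mosaic

def interpolate_green(mosaic):
    """Bilinear estimate of green at R/B sites from the 4 neighbours."""
    h, w = mosaic.shape
    green = mosaic.copy()
    for y in range(h):
        for x in range(w):
            if (y % 2) == (x % 2):  # R or B site in an RGGB layout
                neighbours = [mosaic[ny, nx]
                              for ny, nx in ((y - 1, x), (y + 1, x),
                                             (y, x - 1), (y, x + 1))
                              if 0 <= ny < h and 0 <= nx < w]
                green[y, x] = sum(neighbours) / len(neighbours)
    return green

rgb = np.dstack([np.full((4, 4), 0.2),   # R
                 np.full((4, 4), 0.5),   # G
                 np.full((4, 4), 0.8)])  # B
mosaic = make_bayer_mosaic(rgb)
print(interpolate_green(mosaic))         # every pixel recovers G = 0.5
```

Real cameras use far more sophisticated demosaicing, but the principle is the same: two thirds of the color data at each pixel is guessed from neighbours, never measured.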

Some cameras allow the IR glass filter to be removed so that the photographer can make IR pictures. Though the eye is usually said to respond only to the visible spectrum, it is in fact somewhat sensitive to UV and IR.

(Regular eyeglasses usually have a UV coating to prevent UV light from damaging the eye, and sunglasses of course do the same thing. Welders have to protect their eyes from the UV rays produced by welding. So it is fair to say that the eye responds to light outside the visible spectrum.)

Below left is a plot of the colors included in sRGB, which can be displayed on a standard monitor. The curved region around it represents the gamut of colors that the eye can see but which cannot be displayed on a monitor. Many of these colors can be captured by film cameras using Kodachrome and Ektachrome film. The light is recorded by the film in several layers, which, as the film is processed, are dyed with cyan, magenta, and yellow.

On the right is a profile called Ektaspace PS 5.0 J. Holmes, which captures the colors of a typical Ektachrome slide taken of a landscape. You can see that there are more reds and deeper blues than in the sRGB gamut. Of course you can't see these additional colors on the screen, because of the monitor's sRGB gamut, but you can still manipulate them in the editor if you work in LCH mode (NX2 software).

This information is from Rob Galbraith, describing the Kodak DCS 620 camera with its CMYG sensor:

This chart shows both an RGB and a CMY CCD response. The chart was made by combining data from the two different CCD systems. Note the green line, which is the green response of the RGB system. Compare it to the yellow line, which is the yellow response of the CMY system. You should notice two attributes.

First, the yellow curve is much larger in overall amplitude (approximately 37% yellow vs. 20% green). This occurs because of the reduction in unwanted absorption mentioned earlier. Second, you should notice that the area "under the curve" for the yellow signal is much larger than for the green. Notice that yellow starts rising at about 500 nm and stops responding at 700 nm, while the green signal, though it also starts at 500 nm, ends at around 600 nm.

If you were to shine a light source that has constant energy at all wavelengths onto the CCD, the yellow signal would be about twice as large as the green signal. This occurs because the yellow signal integrates more light compared to the green signal. Since in either case the noise in the underlying CCD is constant, we also have larger signal-to-noise ratio (SNR) in the CMY system. This larger signal provides higher effective ISO, and the increased SNR provides for more ISO range.
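The factor-of-two claim above can be checked numerically. The triangular response curves below are idealisations chosen only to match the quoted passbands (500–600 nm for green, 500–700 nm for yellow); they are not measured CCD data:

```python
# With a flat-energy source, a channel responding over 500-700 nm collects
# about twice the signal of one responding over 500-600 nm. Since the CCD
# noise floor is the same either way, SNR rises with the collected signal.
import numpy as np

wavelengths = np.arange(400, 701)         # 1 nm steps across the visible band
flat_source = np.ones(wavelengths.shape)  # equal energy at every wavelength

def band_response(wl, start, stop):
    """Idealised triangular response: rises from `start`, peaks midway,
    falls back to zero at `stop` (an illustration, not measured data)."""
    mid = (start + stop) / 2
    half = (stop - start) / 2
    return np.clip(1.0 - np.abs(wl - mid) / half, 0.0, None)

green = band_response(wavelengths, 500, 600)   # responds 500-600 nm
yellow = band_response(wavelengths, 500, 700)  # responds 500-700 nm

g_signal = (green * flat_source).sum()
y_signal = (yellow * flat_source).sum()
print(y_signal / g_signal)                     # ≈ 2: yellow integrates twice the light
```

The ratio depends only on the area under each curve, which is why the "area under the curve" comparison in the quoted text translates directly into signal and SNR.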

Left: sRGB color space vs. human vision. Right: Ektachrome color space vs. human vision.

The curved background represents the human color gamut; the triangles show which colors are included in the various color spaces.
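Those triangles also give a simple way to test whether a chromaticity is displayable: a color lies inside the sRGB gamut exactly when its CIE xy coordinates fall inside the triangle of the sRGB primaries. A small sketch, with primary coordinates taken from the sRGB standard (IEC 61966-2-1):

```python
# Point-in-triangle test behind those gamut plots: a chromaticity is in
# the sRGB gamut iff it lies inside the triangle of the sRGB primaries.
SRGB_PRIMARIES = [(0.64, 0.33), (0.30, 0.60), (0.15, 0.06)]  # R, G, B in CIE xy

def cross(o, a, b):
    """2D cross product: which side of line o->a the point b falls on."""
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def in_srgb_gamut(xy, tri=SRGB_PRIMARIES):
    """True if chromaticity xy falls inside the primaries' triangle."""
    signs = [cross(tri[i], tri[(i + 1) % 3], xy) for i in range(3)]
    return all(s >= 0 for s in signs) or all(s <= 0 for s in signs)

print(in_srgb_gamut((0.3127, 0.3290)))  # D65 white point -> True
print(in_srgb_gamut((0.10, 0.80)))      # highly saturated green -> False
```

The second point is a color the eye can see (it is inside the curved horseshoe) but no sRGB monitor can show; a wider-gamut space simply uses a bigger triangle.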

Why is CMYG not being used today for digital cameras?

It is all to do with manufacturing profit. In 2002 a 4 MP CMYG camera cost $1,000 (in 2015 dollars); imagine what a 16 MP CMYG camera would have cost.

Today a 16 MP RGB camera sells for less than $250.


The argument for RGB was that, because the brain can recreate any color from various amounts of R, G, and B, you can fool it into seeing yellow even though no wavelength representing yellow was captured by the camera. This is an argument from the 19th century that has stuck, and it has been very convenient for the manufacture of cheap monitors and cameras.
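A toy illustration of that argument: with idealised (purely hypothetical, triangular) R, G, and B response curves, a pure 580 nm yellow and a mixture of 560 nm and 610 nm light excite the sensor identically, even though the mixture contains no energy at 580 nm at all:

```python
# Metamerism sketch: two physically different spectra that produce the
# same R, G, B responses, so sensor and brain cannot tell them apart.
# The band centres/widths and the chosen wavelengths are illustrative only.
def resp(wl, center, halfwidth):
    """Idealised triangular spectral response of one color channel."""
    return max(0.0, 1.0 - abs(wl - center) / halfwidth)

def sensor_rgb(spectrum):
    """Sum a spectrum (list of (wavelength nm, power)) over R, G, B bands."""
    bands = {"R": (625, 75), "G": (525, 75), "B": (450, 50)}
    return {name: sum(p * resp(wl, c, hw) for wl, p in spectrum)
            for name, (c, hw) in bands.items()}

pure_yellow = [(580, 1.0)]                   # one monochromatic yellow line
red_green_mix = [(560, 0.5), (610, 5 / 12)]  # no energy at 580 nm at all

print(sensor_rgb(pure_yellow))
print(sensor_rgb(red_green_mix))  # same response: both are seen as "yellow"
```

This is exactly the trick an RGB monitor plays: it never emits yellow light, only a red/green mixture that lands on the same sensor (and retinal) response.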

But today we know that the eye/brain is a lot more complex in the way it makes color, and some manufacturers are making sensors that capture a much wider light spectrum, closer to what the eye can capture. Expect to have to buy new monitors to see those colors, but don't expect web browsers to be able to display them.

You can buy monitors with a wider gamut; Adobe RGB (1998) is one such gamut. But these monitors are no good for looking at the web unless your browser is color-managed, because sRGB images shown without color management do not display correctly on them.


Printers can't produce a wider range of colors than the color space you are editing in, but paper and ink can reproduce colors that you cannot see on the monitor, so you should send the printer an image in a much wider color space.


The best way to capture more colors is to use film, which is what many movie producers are doing today. In fact, Kodak is still making film, though only for the movie industry; a small group of camera buffs are repackaging Kodak movie film into 35 mm cartridges.


Digital camera people can get better color by buying an older CMYG-sensor camera.