How an Image is Made, by Waeshael

 

The light that touches the sensor makes electrons in the material move from one location to another, and this is detected by the electronics in the camera. The number of electrons at each specific location (under each area of the filter array) is counted, and the total count (during the selected exposure time) is recorded temporarily. At the end of the exposure there will exist millions of these “counts,” which will be converted into a digital image by a very clever process that each camera manufacturer keeps secret.
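Here is a minimal sketch of that counting stage, with invented numbers: the photon rate, quantum efficiency, and saturation point below are illustrative assumptions, not any real camera’s specifications.

```python
import numpy as np

# Hypothetical 4x4 patch of photosites. Photon arrival is random, so we
# draw counts from a Poisson distribution around an assumed mean rate.
rng = np.random.default_rng(seed=1)
photons = rng.poisson(lam=500, size=(4, 4))      # photons per site this exposure

quantum_efficiency = 0.5     # assumed fraction of photons that free an electron
electrons = photons * quantum_efficiency         # the "count" at each site

# The analog-to-digital converter maps each count to a raw digital number
# (12-bit here, so 0..4095). 2000 electrons is an assumed saturation point.
full_well = 2000
raw = np.clip(electrons / full_well, 0.0, 1.0) * 4095
print(raw.astype(np.uint16))
```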


This digital image cannot be seen by the eye. What happens next is that the digital image is sent to a computer, which sends electrical signals to a monitor to stimulate the light-emitting diodes (LEDs) embedded in the screen material, and these LEDs then produce light of varying wavelengths and quantities which the eye responds to. The wavelengths the eye receives from the monitor are nothing like those from the original scene, for the monitor emits no wavelength for yellow light, nor for many other colors. All that can be sent from the monitor are wavelengths from the blue, green, and red parts of the sunlight spectrum.

But what scientists have discovered is that you can fool the brain into seeing yellow by sending a mix of red and green wavelengths. The brain sees this mix and creates a sort of yellow. It is not the same as the yellow in the scene that the eye views directly, and if you put the monitor outside and viewed the scene at the same time you would notice the difference. The same is true of colors like fire-engine red and deep saturated blues. Even the colors in the sky, typically cyan near the horizon, will look more blue on the monitor. There is not much you can do to make the colors on the monitor match the colors in the scene.
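You can demonstrate the trick directly. The short sketch below builds an image patch whose pixels contain only red and green; viewed on a monitor it looks yellow, even though no yellow wavelength is ever emitted. (It assumes the third-party Pillow library is installed.)

```python
import numpy as np
from PIL import Image   # third-party Pillow library, assumed installed

# A 100x100 patch whose pixels emit only red and green light - no blue,
# and certainly no yellow wavelength.
patch = np.zeros((100, 100, 3), dtype=np.uint8)
patch[..., 0] = 255   # red at full strength
patch[..., 1] = 255   # green at full strength

# On a monitor this file displays as a yellow square: the brain fuses
# the red and green wavelengths into a color that was never emitted.
Image.fromarray(patch).save("yellow_from_red_and_green.png")
```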

But you can make color prints that look more like the scene, because the ink colors available in the printer can produce more wavelengths than the monitor can.
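To see how ink amounts relate to the monitor’s red, green, and blue values, here is the simple textbook RGB-to-CMYK conversion. Real printer drivers use measured color profiles far more sophisticated than this; the formula only shows the basic relationship between the two color models.

```python
def rgb_to_cmyk(r, g, b):
    """Naive, idealized conversion from RGB (0..1) to CMYK ink amounts (0..1).

    Real printer drivers use measured ICC color profiles; this formula
    only illustrates the basic relationship between the color models.
    """
    k = 1.0 - max(r, g, b)          # black ink covers the common darkness
    if k == 1.0:                    # pure black needs no colored ink
        return 0.0, 0.0, 0.0, 1.0
    c = (1.0 - r - k) / (1.0 - k)
    m = (1.0 - g - k) / (1.0 - k)
    y = (1.0 - b - k) / (1.0 - k)
    return c, m, y, k

print(rgb_to_cmyk(1.0, 1.0, 0.0))   # monitor "yellow" -> (0, 0, 1, 0): pure yellow ink
```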


A long time ago, professional cameras, both still and movie, used different color filters in front of the chip in order to capture more of the colors in the scene. More than twice the number of wavelengths were captured using filters of cyan, yellow, magenta, and green (a CMYG filter array). So the chip at least “saw” these additional wavelengths. But somewhere in the process the computer had to convert the CMYG image to an RGB image in order to display it on a monitor. Those cameras were able to keep more of the yellow “color” from the original scene, as well as the cyan and magenta (all natural colors in landscapes), and you can see this in the images. All cameras with this CMYG filter array produce more natural and more vibrant landscapes.
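A sketch of that CMYG-to-RGB step, assuming the idealized textbook relations C = G + B, M = R + B, and Y = R + G. Real CMYG cameras used measured color matrices that were never published, so this is a teaching approximation, not any camera’s actual conversion.

```python
import numpy as np

def cmyg_to_rgb(c, m, y, g):
    """Idealized CMYG-to-RGB conversion.

    Assumes the textbook relations C = G + B, M = R + B, Y = R + G.
    Real CMYG cameras used measured color matrices, not this sketch.
    """
    r = (y + m - c) / 2.0
    b = (c + m - y) / 2.0
    g_derived = (y + c - m) / 2.0          # second estimate of green
    g_out = (g + g_derived) / 2.0          # average the two estimates
    return np.clip([r, g_out, b], 0.0, 1.0)

# A photosite group lit by yellowish light: strong Y, weak C and M.
print(cmyg_to_rgb(c=0.2, m=0.2, y=1.0, g=0.6))   # -> strong red and green, no blue
```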


In the following pages I am going to be talking a lot about cameras that use CMYG sensors.

But the first camera I will mention has an RGB filter in a very special arrangement, which gives it some benefits over the more common RGBG “Bayer filter array.”


1: Sunlight illuminates the scene with a wide band of energies which bounce off objects and reach our eyes.

There is no color in sunlight. The rods in the eye are insensitive to the energy of the light and respond only to the quantity of light. The cones in the eye, on the other hand, are sensitive to particular wavelengths (or energies) of light, and each cone transfers a particular electrical signal to the brain via the optic nerve. The brain then paints in various colors and makes a picture in our head. We have no way of knowing whether this picture represents reality, but we have come to assume that it does, and we respond accordingly with our bodily movements - we may smile, or be frightened, or turn and run. Or we can shut our eyes and the scene will go away in our head. When we sleep and dream, the physical world goes away and is replaced by another world created by our mind.


2: The camera lens receives the same band of energies but transmits only some of them to the interior of the camera.

Which energies come through depends on the type of glass and the type of anti-reflective coatings applied to the glass. Some glass makes “warmer” pictures than other glass. The glass also distorts the path that the light traces through the lens, so some of the light is misdirected onto the sensor and appears to come from a different part of the scene. This is like astigmatism in normal eyesight - we see “stars” around points of light, and blurring of some objects.

So here comes the first distortion of “reality,” caused by the lens design. The most expensive lenses distort the least. The smaller the aperture, the less the distortion (though below a certain aperture size another distortion, called diffraction, takes over).
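A rough way to find that crossover point: the diffraction blur (the Airy disk) grows with the f-number, and a common rule of thumb flags trouble once the disk spans about two pixels. The wavelength and pixel pitch below are assumptions chosen for illustration.

```python
# The Airy disk (diffraction blur) diameter is about 2.44 * wavelength * f-number.
wavelength_um = 0.55     # green light, middle of the visible band
pixel_pitch_um = 4.0     # an assumed typical pixel pitch; varies by camera

for f_number in (2.8, 5.6, 8, 11, 16, 22):
    airy_um = 2.44 * wavelength_um * f_number
    verdict = "diffraction visible" if airy_um > 2 * pixel_pitch_um else "sharp"
    print(f"f/{f_number:<4}  Airy disk {airy_um:5.1f} um  {verdict}")
```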


At the exit pupil of the lens, the camera has a filter to remove ultraviolet frequencies, and some cameras remove infrared frequencies also. We usually don’t notice these wavelengths in a scene, but the eye does detect them - we use UV-filtering glasses when we go outside to prevent the UV reaching the eye and damaging it. After this UV/IR filter is a set of micro-lenses whose purpose is to direct light from one part of the scene and merge it with the light from another part, because the sensor design is such that there is no sensitive material on at least 25% of the chip, and light falling on those areas would be lost. That is, light from a point in the scene that would fall on a dead spot of the sensor could never appear in the final image, so it is squished sideways to fall on a sensitive area. The eye doesn’t do this. So here is the second distortion taking place - the misalignment of the original scene data for the purpose of capturing more light and making the camera more “sensitive.”
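A quick back-of-the-envelope figure for what is at stake, taking the “at least 25% dead area” above at face value:

```python
import math

# If 25% of the chip is dead and there were no micro-lenses,
# a quarter of the light would simply be lost.
fill_factor = 0.75
loss_in_stops = math.log2(1 / fill_factor)
print(f"Light lost without micro-lenses: about {loss_in_stops:.2f} stops")
# -> about 0.42 stops, which the micro-lenses recover by redirecting light
```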


Following the micro-lenses is a filter array that covers the entire chip. This allows only certain frequencies of light to reach the chip. In modern cameras only one third of the light frequencies ever reach the chip (sensor). Two thirds of the frequencies (such as those that make yellow in the brain) are filtered out. This is the third distortion that occurs in the camera system.
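This is why every camera must run a “demosaic” step that estimates the two missing colors at each photosite from its neighbors. The sketch below is the simplest textbook method, bilinear interpolation over an RGGB Bayer mosaic; it is a teaching approximation, not any manufacturer’s secret process. (It assumes NumPy and SciPy are installed.)

```python
import numpy as np
from scipy.ndimage import convolve    # third-party SciPy, assumed installed

def bilinear_demosaic(raw):
    """Simplest textbook demosaic of an RGGB Bayer mosaic.

    `raw` is a 2-D array of photosite values; each site recorded only one
    of R, G, B. The two missing colors at each site are averaged from the
    nearest neighbors that did record them.
    """
    h, w = raw.shape
    rows, cols = np.mgrid[0:h, 0:w]
    r_mask = (rows % 2 == 0) & (cols % 2 == 0)    # R on even row, even column
    b_mask = (rows % 2 == 1) & (cols % 2 == 1)    # B on odd row, odd column
    g_mask = ~(r_mask | b_mask)                   # G everywhere else

    k_rb = np.array([[0.25, 0.5, 0.25],           # weights for the sparse R and B
                     [0.5,  1.0, 0.5 ],
                     [0.25, 0.5, 0.25]])
    k_g = np.array([[0.0,  0.25, 0.0],            # weights for the denser G
                    [0.25, 1.0,  0.25],
                    [0.0,  0.25, 0.0]])

    out = np.zeros((h, w, 3))
    for ch, mask, k in ((0, r_mask, k_rb), (1, g_mask, k_g), (2, b_mask, k_rb)):
        sparse = np.where(mask, raw, 0.0)
        weight = convolve(mask.astype(float), k, mode="mirror")
        out[..., ch] = convolve(sparse, k, mode="mirror") / weight
    return out

# Demosaic an 8x8 mosaic of random photosite values -> full-color (8, 8, 3).
print(bilinear_demosaic(np.random.default_rng(0).random((8, 8))).shape)
```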


By now, you may be wondering how on earth we can make an image that looks anything like the original scene.
