About making pictures - Sensors, by Waeshael


Sensors

The sensor is a piece of silicon housing millions of light-sensitive elements, each responsive to part of the visible spectrum and each generating a charge that is linearly related to the number of photons that land on it. When the shutter opens, photons of light strike these elements until the exposure is stopped.
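To make that linearity concrete, here is a minimal sketch in Python; the quantum efficiency and full-well capacity below are illustrative assumptions, not the specs of any particular sensor:

```python
import numpy as np

def expose(photons, quantum_efficiency=0.5, full_well=30000):
    """Model a photosite's linear response: electrons collected are
    proportional to incident photons, clipped at the full-well capacity."""
    electrons = quantum_efficiency * np.asarray(photons, dtype=float)
    return np.minimum(electrons, full_well)  # highlights clip at full well

# Doubling the photon count doubles the signal, until the well saturates.
print(expose([1000, 2000, 100000]))  # -> [  500.  1000. 30000.]
```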

Sensors vary in area from about 1/4 inch square to 1 inch by 1.5 inches.

On the NEX-5 the size is 23.4 x 15.6 mm. Each area of the sensor that responds to light is called a photosite, and on the NEX-5 there are about 15 million of them, of which roughly 800,000 are used to measure camera performance and 14.2 million are used to create the picture. There are gaps between the photosites: light that misses a photosite is absorbed into the circuitry and is not available to contribute to the image. If you could look at the photosites you would see something like a newspaper picture under a magnifying glass - lots of dots. Different sensor designs have different structures of photosites, and some are more efficient at collecting light than others. In some cameras less than 20% of the light that comes through the lens is actually used to recreate the scene. The camera's software must reconstruct the original scene from what little light has been captured.
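A quick back-of-the-envelope calculation, using the NEX-5 figures above, gives the approximate spacing of the photosites (a sketch that treats each site as a square cell and ignores the gaps within it):

```python
# NEX-5 sensor: 23.4 x 15.6 mm, ~15 million total photosites
sensor_w_mm, sensor_h_mm = 23.4, 15.6
total_sites = 15_000_000

area_per_site_mm2 = (sensor_w_mm * sensor_h_mm) / total_sites
pitch_um = (area_per_site_mm2 ** 0.5) * 1000  # side of a square cell, in microns

print(f"{pitch_um:.1f} microns per photosite")  # ~4.9 microns
```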

Where there are gaps in the image, the camera looks at data from adjacent sites to determine what color might have been there; the more photosites there are, the better it can guess. Some color filter arrays are designed to spill light from one area onto adjacent areas. SONY has developed a filter array that prevents light from spilling into adjacent areas, to increase the efficiency of demosaicing, the process that reconstructs the scene. Each manufacturer uses a different process. You have little control over how this reconstruction is done, except that if you use a RAW software program, you can choose different demosaicing methods offered by the image editing software - different from what the designer intended - which may be an improvement for a particular purpose. For example, the Fuji professional cameras have twice as many photosites as advertised, and these extra sites are used by Fuji to improve highlight quality, but only if the demosaicing is done by factory-approved software. Using Fuji software or Silkypix, you can decide how these additional data will be used: to extend the dynamic range or to increase resolution. If the scene is bright and the extra photosites receive enough light, these data can be used to increase resolution from 11 MP to 22 MP.
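As a rough illustration of the neighbor-averaging idea, here is a minimal bilinear demosaic sketch assuming a standard RGGB Bayer layout; real in-camera and RAW-converter algorithms are far more sophisticated, and the function name and layout here are assumptions for the example:

```python
import numpy as np
from scipy.signal import convolve2d

def bilinear_demosaic(raw):
    """Minimal bilinear demosaic for an RGGB Bayer mosaic: each channel
    value is the average of the known samples of that color in the
    pixel's 3x3 neighborhood (including the pixel itself, if present)."""
    h, w = raw.shape
    rgb = np.zeros((h, w, 3))
    # Which pixels carry which color in an RGGB layout
    r_mask = np.zeros((h, w), bool); r_mask[0::2, 0::2] = True
    b_mask = np.zeros((h, w), bool); b_mask[1::2, 1::2] = True
    g_mask = ~(r_mask | b_mask)
    kernel = np.ones((3, 3))
    for c, mask in enumerate((r_mask, g_mask, b_mask)):
        samples = convolve2d(np.where(mask, raw, 0.0), kernel, mode="same")
        counts = convolve2d(mask.astype(float), kernel, mode="same")
        rgb[..., c] = samples / np.maximum(counts, 1.0)  # mean of neighbors
    return rgb
```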

There is a standard sensor array called a Bayer filter array whose structure is well known, so there is a standardized way to demosaic the data. But in the case of SONY, FUJI, and FOVEON, the sensor filter array is proprietary, and it is not wise to mess with the demosaicing method. Third parties like Adobe (Lightroom and Photoshop), DxO, and ISL (Silkypix) have attempted to reverse engineer the software so that they could provide a way to do demosaicing; in some cases the camera manufacturers provided the key to Adobe (Canon and Nikon, etc.). Different software demosaics the same data into different-looking images, and it is difficult to make them match.


Sensors (cont.)

There are some other differences in sensor design that affect the final image.

First there is a difference in the method of collecting the light. One is the Charge Coupled Device (CCD), which accounted for the majority of sensors until the phone revolution; the second is CMOS (Complementary Metal Oxide Semiconductor), which is now all the rage because it is cheaper and requires less power than a CCD - important for phones. There are pluses and minuses for each device - let's not discuss this. We are all going to be married to the CMOS device due to manufacturing costs.


Now there is also a difference in the optical filters in front of the sensor. The filters divide up the light signal by filtering out all light except that of a specific color. The most common filter array consists of filters for red, green, and blue light. But some filters were made for cyan, magenta, yellow, and green light - many cine cameras and digital cameras in 2000-2003 used this type of filter, and you can still buy cameras with this type of filter array, which was installed in SONY, Canon, Toshiba, Leica, Casio, and Lumix models.

The Lumix DMC-LC5 I use and the Digilux 1 both have CMYG filter arrays.

The immediate benefit is that each cyan, magenta, or yellow filter passes two of the three primary bands instead of one, so the sensor not only gets roughly twice as much light, it gets light from more of the scene (more "colors"). When you compare images from CMYG sensors with those from RGBG sensors you see the increased reds and blues of the CMYG sensor. When you make pictures with a camera with this filter array you will be impressed with how good fall colors look.
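Idealized, the complementary filters relate to the primaries as C = G+B, M = R+B, Y = R+G, so RGB can be recovered by simple arithmetic. The sketch below assumes those perfect relationships and ignores real filter spectra and white balance:

```python
import numpy as np

def cmyg_to_rgb(c, m, y, g):
    """Idealized CMYG -> RGB conversion, assuming each complementary
    filter passes exactly two primaries: C = G+B, M = R+B, Y = R+G."""
    r = (m + y - c) / 2.0
    b = (c + m - y) / 2.0
    # The green filter measures G directly; blend it with the derived G
    g_derived = (c + y - m) / 2.0
    g_out = (g + g_derived) / 2.0
    return np.clip(np.stack([r, g_out, b]), 0.0, None)

# A pure-red patch: M and Y see it, C and G do not.
print(cmyg_to_rgb(np.array([0.0]), np.array([1.0]),
                  np.array([1.0]), np.array([0.0])))  # -> R=1, G=0, B=0
```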

The sensor size is 1/1.7 inch (7.6 x 5.7 mm), quite large for a compact camera.


The other benefit is that exposure times are halved. So if you shoot at ISO 100, an RGB camera with the same lens would have to be set to ISO 200 to match the exposure time. And since the Summicron lens on the DMC-LC5 and Digilux 1 is an f2.0, any camera with an f4 lens would have to be set to ISO 800 to match the exposure time.

Now, photon noise is the same for all cameras, but electronic noise depends on the ISO setting: the higher the ISO, the more amplification noise shows in the image. To compensate, noise suppression is applied at the RAW stage, and you won't see the noise due to the higher ISO. But the consequence is that dynamic range suffers, and the shadows and highlights must be compressed. You lose the data in the shadows as the ISO is increased. The camera has been optimized at its base ISO; any increase in ISO reduces both tonal variation and dynamic range. Manufacturers have been very clever in hiding this from the photographer, and most people have no clue what is happening to their picture. The resulting picture is very much a creation of the software, and if you were to compare the original scene to what your RGB camera produced you would be mystified. Luckily for the designers, the picture is looked at after the photographer has left the scene, so there is no way for the user to do a direct comparison - except in a studio setting.
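The ISO equivalents above are just stop counting: two stops from f2 to f4, plus the one-stop gain attributed to the CMYG array. A small sketch of that arithmetic (the function and its one-stop default are assumptions for illustration):

```python
import math

def equivalent_iso(base_iso, base_fnum, other_fnum, filter_gain_stops=1):
    """ISO the other camera needs to match the exposure time, counting
    stops lost to a slower aperture plus the stop gained by the CMYG filter."""
    aperture_stops = 2 * math.log2(other_fnum / base_fnum)  # f2 -> f4 = 2 stops
    return base_iso * 2 ** (aperture_stops + filter_gain_stops)

print(equivalent_iso(100, 2.0, 2.0))  # same lens, RGB camera -> ISO 200
print(equivalent_iso(100, 2.0, 4.0))  # f4 lens -> ISO 800
```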


Fall colors with a CMYG filter array (DMC-LC5 and Digilux 1)

Most cameras use RGBG sensor designs. Fuji and SONY have been experimenting with different filter arrays. Leica has a sensor design for the M Monochrom that has no filter array and can only make B&W pictures. In development are new designs that collect a wider band of wavelengths by using Red-Blue and Yellow-Cyan filters. The goal is to reduce noise levels in phone cameras by collecting more light, which in turn allows a thinner sensor and lower power consumption. Some of this will spill over to larger camera sensors. A phone sensor may cost only $1 to make, but any sensor larger than APS-C currently costs upwards of $1,000, which is why the Leica cameras and professional SLRs with sensors larger than APS-C cost more than $5,000.