Digital Image Sensors




What is a megapixel?

A megapixel is, quite simply, one million pixels. The number of pixels
on the sensor determines the megapixel value of a camera. Multiply the number of pixels wide by the number high to work out the total. A typical camera has images that are 3888 wide x 2592 high. This gives a total of 10,077,696 pixels, or roughly 10.1 megapixels. To work out the approximate megabyte size of an uncompressed file, you multiply this value by three, so that each of the three colour channels - red, green and blue - is accounted for.

A single pixel, with only luminosity information and without any colour, is roughly 1 byte. If you now add the three colour channels to that, it becomes 3 bytes - 1 byte per colour channel.
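To make that arithmetic concrete, here is a minimal Python sketch using the 3888 x 2592 example above. The figures are illustrative only; real file sizes also depend on compression and bit depth.

```python
# Illustrative only: pixel count and uncompressed file size for a
# 3888 x 2592 sensor, at one byte per colour channel.
width, height = 3888, 2592

total_pixels = width * height            # 10,077,696 pixels
megapixels = total_pixels / 1_000_000    # roughly 10.1 megapixels

bytes_per_pixel = 3                      # red, green and blue channels
file_size_mb = total_pixels * bytes_per_pixel / 1_000_000

print(f"{megapixels:.2f} megapixels, about {file_size_mb:.1f} MB uncompressed")
```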

Digital Image Sensor
Digital image sensors are the vital part of your digital camera. They are the light sensitive 'film' that records the image and allows you to take a picture. But how does it work, and what do all the names and numbers mean?
A digital camera sensor is, in simple terms, made up of three different layers.

1. The sensor substrate
This is the silicon material, which measures the light intensity. The sensor is not actually flat, but has tiny cavities, like wells, that trap the incoming light and allow it to be measured. Each of these wells or cavities is a pixel.

2. A Bayer filter
This is a colour filter that is bonded to the sensor substrate to allow colour to be recorded. The sensor on its own can only measure the number of light photons it collects. It has no way of determining the colour of those photons. As such, the sensor itself can only record in monochrome.

The Bayer filter was devised by Dr. Bryce E. Bayer, a scientist working for Eastman Kodak. He invented the particular red, green and blue arrangement of colour filters to capture colour information. Because of the alternating red/green and blue/green rows, it is sometimes called an RGBG filter.

The Bayer filter, often called the Colour Filter Array, or CFA, acts as a screen, only allowing light photons of a certain colour into each pixel on the sensor.

If you look at the diagram of the Bayer pattern (left), you will see it is made up of alternating rows of Red/Green and Blue/Green filters. The red filters, for example, will only allow red light photons to pass into the pixel below it.

Similarly, the green and blue filters will only allow green and blue light, respectively, to pass into the pixels below.
In this way, when the pixel measures the number of light photons it has captured, it knows that every photon is of a certain colour. For example, if a pixel that has a red filter above it has captured 5000 photons, it knows that they are all photons of red light, and it can therefore begin to calculate the brightness of red light at that point.
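As a rough illustration (not any manufacturer's actual processing), the short Python sketch below applies an assumed RGGB Bayer layout to a full-colour test image, so that each pixel keeps only the single channel its filter would let through. The test image and layout are assumptions made for the example.

```python
# Minimal sketch: apply an RGGB Bayer pattern to a full-colour image.
# Each pixel keeps only the channel its colour filter lets through.

def bayer_channel(row, col):
    """Return which channel (0=R, 1=G, 2=B) the filter passes at this pixel."""
    if row % 2 == 0:
        return 0 if col % 2 == 0 else 1   # red/green row
    return 1 if col % 2 == 0 else 2       # green/blue row

def apply_bayer(image):
    """image[row][col] is an (R, G, B) tuple; result keeps one value per pixel."""
    return [
        [pixel[bayer_channel(r, c)] for c, pixel in enumerate(row)]
        for r, row in enumerate(image)
    ]

# A tiny 2 x 2 'image' with the same colour at every pixel:
tiny = [[(200, 120, 40), (200, 120, 40)],
        [(200, 120, 40), (200, 120, 40)]]
print(apply_bayer(tiny))   # [[200, 120], [120, 40]] - R, G / G, B readings
```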


A sensor is composed of millions of light sensitive areas or pixels. These can be thought of as a group of buckets, into which the light falls and is trapped. The number of light rays falling into each bucket determines the brightness level at each pixel. Once the bucket is full, the light level is said to be 'blown'.



The buckets in the top image cannot measure the colour of the light, only the intensity. By placing a different coloured primary filter over each bucket, only light of that colour is captured. Each line of pixels has only two of the three primary colours, either red and green or blue and green.


There is space between each light sensitive bucket on the sensor. This is where some of the on-chip electronics are located. Any light falling on this area would be wasted as it could not be recorded, but microlenses placed above the filter help direct light into one or other of the adjacent pixels.

3. A microlens
This tiny lens sits above the Bayer filter and helps each pixel capture as much light as possible. The pixels do not sit precisely next to each other; there is a tiny gap between them. Any light that falls into this gap is wasted and will not be used for the exposure. The microlens aims to eliminate this waste by directing the light that falls between two pixels into one or other of them.

Full colour
If you've read everything so far very carefully, and had a good look at the picture of a Bayer pattern filter, you may have noticed that there are twice as many green squares as there are red or blue. This is because the human eye is much more sensitive to green light than either red or blue, and has a much greater resolving power in that range.

Similarly, you may also have wondered how the full colour image is created, if each pixel can only record a single colour of light. Surely, each pixel is missing two thirds of the colour data needed to make a full colour image?

Indeed it is, but thanks to some very clever algorithms within the camera, it succeeds in working out the full colour for each pixel. The method used is called 'demosaicing', and is very complex. However, in simple terms, the camera treats each 2x2 set of pixels as a single unit. This provides one red, one blue and two green pixels, and the camera can then estimate the actual colour based on the photon levels in each of these four wells.


Look at the diagram above. In that 2 x 2 square of four pixels, each pixel contains a single colour, either red, green or blue. Let's call them G1, B1, R1, G2.

At the end of the exposure, when the shutter has closed and the pixels are full of photons, they start their calculations.

If we take a single pixel, G1, this is what happens. G1 talks to B1, finds out how many blue photons it has got and adds them to its green. G1 then talks to R1 and G2 and does the same thing. G1 then has a complete set of primary colour data, from which it can build the full colour for its place on the sensor. At the same time as acquiring data from its neighbouring pixels, G1 is also giving its data to them so they can perform the same calculations.
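Here is a minimal Python sketch of that exchange, using made-up photon counts and the simplified 2x2 description above rather than any camera maker's real algorithm.

```python
# Simplified demosaic of one 2 x 2 block, using the pixel names from
# the text (G1, B1, R1, G2) and made-up photon counts.
G1, B1 = 5000, 3100
R1, G2 = 4200, 4800

# G1 borrows the blue count from B1 and the red count from R1, and pools
# its green reading with G2. The other three pixels do exactly the same,
# so every pixel in the block ends up with the same colour estimate.
red   = R1
green = (G1 + G2) // 2
blue  = B1

print("Full colour at G1:", (red, green, blue))   # (4200, 4900, 3100)
```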

This is only half the story, as it only considers a single pixel in a 2x2 grid. If you now imagine a pixel in the middle of a 3x3 grid, it can take the data from more pixels.


Based on the standard Bayer pattern, if the pixel in the centre is green (above), the surrounding pixels will be made up of two blue pixels, two red pixels and four green pixels.



If it is a red pixel in the centre (above), it will have four blue pixels and four green pixels around it.



If it is a blue pixel in the centre (above), it will be surrounded by four green pixels and four red pixels.

This still isn't the entire story, but exactly how cameras make up their full colour data is a closely guarded secret. You can assume that every single pixel is used by at least eight other pixels so that each can create a full panoply of colour data.
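As an illustrative sketch only, the snippet below averages a pixel's eight neighbours by colour over a 3x3 neighbourhood of an assumed RGGB mosaic, reflecting the neighbour counts just described. Real in-camera demosaicing is, as noted, far more sophisticated.

```python
# Illustrative 3 x 3 demosaic step on an assumed RGGB Bayer mosaic.
# mosaic[r][c] holds a single photon count; filter_colour gives its colour.

def filter_colour(r, c):
    """Colour of the Bayer filter at (r, c): 'R', 'G' or 'B' (RGGB layout)."""
    if r % 2 == 0:
        return 'R' if c % 2 == 0 else 'G'
    return 'G' if c % 2 == 0 else 'B'

def full_colour(mosaic, r, c):
    """Estimate (R, G, B) at an interior pixel from itself and its 8 neighbours."""
    sums = {'R': 0, 'G': 0, 'B': 0}
    counts = {'R': 0, 'G': 0, 'B': 0}
    for dr in (-1, 0, 1):
        for dc in (-1, 0, 1):
            colour = filter_colour(r + dr, c + dc)
            sums[colour] += mosaic[r + dr][c + dc]
            counts[colour] += 1
    return tuple(sums[ch] // counts[ch] for ch in 'RGB')

mosaic = [[4000, 5000, 4100],
          [4900, 3000, 5100],   # centre pixel (1, 1) sits under a blue filter
          [3900, 4800, 4200]]
print(full_colour(mosaic, 1, 1))   # averages 4 red, 4 green and 1 blue reading
```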

Effective pixels
What happens to the pixels right at the edge of a sensor? Pixels on the very edge don't have as many surrounding pixels from which to borrow information, so their colour data is not quite as accurate. This is the difference between actual pixels and effective pixels.

The actual number of pixels on a sensor is the total number of pixels. However, not all of these are used in forming the image. Those at the edge are ignored by the camera in forming the image, but their data is used by those further from the edge. This means that every pixel used in forming the image uses the same number of pixels to create its colour data.

This is why, when reading camera specifications, you might see 'effective pixels 10.1 million, total pixels 10.5 million'.

These extra 400,000 pixels in the total count are used to create colour data, but do not form part of the final image.
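A rough sketch of the idea, with entirely made-up sensor dimensions and border width, looks like this:

```python
# Illustrative only: total vs effective pixels when a border ring of
# pixels feeds the colour calculations but is excluded from the image.
total_w, total_h = 3936, 2624        # hypothetical total sensor dimensions
border = 24                          # hypothetical unused ring, in pixels

effective_w = total_w - 2 * border
effective_h = total_h - 2 * border

total_mp = total_w * total_h / 1_000_000
effective_mp = effective_w * effective_h / 1_000_000

print(f"total {total_mp:.1f} MP, effective {effective_mp:.1f} MP")
# e.g. total 10.3 MP, effective 10.0 MP for these made-up numbers
```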

The sensor in a camera has more pixels than are used to form the image. These extra pixels are used to improve the colour data in the image.



Taken from an article in a Canon publication

For more information, visit the website at
www.canon.co.uk
