Monday, May 4, 2015

How Does a Digital Camera Work?

From Rear Element to Hard Drive: Counting Photons


5 MP CMOS webcam imaging sensor with the IR filter removed. This image is just over 67 micrometers wide.
My setup for the above photo


Photography comes from the Greek roots φωτ- ("light") and γραφ- ("writing"), literally translating to "light writing." That is the core concept behind all photography, and from that perspective, digital photography is no different from the daguerreotypes produced in the 1840s. However, the digital camera sensor is the first photosensitive device that counts photons not as semipermanent physical alterations to a substance, but as electrical charges in silicon.

The Beginning


The first digital imaging device was the Charge-Coupled Device (CCD), invented at Bell Labs by Willard Boyle and George E. Smith. The first successful images were produced around 1971, but it was four more years before Steven Sasson at Kodak built the first digital camera, which took 23 seconds to record its black-and-white images onto a cassette tape.

In 1991 Kodak released the first commercially available digital SLR, the Kodak DCS 100, by modifying a Nikon F3 to house a digital sensor instead of film. DCS stood for Digital Camera System: the modified F3 required a separate Digital Storage Unit, about the size and shape of a car radio, in order to work.

A separate imaging technology, the Complementary Metal-Oxide-Semiconductor (CMOS) active-pixel sensor, was developed in 1993 by Eric Fossum at NASA's Jet Propulsion Laboratory. CMOS sensors are more complicated than CCD sensors and initially had problems with noise performance and 'fill factor' (the percentage of each pixel that is sensitive to light), so adoption was much slower. However, the manufacturing, temperature, and ISO performance advantages of CMOS sensors eventually won out, and CMOS is now the most common technology in digital cameras of all shapes and sizes.

How It Works


At a basic level, both CCD and CMOS sensors use the same magic trick to create a measurable electric current from the stream of incoming photons. The crystalline structure of silicon is made up of atoms arranged so that each silicon atom shares a pair of electrons with each of its four neighbors. Impurities can be introduced into the silicon through a process called doping, which creates regions with a slightly disproportionate number of charge carriers: N-type with extra electrons, and P-type with too few (holes). Placing an N-type region against a P-type region forms a P-N junction.

When an incoming photon strikes this P-N junction, it frees an electron-hole pair, and the electric field across the depletion region at the center of the junction pulls the two charges apart. That movement of charge is an electrical current, known as a drift current. The strength of the drift current is proportional to the number of photons striking the silicon junction, so it is what we measure to determine the intensity of light at that pixel location.
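
That proportionality is the whole job of the pixel: more photons in, more charge out. A minimal sketch of the idea in Python, using a made-up quantum efficiency and made-up photon counts rather than values from any real sensor:

##  Illustrative only: the quantum efficiency and photon counts
##  below are invented numbers, not measurements from a real sensor.
QUANTUM_EFFICIENCY = 0.5   # fraction of incoming photons that free an electron

def collected_charge(photon_count):
    """Electrons collected at one pixel during an exposure."""
    return photon_count * QUANTUM_EFFICIENCY

##  A patch of the scene that is twice as bright delivers twice
##  the photons, and so produces twice the charge:
print(collected_charge(10000))   # 5000.0 electrons
print(collected_charge(20000))   # 10000.0 electrons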


After a drift current is created in the silicon, it must be converted to a digital signal so that it can be saved to the camera's memory, processed (even a raw .NEF file—Nikon Electronic Format—gets processed, optionally compressed, and combined with other non-pixel data), and then written to the SD or CompactFlash storage. How this analog-to-digital conversion takes place is the key difference between CCD and CMOS sensors.
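
As a rough sketch of that conversion step (the 14-bit depth and 1.0 V full-scale value below are assumptions for illustration, not specifications of any particular camera):

##  Illustrative analog-to-digital step; the bit depth and full-scale
##  voltage are assumptions, not specs from any particular camera.
BIT_DEPTH = 14
FULL_SCALE_VOLTS = 1.0

def quantize(voltage):
    """Map an analog pixel voltage onto an integer raw value."""
    levels = 2 ** BIT_DEPTH                       # 16384 possible values
    code = int(voltage / FULL_SCALE_VOLTS * (levels - 1))
    return max(0, min(levels - 1, code))          # clamp to the valid range

print(quantize(0.25))   # 4095  (a quarter of full scale)
print(quantize(1.0))    # 16383 (a saturated pixel)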

The CCD sensor works like a bucket brigade at a fire: when photons create a charge in the silicon P-N junction, that charge is passed down the line of pixels through the silicon to be amplified and recorded at the end of each pixel row. This bucket-brigade design means that each individual pixel is very simple and essentially 100% light-sensitive by area.
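
A toy model of that readout, using a tiny 3x4 grid of invented charge values; each row shifts its charges one at a time toward the amplifier at its end:

##  Toy bucket-brigade readout; the charge values are invented.
sensor = [
    [10, 52, 47,  8],
    [11, 60, 55,  9],
    [12, 58, 50, 10],
]

def read_out(rows):
    recorded = []
    for row in rows:
        buckets = list(row)            # copy so the original is untouched
        while buckets:
            charge = buckets.pop()     # the charge sitting next to the readout node
            recorded.append(charge)    # amplification and digitization happen here
            ##  the remaining charges have each shifted one pixel closer
    return recorded

print(read_out(sensor))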

A CMOS sensor, on the other hand, has conversion and amplification circuitry at every pixel. This increases readout and processing speed and reduces overall power consumption, but it has downsides: the extra circuitry blocks some light, generates heat, and adds electrical noise. This is why a CMOS Nikon DSLR will show hot pixels at exposures longer than 30 seconds, while CCD cameras can expose for an hour without similarly severe noise or heat problems.
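
To see why heat and long exposures matter, here is a toy model (every number in it is invented purely for illustration): each pixel leaks a small amount of charge over time even in total darkness, and a defective 'hot' pixel leaks much faster, so it only shows up once the exposure is long enough.

##  Toy dark-leakage model; every number here is invented for illustration.
NORMAL_LEAK = 0.1       # electrons per second for a well-behaved pixel
HOT_LEAK = 50.0         # electrons per second for a defective "hot" pixel
VISIBLE = 1000.0        # accumulated charge at which the leak shows in the image

def dark_signal(leak_rate, exposure_seconds):
    """Charge that accumulates with no light at all."""
    return leak_rate * exposure_seconds

##  At 1/100 s neither pixel registers anything noticeable, but by a
##  30 second exposure the hot pixel has already crossed the threshold:
for t in (0.01, 30, 3600):
    print("%s s: normal=%s, hot pixel visible=%s"
          % (t, dark_signal(NORMAL_LEAK, t), dark_signal(HOT_LEAK, t) > VISIBLE))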

But What About Color?


P-N silicon junctions can only determine luminance, the intensity of the incoming light, at a particular location. Color cannot be measured directly and must be interpolated. Each pixel in a digital camera is covered with a single color filter (most commonly red, green, or blue, though sometimes yellow or white), so each pixel records the intensity of one color at its location. By arranging different filters next to each other, it is possible to interpolate all three color channels at a hypothetical location in the center of the original pixels.

Color filter arrays have a long history, predating digital sensors and even color film. Some of the earliest attempts at reproducing color images used rudimentary color filters on black and white film so that the images could later be projected by three projectors with colored lenses.

Bayer filter array, drawn and rendered in Blender. Each pixel is 4.78 micrometers wide.


The most common color filter array used today is the Bayer filter, invented by Bryce Bayer at Eastman Kodak in 1976. Bayer's filter uses three colors arranged so that the first row of pixels alternates between red and green, and the next row alternates between green and blue. Green appears twice as often because the human eye is most sensitive to green light, and doubling it lets the three colors fit a convenient grid.
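
That layout is easy to express in code. A small sketch of the pattern just described, assuming the top-left pixel of the sensor is red (the same assumption the interpolation script further down makes):

##  Bayer pattern lookup, assuming the top-left pixel (0, 0) is red.
def bayer_color(x, y):
    if y % 2 == 0:
        return "R" if x % 2 == 0 else "G"   # even rows: red/green
    else:
        return "G" if x % 2 == 0 else "B"   # odd rows: green/blue

##  Print the top-left 4x4 corner of the filter array:
for y in range(4):
    print(" ".join(bayer_color(x, y) for x in range(4)))
##  R G R G
##  G B G B
##  R G R G
##  G B G B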

As illustrated in the image above, the color filters are capped by tiny lenses—microlenses—that increase efficiency by directing the incoming light into the photosensitive areas of each pixel. This means sensors can capture more light, but it is also the reason that lenses designed for film cameras do not perform equally well on a digital body: digital sensors perform best when incident light travels perpendicular to the plane of the sensor, which was not a concern for engineers designing optics for film.

The image on the left shows a droplet of water on a red surface. The upper left corner shows the raw sensor data (multiplied linearly to a convenient brightness) and the bottom right shows the Bayer filter array pattern overlaid (for illustration purposes) on the luminance values of the sensor—red shows up brightest except on the water droplet where the colors are more balanced.

To create a smooth, colored image, the colors at each pixel must be interpolated. Interpolation is the process of estimating the likeliest value of a variable at an arbitrary point based on known, nearby values of that variable: if every pixel nearby is green, it is most likely that an unknown pixel in the middle is green as well. Every pixel on a camera sensor is a known value: one pixel tells us how much red hit its area and its neighbor tells us how much green hit the area next to it, so we can assume that a mixture of red and green light, in the proportion given by those two luminance values, fell across both pixels.

Interpolating accurate, sharp color is a field of extensive research, but the basic concept is fairly simple. The most basic approach is to create a hypothetical pixel at the junction of every four real pixels and assume that those four pixels together supply the R, G, and B components of the light that hit the hypothetical pixel. Said another way, in Python:

from __future__ import division
from PIL import Image
import subprocess
import sys 

##  Use dcraw on the command line to unpack the RAW file into a
##  16-bit greyscale .pgm that the Python Imaging Library can read.
##  "-D" keeps the data un-demosaiced, "-4" keeps it linear 16-bit,
##  and "-b 30" is a brightness fudge factor that would need to be
##  adjusted per image.
subprocess.call(["dcraw","-D","-4","-b","30",str(sys.argv[1])])

##  Derive the base filename and open the .pgm that dcraw wrote
filename = (str(sys.argv[1])).split(".")[0]
inputimage = Image.open("%s.pgm" %(filename))

##  Grab the image size; create empty pixel list
width, height = inputimage.size 
outputpixels = []

##  For every row, iterate across the pixels:
for y in range(0, height-1):
    for x in range(0, width-1):

        ##  Get the 2x2 block of pixels. PIL coordinates are (x, y),
        ##  so A2 sits below A1 and A3 sits to its right:
        ##    A1 A3
        ##    A2 A4

        A1 = inputimage.getpixel((x,y))
        A2 = inputimage.getpixel((x,y+1)) 
        A3 = inputimage.getpixel((x+1,y)) 
        A4 = inputimage.getpixel((x+1,y+1))

        ##  Assign the R,G,B values of a new pixel from the brightness
        ##  of the corresponding Bayer-filtered pixels (assuming an
        ##  RGGB pattern with red at the top-left)
        if x%2 == 0 and y%2 == 0: R,G,B = A1,(A2+A3)/2,A4
        elif x%2 == 0 and y%2 == 1: R,G,B = A2,(A1+A4)/2,A3
        elif x%2 == 1 and y%2 == 0: R,G,B = A3,(A1+A4)/2,A2
        else: R, G, B = A4, (A2+A3)/2, A1

        ##  Crudely convert from 16 bits per channel to 8 bits per channel
        R, G, B = R/65535*255, G/65535*255, B/65535*255

        ##  White-balance factors for roughly 5000 K light
        R, G, B = (R*1.8, G*.97, B*1.14)    

        ##  Round to integers and stick them in the list
        outputpixels.append((int(R), int(G), int(B)))

##  Make a new image, dump the pixels in, and save.
outputimage = Image.new('RGB', (width-1, height-1))
outputimage.putdata(outputpixels)
outputimage.save('debayered.tiff') 
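
To try it (assuming dcraw and the Python Imaging Library are installed, and that the script is saved as, say, debayer.py), run it with a NEF file as the argument: python debayer.py DSC_0001.NEF. The filenames here are placeholders; the script writes its result to debayered.tiff alongside the unpacked .pgm.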

Unpacked greyscale data from NEF file
Color interpolation in Python
 
 

Digital Film


It is not possible to hold or look at a raw digital file the way you can look at a negative. The examples in this article come close, but they have already been 'processed' slightly: the pixels were arranged into an image and brightened for display on screen, and a screen cannot come anywhere close to displaying all of the data at one brightness level. Likewise, you cannot watch the image magically appear in a chemical bath or dodge and burn by hand, but that does not mean a digital file is not undergoing processes just as complex and nuanced as those used to develop film.

These tactile experiences have been lost, but the art remains: just opening the same image in Adobe Camera Raw, Capture One, Aperture, RawTherapee (which lets users select different interpolation algorithms!), and other RAW processing software will give different results, even before you start "editing" the picture. Photography is not, and never will be, a simple, perfect science, and understanding its technical and mechanical nuances is, and always will be, imperative to writing beautifully with light.
Developing a RAW file: what the sensor recorded, what the filters looked like, and what Adobe interpolated.

The image above sums up the process: left is the data the camera sensor recorded, center is the Bayer filter array colors overlaid to show what the computer can assume, and right is the reconstructed, color image processed in Adobe Camera Raw on default settings.
