Monday, May 4, 2015

How Does a Digital Camera Work?

From Rear Element to Hard Drive: Counting Photons


5mp CMOS Webcam imaging sensor with IR filter removed.  This image is just over 67 micrometers wide. 
My setup for the above photo


Photography comes from the Greek words φωτός (phōtos) and γράφειν (graphein), literally translating to "light writing." That is the core concept behind all photography, and from that perspective, digital photography is no different from the daguerreotypes produced in the 1840s. However, the digital camera sensor is the first photosensitive device that counts photons not as semipermanent physical alterations to a substance, but as electrical charges in silicon.

The Beginning


The first digital imaging device was the Charge-Coupled Device (CCD), invented at Bell Labs by Willard Boyle and George E. Smith. The first successful images were produced around 1971, but it was four more years until Steven Sasson at Kodak built the first digital camera, which took 23 seconds to record its black-and-white images onto a cassette tape.

In 1991 Kodak released the first commercially available digital SLR, the Kodak DCS 100, by modifying a Nikon F3 to house a digital sensor instead of film. DCS stood for Digital Camera System: the modified F3 required a separate Digital Storage Unit, about the size and shape of a car radio, in order to work.

A separate imaging technology, the Complementary Metal Oxide Semiconductor (CMOS) sensor, was developed in 1993 by Eric Fossum at NASA's Jet Propulsion Laboratory. CMOS sensors are more complicated than CCD sensors and initially had issues with noise performance and 'fill factor' (the percentage of each pixel that is sensitive to light), so adoption was much slower. However, the manufacturing, temperature, and ISO-performance advantages of CMOS sensors eventually won out, and CMOS is now the most common technology in digital cameras of all shapes and sizes.

How It Works


At a basic level, both CCD and CMOS sensors use the same magic trick to create a measurable electric current from the stream of incoming photons. The crystalline structure of silicon is made up of silicon atoms arranged so that each atom shares a pair of electrons with each of its four neighbors. However, impurities can be introduced into the silicon through a process called doping, which creates regions with a slightly disproportionate number of electrons: N-type with more electrons, and P-type with fewer.

When this P-N junction is disturbed by an incoming photon, an electron-hole pair is formed and pulled apart by the electric field in the depletion region at the center of the junction. The movement of these charges is an electrical current, known as a drift current. The strength of this drift current is proportional to the number of photons striking the silicon junction, so it is what we measure to determine the intensity of light at that pixel location.
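To put rough numbers on that proportionality, here is a small sketch of the chain from photons to a recorded value. The quantum efficiency, full-well capacity, and bit depth below are made-up but plausible figures, not the specifications of any real sensor:

def photons_to_number(photons, quantum_efficiency=0.5,
                      full_well_electrons=40000, bit_depth=14):
    ##  Only a fraction of the incoming photons free an electron-hole pair
    electrons = photons * quantum_efficiency
    ##  The pixel saturates ("clips") once its well is full
    electrons = min(electrons, full_well_electrons)
    ##  The analog-to-digital converter maps the full well
    ##  onto 2^bit_depth discrete levels
    return int(round(electrons / full_well_electrons * (2**bit_depth - 1)))

print(photons_to_number(10000))   ##  about 2048 on this hypothetical sensor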


After a drift current is created in the silicon, it must be converted to a digital signal so that it can be saved to the camera's memory, processed (even a raw .NEF file—Nikon Electronic Format—gets processed, optionally compressed, and combined with other non-pixel data), then written to the SD or CompactFlash storage. How this analog-to-digital conversion takes place is the key difference between CCD and CMOS sensors.

The CCD sensor works like a bucket brigade at a fire: when photons create a charge in the silicon P-N junction, that charge is passed down the line of pixels through the silicon to be amplified and recorded at the end of each pixel row. This bucket-brigade design means that each individual pixel is very simple and 100% light-sensitive by area.
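As a toy illustration of that bucket brigade (not a description of any real readout circuit), imagine a single row of charge packets being shifted, one pixel at a time, toward the shared amplifier at its end:

def amplify_and_convert(charge):
    ##  Stand-in for the amplifier and ADC at the end of the row
    return int(charge * 1.5)

def read_out_row(charges):
    ##  'charges' is one row of accumulated charge packets, with the
    ##  amplifier sitting at the right-hand end of the row
    digitized = []
    row = list(charges)
    while row:
        ##  The packet next to the amplifier is measured, and every
        ##  remaining packet shifts one pixel closer (modeled by pop())
        packet = row.pop()
        digitized.append(amplify_and_convert(packet))
    return list(reversed(digitized))

print(read_out_row([10, 200, 35]))   ##  -> [15, 300, 52]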

A CMOS sensor, on the other hand, has conversion and amplification circuitry on every pixel. This increases readout and processing speed and reduces overall power consumption, but it does have downsides: the added complexity blocks some light, generates heat, and increases electrical noise. This is why a CMOS Nikon DSLR will show hot pixels at exposures longer than 30 seconds, while CCD cameras can expose for an hour without similarly severe noise or heat issues.

But What About Color?


P-N silicon junctions can only determine luminance—the intensity of the incoming light—at a particular location. Color therefore cannot be measured directly and must be interpolated. Each pixel in a digital camera is covered with a single color filter, commonly red, green, or blue but sometimes yellow or white, which allows each pixel to represent the intensity of one color at that location. By arranging different filters next to each other, it is possible to interpolate all three channels of a color at a hypothetical location in the center of the original pixels.

Color filter arrays have a long history, predating digital sensors and even color film. Some of the earliest attempts at reproducing color images used rudimentary color filters on black and white film so that the images could later be projected by three projectors with colored lenses.

Bayer filter array, drawn and rendered in Blender. Each pixel is 4.78 micrometers wide.


The most common color filter array used today is the Bayer filter, invented by Bryce Bayer at Eastman Kodak in 1976. Bayer's filter uses three colors, arranged so that the first row of pixels alternates between red and green and the next row alternates between green and blue. Green was chosen as the duplicated color to more closely match the perception of the human eye—and to fit things into a convenient grid.
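The row-alternating layout can be written down directly. This little sketch assumes red sits in the top-left corner, which is not true of every camera, but the checkerboard of green is the same everywhere:

def bayer_color(x, y):
    ##  Even rows alternate red/green; odd rows alternate green/blue
    if y % 2 == 0:
        return 'R' if x % 2 == 0 else 'G'
    return 'G' if x % 2 == 0 else 'B'

for y in range(4):
    print(' '.join(bayer_color(x, y) for x in range(4)))
##  R G R G
##  G B G B
##  R G R G
##  G B G B

These are the same x%2 and y%2 parity checks that the interpolation script later in this post uses to decide which of a pixel's neighbors is red, green, or blue.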

As illustrated in the image above, the color filters are capped by tiny lenses—microlenses—that increase efficiency by directing the incoming light into the photosensitive areas of each pixel. This lets sensors capture more light, but it is also the reason that lenses intended for film cameras do not perform equally well on a digital body: digital sensors perform best when incident light travels perpendicular to the plane of the sensor, which was not a concern for engineers designing optics for film cameras.

The image on the left shows a droplet of water on a red surface. The upper left corner shows the raw sensor data (multiplied linearly to a convenient brightness) and the bottom right shows the Bayer filter array pattern overlaid (for illustration purposes) on the luminance values of the sensor—red shows up brightest except on the water droplet where the colors are more balanced.

To create a smooth, colored image, the colors of each pixel must be interpolated. Interpolation is the process of estimating the likeliest value of a variable at an arbitrary point based on known, nearby values of that variable—if every pixel nearby is green, it's most likely that an unknown pixel in the middle is green as well. Every pixel on a camera sensor is a known value: pixel one tells us how much red hit that area and pixel two tells us how much green hit the area, so we can assume that a combination of red and green light hit those two pixels in the proportion represented by their luminance values.

Interpolating accurate, sharp color is a field of extensive research, but the basic concept is fairly simple. The most basic way is to simply create a hypothetical pixel at the junction of every four pixels, and assume that each of those four pixels accurately represents the R, G, and B components of the light that hit the hypothetical pixel. Said another way, in Python:

from __future__ import division
from PIL import Image
import subprocess
import sys 

##  Use dcraw on the command line to unpack the RAW file 
##  into a format that Python Imaging Library can use
##  The "-b 30" is a brightness fudge factor, which would
##  need to be adjusted per image. 
subprocess.call(["dcraw","-D","-4","-b","30",str(sys.argv[1])])

##  Create a new file 
filename = (str(sys.argv[1])).split(".")[0]
inputimage = Image.open("%s.pgm" %(filename))

##  Grab the image size; create empty pixel list
width, height = inputimage.size 
outputpixels = []

##  For every row, iterate across the pixels:
for y in range(0, height-1):
    for x in range(0, width-1):

        ##  Get the 2x2 block of pixels; note that x is the column
        ##  and y is the row, so the block is laid out as:
        ##    A1 A3
        ##    A2 A4

        A1 = inputimage.getpixel((x,y))
        A2 = inputimage.getpixel((x,y+1)) 
        A3 = inputimage.getpixel((x+1,y)) 
        A4 = inputimage.getpixel((x+1,y+1))

        ##  And assign the R,G,B values of a new pixel
        ##  to the brightness of the corresponding 
        ##  bayer filtered pixel
        if x%2 == 0 and y%2 == 0: R,G,B = A1,(A2+A3)/2,A4
        elif x%2 == 0 and y%2 == 1: R,G,B = A2,(A1+A4)/2,A3
        elif x%2 == 1 and y%2 == 0: R,G,B = A3,(A1+A4)/2,A2
        else: R, G, B = A4, (A2+A3)/2, A1

        ##  Crudely convert from 16 bits per channel to 8 bits per channel
        R, G, B = R/65535*255, G/65535*255, B/65535*255

        ##  Factors to balance 5000k light
        R, G, B = (R*1.8, G*.97, B*1.14)    

        ##  Round to integers and stick them in the list
        outputpixels.append((int(R), int(G), int(B)))

##  Make a new image, dump the pixels in, and save.
outputimage = Image.new('RGB', (width-1, height-1))
outputimage.putdata(outputpixels)
outputimage.save('debayered.tiff') 

Unpacked greyscale data from NEF file
Color interpolation in Python
 
 

Digital Film


It is not possible to hold or look at a raw digital file the same way you can look at a negative. The examples in this article come close, but even they have been 'processed' slightly by arranging the pixels into an image and brightening them for display on screen—and the screen cannot come anywhere close to displaying all the data at one brightness level. Likewise, you cannot watch the image magically appear in a chemical bath or dodge and burn by hand, but that does not mean a digital file is not undergoing processes just as complex and nuanced as those used to develop film.

These tactile experiences have been lost, but the art remains: just opening the same image in Adobe Camera Raw, Capture One, Aperture, RawTherapee (which lets users select different interpolation algorithms!), and other RAW processing software will give different results—even before you start "editing" the picture. Photography is not, and never will be, a simple, perfect science, and understanding its technical and mechanical nuances is and always will be imperative to writing beautifully with light.
Developing a RAW file: what the sensor recorded, what the filters looked like, and what Adobe interpolated.

The image above sums up the process: left is the data the camera sensor recorded, center is the Bayer filter array colors overlaid to show what the computer can assume, and right is the reconstructed, color image processed in Adobe Camera Raw on default settings.

Wednesday, April 8, 2015

2,000 Miles in the Ambulance RV Project

80-200mm f/2.8 AF-D @ 86mm f/5.6 1/640 ISO 100

Map of my trip according to GPS data from a few cellphone photos.
This January I was finally able to get out and stretch my new legs, putting my ambulance RV conversion project to the test on a longer trip. My vehicle of choice is a 1991 Ford E-350 Medtech Ambulance, purchased off eBay out of Pena, IL for $3,600. This trip pushed the odometer past 40,000 miles—hardly broken in for the 7.3L IDI International engine. I have converted the rear ambulance box to include a folding bed and other small amenities, but I spend as much time as I can traveling, camping, and photographing from the ambulance, so build progress is slow.

17-55mm f/2.8 AF-S @ 55mm f/10 1/400s ISO 100

Storms are fun in the ambulance; 8,000 lbs of insulated steel gives plenty of confidence, and the fact that I don't need to leave the vehicle to transition from sleeping to driving lets me worry little about the weather—the coldest night I spent on this trip was -8° just east of the Continental Divide. The ambulance does not yet have any heater except the engine.

The view across the bottom of Death Valley—the rim of the valley is so far away that through this 200mm lens it appears distinctly blue.

Darwin Falls, Death Valley National Park.

17-55mm f/2.8 AF-S @ 17mm f/18 1s ISO 100

17-55mm f/2.8 AF-S @ 17mm f/5 1/1000s ISO 100; three-shot panorama

17-55mm f/2.8 AF-S @ 17mm f/4 1/1600 ISO 100

80-200mm f/2.8 AF-D @ 100mm f/8 1/1000 ISO 100






17-55mm f/2.8 AF-S @ 35mm f/9 1/320 ISO 100

17-55mm f/2.8 AF-S @ 17mm f/5 1/1250 ISO 100

80-200mm f/2.8 AF-D @ 80mm f/8 1/1250 ISO 100

The ambulance against the vastness of Death Valley; it's most at home on long road trips—of which there will be many more.

17-55mm f/2.8 AF-S @ 17mm f/9 1/320 ISO 100



Monday, April 14, 2014

Moon Photography: Stellarium, Focal Length, and Exposure Times.

On Apr. 15, there will be a total lunar eclipse visible from almost the entire North American continent. The next total lunar eclipse visible from the east coast will not be until 2015, and the next total lunar eclipse visible from the entire North American continent will not be until 2019, so this is a rare opportunity to get some really unusual photographs.


Here is the field of view one can expect from a few common configurations:


D800E @ 200mm with 2X TC
D7000 @ 200mm with 2X TC
D7000 @ 200mm
D800E @ 200mm
D7000 @ 50mm
D800E @ 50mm


How to make simulations like these for any camera:

(Or skip to exposure time section)

Stellarium is an open source planetarium program capable of simulating the sky and celestial bodies at any time as seen from anywhere on Earth (and from most other known planets and moons). It's a good way to know exactly when and where you should be looking from your location. If you are photographing through a telescope, it can also drive many telescopes to track the moon and other celestial objects.

General Stellarium Setup

Location selection is the top option on the leftmost menu. You can search for a city or enter precise GPS coordinates if you have them, or just click around the map.

The second option on the left toolbar is the date and time, which you should set to about 1:00 on April 15th. On the right side of the bottom toolbar, the fast-forward, rewind, normal, and current-time buttons let you travel back and forth in time through the entire event.

Once you have the location, date, and time selected, Stellarium will be displaying a view of the night sky that allows you to zoom and pan around and click on celestial bodies for information.

Camera and Lens Simulation (Oculars plugin) 

However, we're interested in simulating the view through a specific lens onto a specific camera sensor. To do that, we want to use the Oculars plugin. Oculars may be enabled automatically, but if it isn't, it can be enabled under Configuration Window > Plugins > Oculars. Be sure that the "Load at startup" option is selected.

The Oculars configuration window is where you can specify a camera and lens combination. Here's an overview of the important parts of the settings panel:

General: As far as this guide is concerned, check all three boxes on top and enable the on-screen interface.
Eyepieces: This is where eyepiece information is specified; it can be ignored for astrophotography because we are targeting a sensor as the final element.
Lenses: This is where you should specify a teleconverter, if applicable, not the lens you are using.
Sensors: This accepts information about the exact sensor you're using. Stellarium uses this sensor information to calculate crop factor and other values, but for a quick-and-dirty view the pixel size can be omitted (just put in 4.8).
Telescopes: This is where information about your lens goes. Diameter can be omitted (set to 80) if you just want a field-of-view estimate.
About: This one explains itself. That's the point.

Now you should have four buttons in the upper right corner: telescope view, sensor view, scope view, and settings. Selecting sensor view (the rectangle) draws the red rectangle on the screen and opens up a menu where you can select the camera and lens setup you entered earlier. For example, the image on the left shows the simulated view of the setup I plan on using on the 15th.



Exposure Time

Again, I'll offer a cheat-sheet before my derivation:

Nikon D7000, APS-C 16.2 mp, with no motion blur:
200mm lens: 1/4s or faster
400mm lens: 1/6s or faster
1250mm lens: 1/20s or faster

Nikon D7100, APS-C 24.1 mp, with no motion blur:
200mm lens: 1/4s or faster
400mm lens: 1/8s or faster
1250mm lens: 1/25s or faster

Nikon D800E, Full frame 36.3 mp, with no motion blur:
200mm lens: 1/4s or faster
400mm lens: 1/8s or faster
1250mm lens: 1/25s or faster

The D7100 and D800E call for the same speeds because, even though the D800E has more resolution, its pixels are spread over a wider sensor than the D7100's, and the two just happen to round out roughly equal.

In general astrophotography, some people use the "Rule of 600" to estimate how long an exposure can be before the stars start to visibly exhibit motion blur. It states that 600 divided by the 35mm-equivalent focal length of the lens gives the exposure time in seconds for an acceptably sharp image. For example, the 200mm lens with a 2X teleconverter on my D7000 is 600mm equivalent, and 600 / 600 = 1, so 1 second is approximately the longest I should expose the stars. That 400mm lens on my APS-C camera has a horizontal angle of view of about 3.34° (2arctan(35/(2*(1.5*400)))), projected onto 4,928 horizontal pixels, so each degree of view is spread over roughly 1,475 pixels. Assuming the worst case, the fastest stars appear to move at about 0.0042 degrees per second (360° over 24 hours), so a 1 second exposure is 0.0042° of movement, which equals a blur of about 6.1 px.
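Here is the same arithmetic as a small script, so the numbers can be rerun for other lenses and cameras. The 23.6mm sensor width and 4,928 horizontal pixels are my D7000's, and I am treating "frozen" as less than one pixel of trailing; swap in your own values as needed:

import math

##  360 degrees over 24 hours, as above: about 0.0042 degrees per second
DRIFT_RATE = 360.0 / (24 * 3600)

def star_blur_px(focal_length_mm, exposure_s,
                 sensor_width_mm=23.6, horizontal_px=4928):
    ##  Horizontal angle of view of this lens on this sensor
    aov = math.degrees(2 * math.atan(sensor_width_mm / (2.0 * focal_length_mm)))
    pixels_per_degree = horizontal_px / aov
    return DRIFT_RATE * exposure_s * pixels_per_degree

def max_exposure_s(focal_length_mm, max_blur_px=1.0):
    ##  Longest exposure that keeps trailing under max_blur_px
    return max_blur_px / star_blur_px(focal_length_mm, 1.0)

print(round(star_blur_px(400, 1.0), 1))   ##  ~6.1 px in one second at 400mm
print(round(max_exposure_s(1250), 3))     ##  ~0.05s, about 1/20s through the telescope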

The moon also moves about 13° per night relative to the background stars, so it works out to roughly 6.3 pixels of motion blur with the same lens and a one-second exposure. Because the moon is the entire focus of the image, I think 6.3 px is way too much blur. To reduce the moon-motion blur to three pixels, we would need to expose for no more than 1/2 second, and to remove motion blur entirely, no more than 1/6 second.

This means that acceptable exposure times go down drastically with the length of the lens; my 1250mm telescope will have three pixels or more of motion blur at 1/6 second, and it will have to be faster than 1/20s in order to actually freeze the moon.

Keep in mind that my D7000 does not have an incredibly high resolution sensor, and to freeze motion on a higher resolution camera would require faster speeds. The D7100 has 24.1 mp across the same image sensor area, so through the 400mm lens it would require a shutter speed of 1/8 to fully freeze the moon, or 1/25 through the telescope. Achieving proper exposure with these shutter speeds isn't an issue with a brightly lit moon, but when it passes into the sun's shadow these limits may come into play.

Note that taking photos at faster shutter speeds than these will not improve sharpness (assuming the camera is on a tripod and triggered by a remote shutter), so it's better to use these speeds with as low an ISO as possible instead of boosting ISO to increase shutter speed.

Wednesday, February 26, 2014

Violin

50mm f1.4 @ f/4 1/60s ISO 100 with SB 800 @ 1/1 and SB 600 @ 1/125




Lighting setup: SB 600 (top) with built-in diffuser, SB 800 (bottom) with a styrofoam cup diffuser.

18-55mm f/3.5-f/5.6 @18mm  f/11 1/60s ISO 100 with SB 800 @ 1/1 and SB 600 @ 1/125

Tuesday, February 25, 2014

Loveland Pass

18-55mm f/3.5-f/5.6 @ f/8 8s ISO 400; lit by car headlight sweep

18-55mm f/3.5-f/5.6 @ f/11 1/60s ISO 800 with SB-600 @ 1/1

18-55mm f/3.5-f/5.6 @ f/8 8s ISO 400; lights from a snowplow


18-55mm f/3.5-f/5.6 @ f/11 1/60s ISO 800 with SB-600 @ 1/1

Tuesday, February 18, 2014

Portraits

50mm f/1.4 AF-D @ f/3.5 1/60s ISO 250


70-200mm f/2.8 AF-S @ 200mm f/2.8 ISO 100 1/640

70-200mm f/2.8 AF-S @ 95mm f/2.8 ISO 100 1/640