Things You Probably Don’t Know About Your Digital Camera

Another peripheral post about imaging, prompted by a recent project that brought it to mind. It’s always nice to have broadly interesting, non-confidential material to talk about on here, and while this is well known to many, I’ve discovered on some projects that many in this field aren’t aware of the magic happening in their devices.

To get right to it.

Your Camera’s Imaging Resolution is Lower Than Advertised

The term “pixel” (picture element) is generally holistic: a discretely controlled illumination of any visible shade of color, whatever the mechanism of achieving that result. Your 1080p television or computer monitor has 1920 by 1080 pixels, or about 2 million “pixels”. If you look closely, however, your monitor has discrete subpixels for the emissive colors (generally red-green-blue, though Sharp’s Quattron display technology adds yellow for an RGBY subpixel arrangement). These might be oriented horizontally or vertically, staggered, or laid out in a triangle pattern…in virtually any arrangement.

At appropriate combinations of resolution density and viewing distance, your eyes naturally demosaic, blending the individual colors into a perceived full-color representation.

The vast majority of digital cameras, however, use the term pixel in a very different way.

To visually explain, here’s a portion of RAW sensor data taken directly from a Nexus 6P. The only processing applied was source color channel gains and scaling from the original 100 by 63 up to a more visible 600 by 378.

[Image: car_bayer_closer, a magnified crop of the raw Bayer mosaic]

In the digital camera world, each of these discrete colors is a whole pixel.

If you inspect the pixels you’ll notice that they aren’t full color (though that is obvious just by eyeballing the image). Each unique location is exactly one of red, green, or blue, at varying intensities. There are no mixes of the colors.

The imaging sensor has photosites (pits) that can measure photons, but they have no awareness of wavelength. To facilitate color measurement, a physical color filter is overlaid over each photosite, alternating between the three colors. This is generally a Bayer color filter.

There is another type of sensor that layers wavelength-sensitive silicon (much like the layers of classic film), capturing full color at each site; the Foveon X3 is the best-known example. However, it is very rarely used and has its own problems.

Green is most prevalent, comprising 50% of the pixels, given that it’s the color band where the human eye is most sensitive to intensity changes and detail. Red alternates with green on one line, while blue alternates with green on the next, as sketched below.
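To make that layout concrete, here’s a minimal sketch in Python (assuming an RGGB tile; the 2x2 tile order varies between sensors):

```python
import numpy as np

def bayer_masks(height, width):
    """Boolean masks for an RGGB Bayer layout: R at even/even sites,
    B at odd/odd, G everywhere the row/column parity differs."""
    rows = np.arange(height)[:, None]
    cols = np.arange(width)[None, :]
    red = (rows % 2 == 0) & (cols % 2 == 0)
    green = (rows % 2) != (cols % 2)
    blue = (rows % 2 == 1) & (cols % 2 == 1)
    return red, green, blue

r, g, b = bayer_masks(3024, 4032)
print(g.mean(), r.mean(), b.mean())  # 0.5 0.25 0.25
```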

The functional resolution of detailed color information, particularly in the red and blue channels, is much lower than many believe. That is also why many devices add physical and processing steps (e.g. anti-aliasing filters) that further reduce the usable resolution, blurring away the artifacts.

The Nexus 6P ostensibly has a 4032 x 3024 imaging resolution, but courtesy of the Bayer filter it really has a 2016 x 3024 green resolution, a 2016 x 1512 blue resolution, and a 2016 x 1512 red resolution. For fine hue details the effective resolution can be 1/4 of expectations, and this is why fully zoomed-in pictures are often somewhat disappointing (also courtesy of the processing and filtering that try to mask the color channel information deficiencies). The arithmetic is spelled out below.
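Spelled out as a trivial sketch, using the 6P’s advertised numbers:

```python
w, h = 4032, 3024
total = w * h            # 12,192,768 photosites: the advertised "12 MP"
green = total // 2       #  6,096,384 green samples (2016 x 3024)
red = blue = total // 4  #  3,048,192 red and blue samples (2016 x 1512 each)
```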

Your Camera’s Imaging Sensor Has Many Defects

Due to defects in the silicon, the application of the physical Bayer filter, and electrical gain noise, many of the photosites on your digital sensor are defective.

Some read nothing, while many more see ghosts, reporting mild to significant false readings. Readings of a constant-brightness target will vary, sometimes significantly, across pixels (yielding a grainy, noisy output image).

[Image: falsereadings, a 150-pixel-wide crop of a dark-frame capture showing scattered hot pixels]

This is a random 150-pixel-wide reading from the 6P when taking a 1/10s picture of pure darkness. These defective readings cover the entire capture in varying densities, comprising hundreds of false data points. Most are permanent, with new ones often appearing as the device ages, and some defects temporarily worsen when the sensor is warm. Most SLRs have a special mode where they take a full-darkness picture and then catalog and remove all hot pixels from the output material. Android also has the notion of remembering hot pixels.
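That dark-frame cataloging can be sketched simply: expose pure darkness, then flag any photosite reading well above the noise floor. A minimal version (the median + MAD threshold is my illustrative choice, not any vendor’s actual method):

```python
import numpy as np

def catalog_hot_pixels(dark_frame, sigma=6.0):
    """Return (row, col) coordinates of photosites that read far above
    the noise floor in a dark frame. Uses median + sigma * MAD as a
    robust, illustrative threshold."""
    med = np.median(dark_frame)
    mad = np.median(np.abs(dark_frame - med)) + 1e-6  # avoid a zero MAD
    return np.argwhere(dark_frame > med + sigma * mad)
```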

This is the case with every digital sensor, from your smartphone to your high-end SLR. I remember being somewhat horrified the first time I looked at a wholly unprocessed RAW image from my SLR and saw hundreds of fully lit pixels scattered across the image.

Algorithms Save The Day

The solution to all of these problems is processing, but it does have consequences.

Hot pixels are eliminated both through prior knowledge (a hot pixel database for a given sensor) and through simply eliminating pixels that shine a little too bright relative to their neighbors. They get replaced with an interpolated average of those neighbors, as sketched below.
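A sketch of that replacement step, assuming a single-band Bayer mosaic where same-color neighbors sit two photosites away:

```python
import numpy as np

def suppress_hot_pixels(mosaic, hot_coords):
    """Replace each cataloged hot pixel with the mean of its in-bounds
    same-color neighbors (two sites away in a Bayer mosaic)."""
    out = mosaic.astype(np.float32).copy()
    h, w = out.shape
    for r, c in hot_coords:
        samples = [out[rr, cc]
                   for rr, cc in ((r - 2, c), (r + 2, c), (r, c - 2), (r, c + 2))
                   if 0 <= rr < h and 0 <= cc < w]
        out[r, c] = np.mean(samples)
    return out
```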

The Bayer pattern source is turned into a full color image via a demosaicing algorithm, and there is considerable academic research into finding the optimal solution. In that case I linked to an army research paper; the military has a significant interest in this field given the broad use of Bayer imaging sensors and the need to know that the resulting images/data are the highest fidelity possible (especially given that machine vision systems then analyze that heavily processed output, and with the wrong choices can end up triggering on algorithm detritus and side effects).

The choice of demosaicing algorithm can have a significant impact on the quality of the resulting image. Do you know what algo your device is using?
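For a sense of what the simplest end of that research looks like, here’s a naive bilinear demosaic: a sketch assuming an RGGB layout and scipy, while real pipelines use edge-aware algorithms far better than this:

```python
import numpy as np
from scipy.ndimage import convolve

def demosaic_bilinear(mosaic):
    """Naive bilinear demosaic of an RGGB mosaic: scatter each color's
    samples onto its own plane, then fill the gaps by averaging the
    nearest same-color neighbors via small convolution kernels."""
    h, w = mosaic.shape
    rows = np.arange(h)[:, None] % 2
    cols = np.arange(w)[None, :] % 2
    r_mask = (rows == 0) & (cols == 0)
    g_mask = rows != cols
    b_mask = (rows == 1) & (cols == 1)

    # Classic bilinear kernels: at a sampled site they pass the sample
    # through; at a missing site they average the surrounding samples.
    k_rb = np.array([[1., 2., 1.], [2., 4., 2.], [1., 2., 1.]]) / 4.0
    k_g  = np.array([[0., 1., 0.], [1., 4., 1.], [0., 1., 0.]]) / 4.0

    out = np.zeros((h, w, 3), dtype=np.float32)
    for i, (mask, k) in enumerate(((r_mask, k_rb), (g_mask, k_g), (b_mask, k_rb))):
        plane = np.where(mask, mosaic, 0.0).astype(np.float32)
        out[..., i] = convolve(plane, k, mode='mirror')
    return out
```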

After demosaicing, color corrections are applied (both to move between color spaces and to provide white point corrections), and then the image is de-noised: those fine grainy variations are homogenized (which can yield odd results if the subject itself has a grainy appearance, as the algorithm can’t discern whether variations come from the source or from the sensor).
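A rough sketch of those two steps, with placeholder numbers (real white balance gains and the 3x3 camera-to-sRGB matrix come from the device’s calibration data):

```python
import numpy as np
from scipy.ndimage import median_filter

def correct_and_denoise(rgb, wb_gains=(2.0, 1.0, 1.6), cam_to_srgb=np.eye(3)):
    """Apply per-channel white balance gains, a 3x3 color space matrix,
    then a crude median-filter denoise. Gains/matrix are placeholders."""
    balanced = rgb.astype(np.float32) * np.asarray(wb_gains, dtype=np.float32)
    corrected = np.clip(balanced @ cam_to_srgb.T, 0.0, 1.0)
    return median_filter(corrected, size=(3, 3, 1))  # stands in for real denoising
```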

The resulting image is generally close to perceptually perfect, but an enormous amount of human knowledge and guesswork went into turning some very imperfect source data into a good result. The quality of an image from a digital device is impacted as significantly by the software as by the hardware (many devices have terrible color fringing courtesy of poor demosaicing). This is why many choose to shoot RAW photos, saving those source single-band pixels as-is before corrections are destructively applied. It allows for improvements or alterations of algorithms when the magic mix didn’t work quite right for a given photo.
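If you want to experiment with this yourself, the rawpy library (a Python wrapper around LibRaw) exposes both the untouched mosaic and a choice of demosaicing algorithms. A quick sketch, with an illustrative file name:

```python
import rawpy

with rawpy.imread("photo.dng") as raw:      # any RAW/DNG file
    mosaic = raw.raw_image_visible          # the single-band Bayer data
    rgb = raw.postprocess(
        demosaic_algorithm=rawpy.DemosaicAlgorithm.AHD,  # try others and compare
        use_camera_wb=True,
    )
```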

If you look closely at the results, you start to see the minor compromises necessary to yield a workable output.