Things You Probably Don’t Know About Your Digital Camera

Another peripheral post about imaging, as a recent project brought it to mind. It’s always nice to have broadly interesting, non-confidential material to talk about on here, and while this is well known to many, I’ve discovered through various projects that plenty of people in this field aren’t aware of the magic happening in their devices.

To get right to it.

Your Camera’s Imaging Resolution is Lower Than Advertised

The term “pixel” (picture element) is generally holistic — a discretely controlled illumination of any visible shade of color, whatever the mechanism of achieving that result. Your 1080p television or computer monitor has ~1920 by 1080 pixels, or about 2 million “pixels”. If you look closely, however, your monitor has discrete subpixels for the emissive colors (generally Red-Green-Blue, though there is a display technology that adds yellow as well for an RGBY subpixel arrangement). These might be horizontally oriented, vertical, staggered, in a triangle pattern…in any order.

At appropriate combinations of resolution density and viewing distance, your eyes naturally demosaic, blending the individual colors into a discrete full color representation.

The vast majority of digital cameras, however, use the term pixel in a very different way.

To visually explain, here’s a portion of RAW sensor data taken directly from a Nexus 6p. The only processing applied was source color channel gains and scaling from the original 100 by 63 to a more visible 600 by 378.

car_bayer_closer

In the digital camera world, each of these discrete colors is a whole pixel.

If you inspect the pixels you’ll notice that they aren’t full color (though that is obvious just by eyeballing the image). Each unique location is one of either red, green, or blue, at varying intensities. There are no mixes of the colors.

The imaging sensor has pits that can measure photons, but they have no awareness of wavelength. To facilitate color measurements a physical color filter is overlaid over each pit, alternating between the three colors. This is generally a Bayer color filter.

There is another type of sensor that layers wavelength sensitive silicon (much like the layers of classic film), capturing full color at each site, however it is very rarely used and has its own problems.

Green is most prevalent, comprising 50% of the pixels given that it’s the color band where the human eye is most sensitive to intensity changes and detail. Red alternates with green on one line, while Blue alternates with green on the next.

The functional resolution of detailed color information, particularly in the red and blue channels, is much lower than many believe. This is also why many devices add physical and processing steps (e.g. anti-aliasing filters) that further reduce the usable resolution, blurring away the defects.

The Nexus 6P ostensibly has a 4032 x 3024 imaging resolution, but really, courtesy of the Bayer filter, has a 2016 x 3024 green resolution, a 2016 x 1512 blue resolution, and a 2016 x 1512 red resolution. For fine hue details the resolution can be 1/4 of what you’d expect, which is why fully zoomed-in pictures are often somewhat disappointing (also courtesy of the processing and filtering that try to mask the color channel information deficiencies).
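
To make the arithmetic concrete, here’s a minimal sketch (assuming an RGGB layout and numpy; the exact pattern order varies by sensor) that splits a Bayer mosaic into its per-channel planes:

```python
import numpy as np

def split_bayer_planes(raw):
    """Split a Bayer mosaic (assumed RGGB order) into its color planes.

    raw: 2D numpy array of sensor values, shape (rows, cols), even dimensions.
    """
    return {
        "red":    raw[0::2, 0::2],   # one red sample per 2x2 tile
        "green1": raw[0::2, 1::2],   # green on the red rows
        "green2": raw[1::2, 0::2],   # green on the blue rows
        "blue":   raw[1::2, 1::2],   # one blue sample per 2x2 tile
    }

# For a 4032 x 3024 mosaic, each plane comes out as (1512, 2016) in
# (rows, cols) terms: the 2016 x 1512 red/blue figures quoted above,
# with the two green planes together giving the 2016 x 3024 green figure.
planes = split_bayer_planes(np.zeros((3024, 4032), dtype=np.uint16))
for name, plane in planes.items():
    print(name, plane.shape)
```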

Your Camera’s Imaging Sensor Has Many Defects

Due to defects in silicon, the application of the physical Bayer filter, and electrical gain noise, many of the photo sites on your digital sensor are defective.

Some read nothing, while many more see ghosts, reporting some or significant false readings. Readings of a constant brightness target will vary, sometimes significantly, across pixels (yielding a grainy, noisy output image).

falsereadings

This is a random 150 pixel wide reading from the 6p when taking a 1/10s picture of pure darkness. These defective readings cover the entire capture in varying densities, comprising up to hundreds of false data points. Most are permanent, with new ones often appearing as the device ages, and some defects temporarily worsen when the sensor is warm. Most SLRs have a special mode where they take a full darkness picture and then catalog and remove all hot pixels from the output material. Android also has the notion of remembering hot pixels.

This is the case with every digital sensor, from your smartphone to your high end SLR. I remember being somewhat horrified first looking at a wholly unprocessed RAW image from my SLR, seeing hundreds of fully lit pixels scattered across the image.

Algorithms Save The Day

The solution to all of these problems is processing, but it does have consequences.

Hot pixels are eliminated both through prior knowledge (a hot pixel database for a given sensor), and by simply eliminating pixels that shine a little too bright relative to their neighbors. They get replaced with an interpolated average of those neighbors.
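
As a rough sketch of the second approach (not any vendor’s actual pipeline; the threshold and window size are arbitrary assumptions on my part), flag pixels that stand far above their local median and replace them with it:

```python
import numpy as np
from scipy.ndimage import median_filter

def suppress_hot_pixels(plane, threshold=4.0):
    """Replace outlier pixels with the local median of their neighbors.

    plane: 2D array of same-color samples (work per Bayer plane so that
    neighbors really are neighbors of the same color).
    threshold: how many robust noise sigmas above the local median a
    pixel must sit before it is treated as defective (arbitrary here).
    """
    plane = plane.astype(np.float64)
    local_median = median_filter(plane, size=3)
    residual = plane - local_median
    noise_sigma = np.median(np.abs(residual)) * 1.4826  # robust sigma estimate
    hot = residual > threshold * max(noise_sigma, 1e-6)
    cleaned = plane.copy()
    cleaned[hot] = local_median[hot]  # interpolated stand-in for the hot site
    return cleaned, hot
```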

The Bayer pattern source is turned into a full color image via a demosaicing algorithm, and there is considerable academic research into finding the optimal solution. In that case I linked to an army research paper; the military has a significant interest in this field given the broad use of Bayer imaging sensors and the need to know that the resulting images/data are of the highest possible fidelity (especially given that machine vision systems then analyze that heavily processed output, and with the wrong choices they can end up triggering on algorithm detritus and side effects).

The choice of demosaicing algorithm can have a significant impact on the quality of the resulting image. Do you know what algo your device is using?
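
For illustration, here’s the simplest commonly described approach, bilinear demosaicing, assuming an RGGB mosaic and numpy/scipy (real pipelines use far more sophisticated edge-aware algorithms, which is exactly why the choice matters):

```python
import numpy as np
from scipy.ndimage import convolve

def demosaic_bilinear(raw):
    """Naive bilinear demosaic of an RGGB Bayer mosaic.

    Each output pixel's missing channels are averaged from the nearest
    samples of that color. Fast, but this is the kind of algorithm that
    produces the color fringing mentioned above.
    """
    h, w = raw.shape
    raw = raw.astype(np.float64)

    # Masks marking where each color was actually sampled.
    r_mask = np.zeros((h, w)); r_mask[0::2, 0::2] = 1
    b_mask = np.zeros((h, w)); b_mask[1::2, 1::2] = 1
    g_mask = 1 - r_mask - b_mask

    # Interpolation kernels: green from its 4 orthogonal neighbors,
    # red/blue from adjacent and diagonal samples.
    k_g = np.array([[0, 1, 0], [1, 4, 1], [0, 1, 0]]) / 4.0
    k_rb = np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]]) / 4.0

    g = convolve(raw * g_mask, k_g, mode="mirror")
    r = convolve(raw * r_mask, k_rb, mode="mirror")
    b = convolve(raw * b_mask, k_rb, mode="mirror")
    return np.stack([r, g, b], axis=-1)
```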

After demosaicing, color corrections are applied (both to move between color spaces, and to provide white point corrections), and then the image is de-noised — those fine grainy variations are homogenized (which can yield unique results if the subject itself has a grainy appearance — the algorithm can’t discern whether variations are from the source or from the sensor).

The resulting image is generally close to perceptually perfect, but an enormous amount of human knowledge and guesswork went into turning some very imperfect source data into a good result. The quality of an image from a digital device is impacted as significantly by the software as by the hardware (many devices have terrible color fringing courtesy of poor demosaicing). This is why many choose to shoot RAW photos, saving those source single-band pixels as is before corrections are destructively applied. That allows for improvements or alterations of algorithms when the magic mix didn’t work quite right for a given photo.

If you look closely at the results, you start to see the minor compromises necessary to yield a workable output.

Optical vs Electronic Image Stabilization

The recently unveiled Google Pixel smartphones feature electronic image stabilization (EIS) in lieu of optical image stabilization (OIS), with Google reps offering up some justifications for their choice.

While there is some merit to their arguments, the contention that optical image stabilization is primarily for photos is inaccurate, and is at odds with the many excellent video solutions that feature optical image stabilization, including the competing iPhone 7.

Add to that the observation that video is nothing more than a series of pictures, a seemingly trite point that will become relevant later in this piece.

This post is an attempt to explain stabilization techniques and their merits and detriments.

Why should you listen to me? I have some degree of expertise on this topic. Almost two years ago I created an app for Android1 that featured gyroscope-driven image stabilization, with advanced perspective correction and rolling shutter compensation. It offers sensor-driven Electronic Image Stabilization for any Android device (with Android 4.4+ and a gyroscope).

It was long the only app that did this for Android (and to my knowledge remains the only third-party app to do it). Subsequent releases of Android included extremely rudimentary EIS functionality in the system camera. Now with the Google Pixel, Google has purportedly upgraded the hardware, paid attention to the necessity of reliable timing, and offered a limited version for that device.

They explain why they could accomplish it with some hardware upgrades-

“We have a physical wire between the camera module and the hub for the accelerometer, the gyro, and the motion sensors,” Knight explains. “That makes it possible to have very accurate time-sensing and synchronization between the camera and the gyro.”

We’re talking about gyroscope data arriving 200 times per second, and frames of video arriving around 30 times per second. The timing sensitivity is in the 5ms range (meaning the event sources need to be timestamped within that margin of accuracy). This is a trivial timing need and should require no special hardware upgrades. The iPhone has had rock solid gyroscope timing information going back many generations, along with rock solid image frame timing. It simply wasn’t a need for Google, so poorly designed, unreliable timing became the foundation of sensor data on Android (and let me be clear that I’m very pro-Android. I’m pro all non-murdery technology, really. This isn’t advocacy or flag-waving for some alternative: it’s just an irritation that something so simple became so troublesome and wasted so much of my time).

Everyone is getting in on the EIS game now, including Sony, one of the pioneers of OIS, with several of their new smartphones, and even GoPro with their latest device (their demos again under the blazing midday sun, and still unimpressive). EIS lets you use a cheaper, thinner, less complex imaging module with fewer moving parts, meaning better yields and better reliability over time (speaking of which, I’ve had two SLR camera bodies go bad because the stabilized sensor system broke in some way).

A number of post-processing options have also appeared (e.g. using only frame v frame evaluations to determine movement and perspective), including Microsoft’s stabilization solution, and the optional solution built right into YouTube.

There are some great white papers covering the topic of stabilization.

Let’s get to stabilization techniques and how EIS compares with OIS.

With optical image stabilization, a gyro sensor package is coupled with the imaging sensor. Some solutions couple this with some electromagnets to move the lens, other solutions move the sensor array, while the best option (there are optical consequences of moving the lens or sensor individually, limiting the magnitude before there are negative optical effects) moves the entire lens+sensor assembly (frequently called “module tilt”), as if it were on a limited range gimbal. And there are actual gimbals2 that can hold your imaging device and stabilize it via gyroscope directed motors.

A 2-axis OIS solution corrects for minor movements of tilt or yaw — e.g. tilting the lens down or up, or tilting to the sides — the Nexus 5 came with 2-axis stabilization, although it was never well used by the system software, and later updates seem to have simply disabled it altogether.

More advanced solutions add rotation (roll), which is twisting the camera, upping it to a 3-axis solution. The pinnacle is 5-axis, which also incorporates accelerometer readings and compensates for minor movements left or right, up and down.

EIS also comes in software 2-, 3- and 5-axis varieties: Correlate the necessary sensor readings with the captured frames and correct accordingly. My app is 3-axis (adding the lateral movements was too unreliable across devices, not to mention that while rotational movements could be very accurately corrected and perspective adjusted, the perspective change of lateral movements is a non-trivial consideration, and most implementations are naive).

With an OIS solution the module is trying to fix on a static orientation so long as it concludes that any movement is unintended and variations fall within its range of movement. As you’re walking and pointing down the street, the various movements are cancelled out as the lens or sensor or entire module does corrective counter-movements. Built-in modules have a limited degree of correction — one to two degrees in most cases, so you still have to be somewhat steady, but it can make captures look like you’re operating a steadicam.

An OIS solution does not need to crop to apply corrections, and doesn’t need to maintain any sort of boundary buffer area. The downside, however, is that the OIS system is trying to guess, in real time, the intentions of movements, and will often initially cancel out the onset of intentional movements: As you start to pan the OIS system will often counteract the motion, and then rapidly try to “catch up” and move back to the center sweet spot where it has the maximum range of motion for the new orientation.

The imaging sensor in OIS solutions is largely looking at a static scene, mechanically kept aligned.

With an EIS solution, in contrast, the sensor is fixed, and is moving as the user moves. As the user vibrates around and sways back and forth, and so on, that is what the camera is capturing. The stabilization is then applied either in real-time, or as a post-processing step.

A real-time EIS system often maintains a fixed cropping to maintain a buffer area (e.g. only a portion of the frame is recorded, allowing the active capture area to move around within the buffer area without changing digital zoom levels), and as with OIS solution it predictively tries to infer the intentions of movements. From the demo video Google gave, their system is real-time (or with a minimal number of buffer frames), yielding the displeasing shifts as it adjusts from being fixed on one orientation to transitioning to the next fixed orientation (presumably as range of movement started to push against the edge of the buffer area), rather than smoothly panning between.

A sensor-driven post-processing EIS system, which is what Gallus is, captures the original recording as is, correlating the necessary sensor data. Using attributes of the device (focal length, sensor size, field of view, etc.), it can then evaluate the motion in post-processing with knowledge of the entire sequence, using low-pass filters and other smoothing techniques to fit a movement spline within the set variability allowance.
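
Here’s a toy single-axis sketch of that idea (not Gallus’s actual implementation; the exponential filter and cutoff frequency are illustrative assumptions): integrate the gyro into an orientation path, low-pass it to estimate the intended motion, and take the difference as the per-frame correction.

```python
import numpy as np

def smooth_camera_path(gyro_rates, gyro_dt, frame_times, cutoff_hz=0.5):
    """Toy 1-axis sensor-driven EIS smoothing.

    gyro_rates: angular velocity samples in rad/s (e.g. ~200 Hz).
    gyro_dt: seconds between gyro samples.
    frame_times: capture timestamp of each video frame, in seconds.
    Returns the correction angle (rad) to apply to each frame.
    """
    # Integrate the gyro to get the actual camera angle over time.
    t = np.arange(len(gyro_rates)) * gyro_dt
    angle = np.cumsum(gyro_rates) * gyro_dt

    # Low-pass filter (a simple exponential moving average here) to get
    # the smooth path the user presumably intended.
    alpha = 1.0 - np.exp(-2.0 * np.pi * cutoff_hz * gyro_dt)
    smooth = np.empty_like(angle)
    acc = angle[0]
    for i, a in enumerate(angle):
        acc += alpha * (a - acc)
        smooth[i] = acc

    # Correction per frame = smoothed angle minus actual angle at that
    # frame's timestamp; the renderer then rotates/warps the frame by
    # this amount, within the crop buffer.
    actual_at_frames = np.interp(frame_times, t, angle)
    smooth_at_frames = np.interp(frame_times, t, smooth)
    return smooth_at_frames - actual_at_frames
```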

Let’s start with an illustrative sample. Moments before writing this entry, the sun beginning to set on what was already a dreary, dim day, I took a walk in the back of the yard with my app, shooting a 4K video on my Nexus 6p. Here it is (it was originally recorded in h265 and was transcoded to h264, and then YouTube did its own re-encoding, so some quality was lost) –

This is no “noon in San Francisco” or “ride on the ferry” sort of demo. It’s terrible light, subjects are close to the camera (and thus have a high rate of relative motion in frame during movements) and the motions are erratic and extreme, though I was actually trying to be stable.

Here’s what Gallus — an example of sensor-driven post-processing EIS — yielded when it stabilized the result.

I included some of the Gallus instrumentation for my own edification. Having that sort of informational overlay on a video is an interesting complication because it conflicts with systems that do frame-v-frame stabilization.

Next up is YouTube and their frame-v-frame stabilization.

YouTube did a pretty good job, outside of the random zooming and jello effect that appears in various segments.

But ultimately this is not intending to be a positive example of Gallus. Quite the opposite, I’m demonstrating exactly what is wrong with EIS: Where it fails, and why you should be very wary of demonstrations that always occur under perfect conditions. And this is a problem that is common across all EIS solutions.

A video is a series of photos. While some degree of motion blur in a video is desirable when presented as is, with all of the original movements, as humans we have become accustomed to blur correlating with motion — a subject is moving, or the frame is moving. We filter it out. You likely didn’t notice that a significant percentage of the frames were blurry messes (pause at random frames) in the original video, courtesy of the lower-light induced longer shutter times mixed with device movements.

Stabilize that video, however, and motion blur of a stabilized frame3 is very off-putting. Which is exactly what is happening above: Gallus is stabilizing the frame perfectly, but many of the frames it is stabilizing were captured during rapid motion, the entire frame marred with significant motion blur.

Frames blurred by imaging device movement are fine when presented in their original form, but are terrible when the motion is removed.

This is the significant downside of EIS relative to OIS. Where individual OIS frames are usually ideal under even challenging conditions, such as the fading sun of a dreary fall day, captured with the stability of individual photos, EIS is often working with seriously compromised source material.

Google added some image processing to make up for the lack of OIS for individual photos — taking a sequence of very short shutter time photos in low light, minimizing photographer motion, and then trying to tease out a usable image from the noise — but this isn’t possible when shooting video.

An EIS system could try to avoid this problem by using very short exposure times (which itself yields a displeasing strobe light effect) and wide apertures or higher, noisier ISOs, but ultimately it is simply a compromise. To yield a usable result other elements of image capture had to be sacrificed.

“But the Pixel surely does it better than your little app!” you confidently announce (though they’re doing exactly the same process), sitting on your Pixel order and hoping that advocacy will change reality. As someone who has more skin in this game than anyone heralding whatever their favorite device happens to have, I will guarantee you that the EIS in the Pixel will be mediocre to unusable in challenging conditions (though the camera will naturally be better than the Nexus 6p’s, each iteration generally improving upon the last, and is most certainly spectacular in good conditions).

Here’s a review of the iPhone 7 (shot with an iPhone 7), and I’ll draw your attention to the ~1:18 mark — as they walk with the iPhone 7, the frame is largely static with little movement, and is clear and very usable courtesy of OIS (Apple combines minimal electronic stabilization with OIS, but ultimately the question is the probability that static elements of the scene are capturing the majority of a frame’s shutter time on a fixed set of image sensors, and OIS vastly improves the odds). As they pan left, pause and view those frames. Naturally, given the low light, with significant relative movement of the scene it’s a blurry mess. On the Pixel every frame will be like this under that sort of situation presuming the absence of inhuman stability or an external stabilizer.

I’m not trying to pick specifically on the Pixel, and it otherwise looks like a fantastic device (and would be my natural next device, having gone through most Nexus devices back to the Nexus One, which replaced my HTC Magic/HTC Dream duo), but in advocating their “an okay compromise in some situations” solution, they went a little too far with the bombast. Claiming that OIS is just for photos is absurd in the way they intended it, though perhaps it is true if you consider a video a series of photos, as I observed at the outset.

A good OIS solution is vastly superior to the best possible EIS solution. There is no debate about this. EIS is the cut-rate, discount, make-the-best-of-a-bad-situation compromise. That the Pixel lacks OIS might be fine on the streets of San Francisco at noon, but it’s going to be a serious impediment during that Halloween walk, in Times Square at night, or virtually anywhere else where the conditions aren’t ideal.

The bar for video capture has been raised. Even for single frame photography any test that uses static positioning is invalid at the outset: It doesn’t matter if the lens and sensor yield perfect contrast and color if it’s only in the artificial scenario where the camera and subject are both mounted and perfectly still, when in the real world the camera will always be swaying around and vibrating in someone’s hands.

Subjects moving in frame of course will yield similar motion blur on both solutions, but that tends to be a much smaller problem in real world video imaging, and tends to occur at much smaller magnitudes. When you’re swaying back and forth with a fixed, non-OIS sensor, the entire frame is moving across differing capture pixels at a high rate of speed, versus a small subject doing a usually small in frame motion. They are a vastly different scale of problem.

The days of shaky cam action are fast fading, and the blurry cam surrogates are the hangers-on. The best option is a stabilized rig (but seriously). Next up is 5-axis optical image stabilization, and then its 3-axis cousin. Far behind is sensor-driven EIS. In last place are post-processing frame-versus-frame comparison options (they often falter in complex scenes, but will always be demoed with a far-off horizon line in perfect conditions, with gentle, low frequency movements).

Often on-camera OIS will be augmented with minimal EIS — usually during transition periods when OIS predicted the future wrong (to attempt to resolve the rapid catch-up), and also to deal with rolling shutter distortion.

To explain rolling shutter distortion, each line of the CMOS sensor is captured and read individually and sequentially, so during heavy movement the frame can skew because the bottom of the scene was pulled from the sensor as much as 25ms after the beginning line of the scene (as you pan down things compress to be smaller, grow when panning up, and skew left and right during side movements). So during those rapid transition periods the camera may post process to do some gentle de-skewing, with a small amount of overflow capture resolution to provide the correction pixels. Rolling shutter distortion is another interesting effect because it’s a pretty significant problem with every CMOS device, but it didn’t become obvious until people started stabilizing frames.
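
As a rough sketch of that geometry (a toy model assuming a constant readout time and a single pan rate for the whole frame; a real correction warps each row from the gyro trace), the horizontal displacement of a given row is roughly:

```python
def rolling_shutter_row_shift(row, total_rows, readout_s, pan_rate_rad_s, focal_px):
    """Approximate horizontal pixel shift of one sensor row caused by a
    horizontal pan during a rolling-shutter readout.

    readout_s: time to read the full frame top to bottom (e.g. ~0.025 s).
    pan_rate_rad_s: angular rate of the pan while this frame was captured.
    focal_px: focal length expressed in pixels (focal_mm / pixel_pitch_mm).
    """
    row_time = (row / float(total_rows)) * readout_s  # when this row was sampled
    extra_angle = pan_rate_rad_s * row_time           # how far the camera turned by then
    return focal_px * extra_angle                     # small-angle approximation

# De-skewing shifts each row back by this amount, using a bit of spare
# capture resolution around the edges to fill in the exposed gaps.
print(rolling_shutter_row_shift(row=3023, total_rows=3024,
                                readout_s=0.025, pan_rate_rad_s=0.5,
                                focal_px=3000))
```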

And to digress for a moment, the heroes of an enormous amount of technology progress in the past ten years are the simple, unheralded MEMS gyroscopes. These are incredible devices, driven by a fascinating principle (vibrating structures reacting to the Coriolis effect), and they’re the foundation of enormous technology shifts. They’re a very recent innovation as well. Years ago it was stabilized tank turrets that had this technology (okay, some murdery technology is pretty cool), courtesy of giant, expensive mechanical gyroscopes. Now we have cheap little bike-mount gimbals doing the same thing.

For the curious, here’s the unzoomed original corrections as applied by Gallus. It was a fun, technically challenging project, despite the frustrations and sleepless nights. It began with some whiteboard considerations of optics and how they work, then field of view, sensor sizes, offset calculations, and so on.

1 – Developing Gallus presented unnecessary challenges. Android lacks reliable event timing, though that has improved somewhat in recent iterations. A lot of the necessary imaging metadata simply didn’t exist, because Google had no need for it (and this is a problem when sharecropping on a platform). As they got new interests, new functionality would appear that would expose a little more of the basic underlying hardware functionality (versus starting with a logical analysis of what such a system would consist of and designing a decent foundation). The whole camera subsystem is poorly designed and shockingly fragile. The number of Android handsets with terrible hardware is so high that building such an advanced, hardware-coupled application is an exercise in extreme frustration.

And given some cynical feedback, note that this post is not a plea for people to use that app (this is not a subtle ad), though that should be obvious given that I start by telling you that the very foundation of EIS is flawed. I go through occasional spurts of updating the app (occasionally breaking it in the process), and having users bitching because an update notice inconvenienced their day kind of turned me off of the whole “want lots of users” thing, at least for an app as “edge of the possible” as Gallus.

2 – While this post was ostensibly about the Pixel EIS claims, I was motivated to actually write it after seeing many of the comments on this video. That bike run, shot with a “Z1-ZRider2” actively stabilized gimbal (not a pitch for it — there are many that are largely identical) is beautifully smoothed, so it’s interesting to see all of the speculation about it being smoothed in post (e.g. via some frame-v-frame solution). Not a chance. If it was shot unstabilized or with EIS (which is, for the purpose of discussion, unstabilized) it would have been a disastrous mess of blurred foliage and massive cropping, for the reasons discussed, even under the moderate sun. Already objects moving closer to the lens (and in a frame relative sense faster) are heavily blurred, but the entirety of the scene would have that blur or worse minus mechanical stabilization.

There is an exaggerated sense of faith in what is possible with post-process smoothing. Garbage in = garbage out.

3 – One of my next technical challenge projects relates to this. Have a lot of cash and want to fund some innovation? Contact me.

Bokeh and Your Smartphone – Why It’s Tough To Achieve Shallow Depths of Field

I’m going outside of the normal dogma today to focus on a field that I am a bit of a hobbyist in: I’m sort of a photography nerd, primarily interested in the fascinating balance-of-compromises that are optics, so I’m in my domain a bit with this piece on photography and depth of field. There are many great articles on depth of field out there, but in this I’m primarily focused on the depth of field issue of smartphones, and the often futile quest for bokeh.


Bokeh is the photography effect where background lights and details are diffuse and out of focus, as seen in these Flickr photos. Often it’s contrasted against a sharply focused foreground subject, providing an aesthetically pleasing, non-distracting backdrop.

To photographers this has always been called a shallow depth of field: with a large aperture and a longer focal length it is the normal technique to isolate a subject from the background, and is a mainstay of photography. The term “bokeh” took root among many courtesy of a late-90s photography how-to magazine article, so you’ll come across it frequently. Some purists hold that it refers specifically to the character of blurred light patterns, while in general parlance it just means “out of focus background”.

The best known mechanisms to control the depth of field on a given piece of imaging hardware are the aperture (a.k.a. the f-stop) and the distance to the subject (the closer to the lens, the shallower the depth of field), with each new device seemingly offering a wider aperture to enhance the options.

The Nexus 6p has an f/2 capable camera. The iPhone 6S offers f/2.2 29mm equivalent, and the new iPhone 7 pushes new boundaries with an f/1.8 lens (28mm equivalent, with a second 56mm equivalent on the 7+).

The ultimate portrait lens in the 35mm world is an 85mm f/1.4 lens.

On a traditional SLR-type camera, f/1.8 is a very wide aperture (aside: a “small” aperture has a larger number, e.g. f/22, while a wide or “large” aperture has a smaller number, such as f/1.8). If the scene isn’t too bright, or you have some neutral density filters handy, it is gravy for a dish full of bokeh.

By now you’ve probably learned that it’s really hard to achieve a shallow depth of field with your smartphone unless the subject is unreasonably close to the device (e.g. fish-eye distortion of someone’s face), despite those seemingly wide apertures: most everything is always in focus. So while the once endemic problem of slightly out-of-focus shots is largely gone (being sort of close in focus is often good enough), the cool effects are tough to achieve. Instead, blurry smartphone photography is primarily caused by too low a shutter speed coupled with moving subjects or a shaky photographer.

But why is that? Why does your f/2 smartphone yield such massive depths of field, making bokeh so difficult? Why isn’t f/2 = f/2? If you’re coming from the SLR world and install some great manual control photography app on your smartphone, you likely found yourself disappointed that your f/2 smartphone isn’t delivering what you’re accustomed to elsewhere.

Because of the f in f/2. While it is often treated like an abstract value holder, it literally means “focal length / 2”.

And the focal length on smartphones is why you are separated from your bokeh dreams. While my Nexus 6p has a 28mm equivalent (compared to the 35mm camera benchmark) focal length, it’s actually a 4.67mm focal length. Courtesy of the physics of depth of field, its focal length means an f/2 on this device is equivalent to about an f/10 DoF on a 35mm lens when the subject is at the same distance from the lens. The iPhone 6 has a focal length of 4.15mm, while the iPhone 7 offers up lenses of apparently 3.99mm and 7.7mm.

This is easy enough to prove. Here’s an f/2 photo on my Nexus 6p. The subject is about 30cm from the lens.

2016-09-15_11-45-51

Now here’s approximately the same scene, at the same distance, with a zoom lens set at ~28mm on a Canon T2i (approximating the zoom level of the Nexus 6p fixed focal length), the aperture set to f/10.

Canon T2i f/10 @ ~28mm

While each device has its own post-processing (the T2i in this case is set to neutral, while the 6p, like most smartphones, is fairly heavy handed with contrast and saturation), if anything the SLR features as much or more blurring, despite a significantly smaller aperture.

This is the impact of the focal length on the depth of field. Here is the same subject shot from the same distance, the zoom lens set to 55mm, the aperture still at f/10. The depth of field collapses further (it isn’t just a crop of the above picture; the DoF genuinely shrinks).

Canon T2i @ 55mm f/10

And for comparison here it is at f/5.6-

T2i - 55mm f/36

So why is this?

First let’s talk about something called the circle of confusion (CoC) to get it out of the way as a parameter of the calculations that follow. In this discussion the CoC is the amount that a “focused” photon can stray outside of the idealized target before it leads to blur for a given sensor. There are many, many tables of static CoC values, and a lot are very subjective measures (e.g. “if you print this as an 8×10 photo and view it from 3 feet away, what amount of blur is indiscernible?”). For my calculations I am taking the CoC as 2x the pixel stride of the target sensor (via Nyquist sampling considerations), but you can use a table or your own mix of crazy as the CoC. I leave that open.

The Nexus 6p has a sensor that is 6.324mm wide, containing 4080 pixels per line (not all pixels are active, so this was measured via the SDK). That gives a pixel stride of 0.00155mm, and doubling that we get 0.0031mm. That is the CoC I’m using for the Nexus 6p.
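
The arithmetic, as a quick sketch:

```python
sensor_width_mm = 6.324        # Nexus 6p active sensor width (from above)
pixels_per_line = 4080         # measured via the SDK, per the text

pixel_stride_mm = sensor_width_mm / pixels_per_line  # ~0.00155 mm
coc_mm = 2 * pixel_stride_mm                         # ~0.0031 mm (2x stride, Nyquist-style choice)
print(pixel_stride_mm, coc_mm)
```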

We know the focal length (4.67mm), and we know the desired CoC (0.0031mm), so let’s calculate something called the hyperfocal distance.

The hyperfocal distance is the focus distance where everything to infinity, and to approximately 1/2 the focus distance, will also be in effectively perfect focus for a given aperture. It is a very important number when calculating DoF, and the further the hyperfocal distance, the shallower the DoF will be for closer subjects.

 

Inputs: focal length (mm), CoC (mm), f-number. Output: hyperfocal distance (mm).

For these parameters the hyperfocal distance works out to about 3.5 meters (see the sketch below), and if you change the f-stop, the focal length, or the CoC, it changes accordingly. What that means is that if a focused subject is that distance from the lens, at those settings, the furthest distances (the mountains on the horizon, the stars in the sky, etc) will still be completely in focus, as will everything beyond roughly half the focus distance. It is the hyper-of-focuses, and is a critical number for landscape photographers.
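
Here’s a minimal Python stand-in for that calculator, using the standard hyperfocal formula H = f²/(N·c) + f:

```python
def hyperfocal_mm(focal_mm, f_number, coc_mm):
    """Hyperfocal distance in mm: H = f^2 / (N * c) + f."""
    return focal_mm ** 2 / (f_number * coc_mm) + focal_mm

# Nexus 6p numbers from the text: 4.67 mm lens, f/2, CoC ~0.0031 mm.
print(hyperfocal_mm(4.67, 2.0, 0.0031) / 1000.0)  # ~3.5 meters
```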

Focusing beyond the hyperfocal distance does nothing to improve distant focus for this CoC; it simply defocuses closer objects. Once again I have to note that the CoC is not a fixed constant, and if you had a sensor with 4x the pixels, the CoC by my method would halve and the focus would need to be more precise. Others would argue, with good reason, that the CoC should be a percentage of the total sensor span so that the same perceived effect is seen across devices, while my measure aims at technical perfection for a given device.

The hyperfocal distance is the basis of the calculations that allow us to calculate the near and far of the depth of field. Let’s calculate the DoF for a given focus distance. Note that these values are in millimeters, as most things are in the photography world (so instead of 10 feet, enter 3048).

Inputs: subject distance (mm), plus the parameters above. Outputs: near and far depth of field limits (mm).

Beyond the near and far limits of the depth of field, the defocus of course keeps increasing with distance.

If you entered a focal length of 4.67, a CoC of 0.0031, an f-setting of 2.0, and a subject distance of 300 (30cm — the distance in the above picture), the near and far depth of field would calculate to about 276.454 mm to 327.931 mm, meaning everything within that distance from the camera should be focused perfectly on the device, and the further out of those ranges the more defocus is evident. Altering those values for the SLR, with a focal length of 28, a CoC of 0.0086 (the SLR has a much larger sensor), and an f-setting of 10.0, with the same subject distance of 300, yields a smaller depth of field of 290mm to 310mm. A significantly smaller aperture, yet an increased amount of bokeh at a given distance.
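
Here’s a minimal Python stand-in for the depth of field calculator that reproduces both of those examples. It uses the common simplified approximations near ≈ H·s/(H+s) and far ≈ H·s/(H−s); that choice is an assumption on my part, but it matches the figures quoted above.

```python
def hyperfocal_mm(focal_mm, f_number, coc_mm):
    """Hyperfocal distance in mm: H = f^2 / (N * c) + f."""
    return focal_mm ** 2 / (f_number * coc_mm) + focal_mm

def depth_of_field_mm(focal_mm, f_number, coc_mm, subject_mm):
    """Near and far limits of acceptable focus, via the common
    hyperfocal approximations near = H*s/(H+s), far = H*s/(H-s)."""
    h = hyperfocal_mm(focal_mm, f_number, coc_mm)
    near = h * subject_mm / (h + subject_mm)
    far = h * subject_mm / (h - subject_mm) if subject_mm < h else float("inf")
    return near, far

# Nexus 6p at f/2, subject at 300 mm: roughly (276.5, 327.9).
print(depth_of_field_mm(4.67, 2.0, 0.0031, 300))
# Canon T2i at 28 mm, f/10, CoC 0.0086: roughly (290.5, 310.2).
print(depth_of_field_mm(28.0, 10.0, 0.0086, 300))
```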

All f-stops are not created equal, which is why Apple is artificially simulating bokeh on their newest device (as have other vendors). Your f/1.8 smartphone might provide light advantages, but don’t expect the traditional depth of field flexibility. On the upside, this is the reason why almost all smartphone photography is sharp and in focus.

I love using my smartphone to take photos (or stabilized videos): It is the camera that is always with me, and an enormous percentage of the shots turn out great. When I’m looking for defocus effects I reach for the SLR, however.