Bill Belichick And The Uselessness of Unreliable Systems

During a routine press call, Bill Belichick, head coach of the New England Patriots (an NFL American football team), stated:

As you probably noticed, I’m done with the tablets. I’ve given them as much time as I can give them. They’re just too undependable for me. I’m going to stick with pictures as several of our other coaches do as well because there just isn’t enough consistency in the performance of the tablets, so I just can’t take it anymore

Given that Microsoft paid the NFL $400M to use its Surface hardware, this has become quite a debacle as everyone points and laughs at the Microsoft Surface.

Yet read the rest of Bill’s statement (it’s the final question and answer, and irritatingly has been called a “rant”, a “diatribe”, etc, as our ADD society trends towards being incapable of anything more than a tweet of content #tabletSoBad #paperBetter).

Any rational analysis of his statement yields the conclusion that it isn't the tablet, but rather the whole system that's at fault: Cameras feed screen-grab encoders that populate servers that host plays that are accessed by tablets, all with seconds of leeway. If anything in the stack fails, or has congestion or connectivity problems, it's a critical issue.

The app they currently use is nothing more than a slideshow viewer, so unless Microsoft Consulting was extraordinarily incompetent, if the dependencies are there (reliable wireless, the image server on the other side, etc), any tablet could do this while hardly leaving sleep mode.

Countless pieces can fail, most obviously the enormous risks of wireless technology in a hyper-congested location, and NFL teams have had serious wireless issues for years. Add production deploys of an entire hardware and software stack literally hours before the big event.

This is a recipe for technical failure. This is technical nightmare material.

And to counter another common narrative — Bill is the smartest coach in football and seemingly a pretty clever guy all around, so any retort that relies upon criticizing his technical skills, especially given that we're talking about an image gallery viewer, rings hollow. A common anecdote is about Bill not setting the clock in his Toyota, which to me means "guy has more important things in life than figuring out where the implementation team at Toyota decided to put the clock settings" (which was the same reason most VCRs blinked 12:00 — it just didn't matter enough for people to waste even minutes of their life on it).

NFL coaches need information immediately, and if a hardwired printer and legacy system can give it and a tablet can't, the tablet loses. In our field we often have the hubris to think the means justify the ends — it's technology, and it's new and cool and fresh, so if it doesn't work, too bad, you just need to deal, Luddite.

But they don't have to just deal. Users can reject your system, which is what Bill did. Go back and make a better solution (Microsoft's reply, as an aside, is about the worst possible response, a cliché of the lame retorts offered when users have issues with bad platforms).

This whole debacle is courtesy of the fact that the NFL seriously restricts technology on the sidelines or in the booth, yielding the whole cloak and dagger last minute availability of locked-down, limited use hardware and accessories. In a future variant of the game I expect teams will have their own information systems, expert and statistical engine analysis, etc. It’ll get there.

Everything You Read About Databases Is Obsolete

Six and a half years ago I wrote (in a piece about NoSQL) –

Optimizing against slow seek times is an activity that is quickly going to be a negative return activity.

This time has long since passed, yet much of the dogma of the industry remains the same as it was back when our storage tier consisted of 100 IO/second magnetic drives. Many of our solutions still have query engines absolutely crippled by this assumption (including pgsql. mssql has improved, but for years using it on fast storage was an exercise in waste as it endlessly made the wrong assumptions around conserving IOs).

There are now TB+ consumer storage solutions with 300,000 IO/second (3000x the classic magnetic drive, while also offering sequential rates above 3.5GB/s…yes, big G) for under $600. There are enterprise solutions serving 10,000,000 IOPS.

That's if your solution even needs to touch the storage tier. Memory is so inexpensive now, even on shared hosting like AWS, that all but the largest databases sit resident in memory much of the time. My smartphone has 3GB, and could competently host the hot area of 99.5%+ of operational databases in memory.

For self-hosted hardware, TBs of memory is now economical for a small business, while tens of GBs is operationally inexpensive on shared hosting.

I totally made up that 99.5% stat, but it's amazing how relatively tiny the overwhelming bulk of databases I encounter in my professional life are, yet how much people fret over them.
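To make the scale concrete, here's a trivial back-of-envelope sketch in Java. Every figure in it is invented for illustration (a 5GB hot set, a million random page reads, commodity IOPS numbers); swap in your own, the orders of magnitude are the point:

```java
// Back-of-envelope: a million random page reads on yesterday's storage versus
// today's, and whether the hot set needs the storage tier at all.
// Every figure here is invented for illustration.
public final class StorageEnvelope {
    public static void main(String[] args) {
        final long randomReads   = 1_000_000;  // random page reads in some workload
        final double hddIops     = 100;        // classic 7200rpm magnetic drive
        final double nvmeIops    = 300_000;    // commodity NVMe SSD
        final double hotSetGb    = 5;          // "hot" portion of the database
        final double serverRamGb = 64;         // an unremarkable server

        System.out.printf("Magnetic drive: %.0f seconds of pure seek time%n",
                randomReads / hddIops);        // 10,000 s, nearly three hours
        System.out.printf("NVMe SSD:       %.1f seconds%n",
                randomReads / nvmeIops);       // ~3.3 s
        System.out.printf("Hot set fits in RAM %.0fx over%n",
                serverRamGb / hotSetGb);       // reads may never touch storage at all
    }
}
```

At 100 IO/second a seek-bound plan is measured in hours; at NVMe rates it's seconds, and if the hot set is resident in RAM the storage tier barely enters the picture.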

Obviously writes still have to write through, yet when you remove the self-defeating tactic of trying to pre-optimize by minimizing read IO — eliminating denormalization, combined-storage (e.g. document-oriented), materializations and trigger renderings/precomputing, excessive indexes, etc — in most cases writes narrow to a tiny trickle¹.

When writes reduce, not only does practical write performance increase (given that you're writing much less per transaction, the beneficial yield increases), but the effectiveness of memory and tier caches increases as the hot area shrinks, and very high performance storage options become more attainable (it's a lot more economical buying a 60GB high performance, reliable and redundant storage tier than a 6TB system; as you scale up data volumes, performance is often sacrificed for raw capacity).

Backups shrink and systems become much more manageable. It’s easy to stream a replica across a low grade connection when the churn of change is minimized. It’s easy to keep backups validated and up to date when they’re GBs instead of TBs.

Normalize. Don't be afraid of seeks. Avoid the common pre-optimizations that are on the whole destructive to virtually every dimension of a solution on modern hardware (destroying write performance, long-term read performance, economics, maintainability, reliability). Validate assumptions.

Because almost everything written about databases, and consequently much of what you read, is perilously outdated. This post was inspired by seeing another database best-practices guideline make the rounds, most of its suggestions circling the very dated notion that every effort should be made to reduce IOs, the net result being an obsolete, overwrought solution out of the gate.

¹ – One of the most trying aspects of writing technical blog entries is people who counter with edge cases to justify positions: Yes, Walmart has a lot of writes. So do Amazon, CERN, and Google. The NY Taxi Commission logs loads of data, serving a metro area of tens of millions.

There are many extremely large databases with very specialized needs. They don’t legitimize the choices you make, and they shouldn’t drive your technical needs.

The Dead End of The Social Pyramid Scheme

This is a random observational post while I take a break from real work (you’ll hear about that in a big way shortly). I’m revisiting a topic that I touched upon before, and ultimately this is really just a lazy rewriting of that piece.

A few days ago I saw a new commercial for Toronto’s SickKids hospital.

The commercial is powerful.

“This is new and fresh and important, so I’ll share it with the people I know on Facebook”, I thought.

It isn’t original content, obviously, but I thought it was something they’d find interesting.

So I shared it. Seconds later I deleted the post.

I don’t post on Facebook (or Google+, or Twitter) outside of the rare photo of the kids limited to family. By deleting I was returning to my norm.

Most of the people among my contacts have trended toward the same behavior, with only a small handful of habitual social feeders remaining. Most now use Facebook for discussion groups and as a feed aggregator: if a site (e.g. Anandtech) shares on Facebook, I just rely upon it appearing in my feed rather than visiting their site. It's also a great feed for game day news.

Individual sharing is trending way down on Facebook. Many other sites are showing the same trend.

We have like, share, and retweet fatigue. It sits there as a little judgy footer on every post, each reaction carefully meted out and considered, a social obligation both on our own posts and on the posts of our friends and family.

So if I post something and it sits un-liked, should I be offended? Should I fish for likes, building a social crew? If my niece posts something interesting, should I like it or is that weird that I’m her uncle liking her post? If a former female coworker posts an interesting recipe, should I like it or is that going to be perceived as an advance?

If I get a pity like from a relative, should I reciprocate?

Some will dismiss this as overthinking, but what I'm describing above is exactly what this service, and every one like it, is designed to demand as your response. It is the gamification of users, used masterfully, and the premise is that if you make the social counts front and center, it obligates users towards building those numbers up. Some shared blog platforms are now plying this tactic to entice users to become essentially door-to-door pitchmen drawing people to the platform (as they sharecrop on someone else's land, repeating a foolish mistake we learned not to make well over a decade ago), lest their blog posts get deranked. People aren't pitching Avon or Amway now, they're trying to get you to help them build a foundation for their Medium blog or Pinterest board or Facebook business group or LinkedIn profile or…

Sometimes it works for a while as a sort of social pyramid scheme. Eventually the base starts to stagnate, and the "incentives" lose their luster, if not rusting into a disincentive for newer or more casual users. If it isn't carefully managed, the new users will cast the old guard as obsolete and irrelevant.

I made a Soundcloud account purely to access a personal audio recording across multiple devices, so why do I keep getting notifications of spammy followers, all of whom sit front and center on my profile, unwanted? I don't want followers or hearts or likes or shares.

Let me qualify that statement a bit: I love when readers think that these blog posts are interesting enough to share on various venues, growing the circle of exposure. That happens organically when readers think content is worthwhile, and it's very cool. But that is something that the reader owns, and it doesn't sit as a social signal of relevance on this page: there are no social toolbars or tags on this post trying to act as social proof that this is worth reading, beyond that most of you have read these missives for a while and I assume find some value in them.

Users should absolutely have these curating inputs (training the system on the things that they like and dislike), and the feed should of course adapt to the things the user actually enjoys seeing: If zero users find anything interesting that I post, zero people should see it. But by making it a public statement it becomes much more than that, losing its purpose and carrying a significant social obligation and stigma that is unwanted.

Virtually every social site follows the same curve as we all dig the social well, and when it runs dry we simply chase the next experience. Facebook has done well by pivoting the use of the service, but other services (Flickr, Twitter, and others) that attempted the same strategy peaked and then hit a period of stark decline: if someone with fewer than 100 Twitter followers is perceived as "angry and disenfranchised", new users find more benefit in simply waiting out this generation, or in moving to something new — a sort of ground zero where everyone goes back to the beginning again — than in trying to gain some namespace among established users.

Back in the early days of the Internet, MUDs (definition varies) saw the same curve. Each instance would start as a fresh greenfield full of opportunity and excitement. As the initial euphoria settled, soon it was a small set of regular users, maxed out in every regard. Now that the pyramid scheme of fresh meat was exhausted — new users knew that there was little fun or benefit to be had, and went to newer, fresher sites, leaving the existing users with their blessed armor and no skulls to smash — malaise set in. Eventually the universe was wiped and begun anew.

There's no real aha or "what to do" out of this. I don't know what to make of it. Clearly the tactic is fantastically successful in the ascent part of the curve, and has been leveraged masterfully by a number of sites, but if you don't pivot at the right time it ends up turning a site into Slashdot or Kuro5hin — decayed remnants of a yesteryear internet.

Where Do You Find Your Zen?

We’re All Buddhists Here

Most software developers are Buddhism idealists (in parallel with any other theistic or atheistic beliefs or traditions they may have).


I don’t mean the four noble truths, reincarnation, or any of the theological or even philosophical underpinnings of the dharma, but rather that we like the idea of meditation and zen.

We aren’t trying to achieve a state of mindfulness or being at one with your breathing or heightened sense of self. Instead most are seeking nothing more than “thinking without distraction for a short while”.

Committed, focused thought is remarkably hard to achieve when we're a click away from Facebook and Reddit and Hacker News and learning how to create a library in Rust and fixing that minor bug we just remembered that suddenly is a shameful pox on our very existence.

The ability to actually think, with focus and dedication, for any period of time is an extremely rare event for most of us. If you try to force yourself into it, the gnawing distraction of all of the things we could and should be doing clouds any attempts at thought.

Take a moment and clear your mind and think with clarity and purpose (tough, right?): Where do you find your zen? Where do you actually spend more than a fleeting moment thinking about anything?

The Thinking Hour

My moment of Zen used to be during the commute. Driving took so little mental effort, the routine so robotic, that the drive saw me processing through personal and professional relationships, project quagmires and technical complexities, opportunities, life plans, etc. It brought a certain clarity to the day, and gave actions a sense of planned purpose that otherwise was missing.

I could only achieve this effect if I was driving, by myself, and the commute was long enough. Add a passenger, or make me the passenger (including on public transit or, conceptually, a self-driving car), and the options for distraction, even if purposefully shunned, instantly eliminated all of the clarity-of-thought benefits.

It had to be an exercise that took long enough, where distractions weren’t possible and where some minimum level of focus was required. If I could read email and respond to texts during the drive — if it weren’t irresponsible and dangerous for other people on the road, say if my car were self-driving — it would have ruined it.

The radio morning shows were terrible, and I’ve yet to hear a podcast that isn’t ten seconds of content fluffed up to sixty minutes, so I often drove with just some classical music on CBC Radio 2 playing quietly in the background.

I hated the time wasted commuting, and the guilt about the environmental consequences, but I always enjoyed the period of thought. The concept of spending that time being angry listening to sports radio (Ebron did not commit an OPI) or an audiobook sounded terrible to me.

Then I started working at home and lost the benefits of the commute. I tried to find surrogates by forcing myself, but lying in a hammock, soaking in a warm bath, etc, always ended up being an exercise in focusing on things I should be doing instead. It was futile.

I, like probably all of you, pored over Tricycle articles on meditation, deep thought, and so on, to no avail. All of the singing bowls and gongs couldn't relieve my brain.

Unintentional Zen

We have a very large lawn and a long driveway. Mowing the lawn is about an hour long exercise on a riding lawn tractor. I put on the ear protection, fire it up, and for the next hour I’m Hank Hill driving in concentric squares. When the winter rolls around I’m pushing a snowblower 180 feet down a lane, back and forth and back and forth, followed by shoveling accessory areas.

These were my zen. I didn’t realize it, or their importance, at the time, but I did know that I liked doing them. That I always finished the exercise feeling relaxed and relieved.

Occasionally I'd try listening to music during the process; however, the feeling that I needed to stay alert for screaming voices as I operated dangerous equipment had me revert to nothing more than ear protection.

I hadn’t realized just how important this was to my mental well being until this summer rolled around. We had an extended drought, and for a good three months there was barely a dribble of rain.

The grass went into hibernation. Mowing wasn’t necessary.

I had no Zen. Stress levels rose. The sense that I was operating without a plan increased. A panic of time flooding away rose. Months passed.

But the rains returned (spoken as Morgan Freeman). The grass grew again.

On my first outing back on the (15) horse(power) it hit me like a tree branch in the face: the relief as pent-up considerations were processed and prioritized was enormous. I was thinking through family considerations and personal projects, considering career moves and options, etc.

I hadn’t done this in literally months, and the sense of purpose with direction was overwhelming.

This was my zen. It was a period of time where I was essentially captive with no options for distraction, and where I didn’t have to focus on social niceties or with any deep concentration on the physical activity. It was the only time during an entire week when a thought continued for more than a few seconds. I’ve briefly achieved something similar before while cooking (during time intensive periods where focus and attentiveness was required, but complexity is minimal), and even in online first person shooters where my play is essentially autonomous.

I realized just how critically important this is to my progress and well being.

Disconnected Manual Labour

There is a glorious segment in the third season of House of Cards where some Tibetan Buddhists are creating a mandala. A mandala is a sand or coloured stone paint-by-numbers where you use a chak-pur as the implement.

It’s a beautiful practice, and one of the most appealing aspects of the exercise is that it’s then destroyed (sometimes prematurely), treated as a philosophical (if not mystical) representation of the transitory state of life. It isn’t kept as a fingerprint or ego exercise to shackle the future.

I imagine that being involved with creating a mandala, at least after you've achieved the basic skills of performing the task and using the chak-pur, is much like mowing the lawn: a time of just the right amount of focus (neither too much nor too little, the chak-pur slowing the process enough that it isn't just shaping some sand into an area) to have the ability to really think. It's something I've always wanted to do.

I find the same meditative benefit to other manual tasks. Chopping firewood, for instance. Long hikes on trails I already know. When I was a teen I would get up before the sun rose and ride my bike 20km to a beach, and then home again. I’ve always imagined this is the draw for people who run regularly, using it as a period of thought and contemplation.

Knitting and other tasks, once some level of competence is achieved, must fulfill the same purpose as well.

Seven Tips For A Better You

So here's where I provide the easy solutions and trite pablum to make it seem like I've soundly wrapped everything up and made you a better person for having read this.

I’m not going to do that. Instead I offer up that you should consider your own hobbies and activities, and determine what your thinking time is, and whether you’re robbing yourself of it.

And if you don’t have one, pick up some sort of hobby or pursuit to provide it (there’s a whole potential business domain around this, as an aside. There are many people who would pay for the privilege of doing manual labor just to give them a purposeful reason to do something and retain the mental capacity for deep thought). I’ve worked in several offices with “quiet thinking areas”, but no one ever actually used them to think (they universally became “make cell phone call” areas), and even if people tried, for most simply having no distractions does nothing to aid focus and might actually impede it.

Sitting with your eyes closed simply doesn’t work for most of us.

EDIT: A timely post appeared on Wired today – What Gives With So Many Hard Scientists Being Hard-Core Endurance Runners? And to avoid the appearance of following a herd, my post went up at 5:37am (the dog woke me early), while theirs went up at 7am.

Why We Program / The Love of the Craft

I’ve been a professional software developer for 20 years.

I’ve built embedded software, beginning with a TSR IRQ-driven remote management solution for a single-board computer (to allow us to securely control it on-demand over a modem), to a QNX-based RTOS monitoring and control platform. I’ve designed and constructed data processing and aggregation solutions as Windows services, Linux daemons, and duct-taped batch processes. I’ve built Win32  and Win64 applications (native and managed), DCOM/COM/COM+ tiered solutions — remember when n-tier was the price of entry to professionalism? — CORBA and microservices. I’ve been a Microsoft guy, a Delphi guy, a C# guy, a database guy, a Linux guy, a security guy, a web application guy, and a high performance big data processing guy. I’ve even been a domainologist.

I’ve built mobile applications and mobile targeting platforms on iOS and Android, and Windows CE years ago.

I’ve slung considerable C, C++, (Object) Pascal, Go, Java, and C# professionally. I’ve toyed with countless others. I’ve designed and managed enormous systems and databases, including for a couple of Canada’s biggest corporations.

I’ve been involved with a variety of styles of teams, in various levels of structure and rigidity. In a banking group with absolutely draconian process and hierarchy. In an agile financial upstart essentially rolling with whatever fits, gnashing around to find a process that works. In a telecom company somewhere in between.

I’ve been a team leader, a vice president of a medium sized organization, a software architect, a “junior” and “senior” software developer (hilariously I got the latter title about two years into this profession, denoting the ridiculously shallow career path many firms have in the hands-on realm).

None of this is a brag, given that it is hardly brag-worthy: I have no silicon valley experience, being the sort that generally stays within some radius of his birthplace (currently about 200km). I have no starred github projects. I have never worked on shrinkwrap software, or a triple-A game. I don’t have an inbox full of recruiter solicitations. I’ve never written a book (though I have authored magazine articles, for what that’s worth).

The vast majority of my work has been for companies around the edge, and for a lot of my career I have been “a big fish in a little pond”. My solutions are critically important for the people I work for, but not that important for society at large.

None of that was intending to be aggrandizing, and there are countless far more impressive developers in this field.

Instead the purpose of those paragraphs is to say that I’ve had a lot of varied experiences in this business, from starting as a junior tasked with doing the grunt work (where being a lazy worker I automated a manual process into a solution that grew into substantial business for the company), to guiding the implementation and technology for a pretty large team, including laying the blueprints in code.

Over my career I've had many paths out of programming. Options to move to pure management, or to architecture management (which unfortunately doesn't include much actual software architecture). Even a CTO offer for a mid-sized company, albeit one that used technology in a peripheral sense, and where it mostly meant doing boring vendor comparisons and meet-and-greets.

I turned them all down. I remained a programmer, or in a mixed position where my day to day still included at least 50% hands on. During periods this meant going solo and pitching consulting to all takers, particularly when our children were young and the demands of the family were many, essentially taking a sabbatical with occasional engagements to pay the bills.

I of course weighed the pros and cons, and on the pro side of moving more toward management is that the requirements are so much fuzzier and more abstract, versus the technical paths, where many unenlightened shops ask for a laundry list of very specific technologies.

I love solving problems. I love the craft that we ply. The raw, voice-squeaking joy when a rashly implemented solution using some cool (ergo fun) new language or framework or technology actually works and solves some problem remains a reality of my world. I couldn't stand moving too far from it. My eldest son is now going down the same path, and from day to day he has gone from Unity and C# to Java, nodejs, and most recently Python, enjoying the challenge and excitement of being exposed to new ideas and patterns.

I didn’t and don’t want to move to pure management.

To a lot of people, this is hard to understand. One of the root causes of ageism in this field, I suspect, is that a lot of people really don't or didn't like doing it: if it feels like a burdensome chore, then why in the world would someone want to do it when they're further into their career? I remember being in my 20s and having these discussions with full-of-themselves peers who were sure that they would be managers by 30, VPs by 40, CTOs by 50, and so on, following the traditional path of the 1950s man. There was a prevailing notion, and it's still evident, that anyone who hasn't ascended in such a manner has obviously failed during the climb.

It is the manual labor mentality applied inappropriately to a field of intellectual excellence. You start in the mail room, and then…

An architect is still an architect. A doctor is still a doctor. A researcher still a researcher. An artist is still an artist and a musician is still a musician.

So why then do programmers have to ascend into managers?


Bokeh and Your Smartphone – Why It’s Tough To Achieve Shallow Depths of Field

I’m going outside of the normal dogma today to focus on a field that I am a bit of a hobbyist in: I’m sort of a photography nerd, primarily interested in the fascinating balance-of-compromises that are optics, so I’m in my domain a bit with this piece on photography and depth of field. There are many great articles on depth of field out there, but in this I’m primarily focused on the depth of field issue of smartphones, and the often futile quest for bokeh.

Bokeh is the photography effect where background lights and details are diffuse and out of focus, as seen in these Flickr photos. Often it’s contrasted against a sharply focused foreground subject, providing an aesthetically pleasing, non-distracting backdrop.

To photographers this has always been called a shallow depth of field: with a large aperture and a longer focal length it is the normal technique to isolate a subject from the background, and is a mainstay of photography. The term "bokeh" has taken root among many courtesy of a late-90s photography how-to magazine article, so you'll come across it frequently. Some purists hold that it refers only to blurred light patterns, while in general parlance it just means "out of focus background".

The best known mechanisms to control the depth of field on a given piece of imaging hardware are the aperture (aka the f-stop) and the distance to subject (the closer to the lens, the shorter the depth of field), with each new device seemingly offering a wider aperture to enhance the options.

The Nexus 6p has an f/2 capable camera. The iPhone 6S offers f/2.2 29mm equivalent, and the new iPhone 7 pushes new boundaries with an f/1.8 lens (28mm equivalent, with a second 56mm equivalent on the 7+).

The ultimate portrait lens in the 35mm world is an 85mm f/1.4 lens.

On a traditional SLR-type camera, f/1.8 is a very wide aperture (aside – a "small" aperture has a larger number, e.g. f/22, while a wide or "large" aperture has a smaller number, such as f/1.8). If the scene isn't too bright, or you have some neutral density filters handy, it is gravy for a dish full of bokeh.

By now you’ve probably learned that it’s really hard to achieve shallow depths of field with your smartphone unless the subject is unreasonably close to the device (e.g. fish eye distortion of someone’s face), despite those seemingly wide apertures: Most everything is always in focus, so while there isn’t the once endemic problem of the slightly out of focus shots (being sort of close in focus is often good enough), it makes the cool effects tough to achieve. Instead blurry smartphone photography is primarily caused by too low of a shutter speed coupled with moving subjects or a shaky photographer.

But why is that? Why does your f/2 smartphone yield such massive depths of field, making bokeh so difficult? Why isn’t f/2 = f/2? If you’re coming from the SLR world and install some great manual control photography app on your smartphone, you likely found yourself disappointed that your f/2 smartphone isn’t delivering what you’re accustomed to elsewhere.

Because of the f in f/2. While it is treated like an abstract value holder, it literally means "focal length / 2".

And the focal length on smartphones is why you are separated from your bokeh dreams. While my Nexus 6p has a 28mm equivalent (compared to the 35mm camera benchmark) focal length, it’s actually a 4.67mm focal length. Courtesy of the physics of depth of field, its focal length means an f/2 on this device is equivalent to about an f/10 DoF on a 35mm lens when the subject is at the same distance from the lens. The iPhone 6 has a focal length of 4.15mm, while the iPhone 7 offers up lenses of apparently 3.99mm and 7.7mm.

This is easy enough to prove. Here’s an f/2 photo on my Nexus 6p. The subject is about 30cm from the lens.


Now here’s approximately the same scene, at the same distance, with a zoom lens set at ~28mm on a Canon T2i (approximating the zoom level of the Nexus 6p fixed focal length), the aperture set to f/10.

Canon T2i f/10 @ ~28mm

While each device has its own post-processing (the T2i in this case is set to neutral, while the 6p, like most smartphones, is fairly heavy handed with contrast and saturation), if anything the SLR features as much or more blurring, despite a significantly smaller aperture.

This is the impact of the focal length on the depth of field. Here the same subject shot from the same distance, the zoom lens set to 55mm, the aperture still at f/10. The depth of field collapses further (it isn’t just a crop of the above picture, but instead the DoF shrinks further).

Canon T2i @ 55mm f/10

And for comparison here it is at f/5.6-

T2i - 55mm f/36

So why is this?

First let's talk about something called the circle of confusion (CoC) to get it out of the way as a parameter of the calculations that follow. In this discussion the CoC is the amount that a "focused" photon can stray outside of the idealized target before it leads to blur for a given sensor. There are many, many tables of static CoC values, and a lot are very subjective measures (e.g. "if you print this as an 8×10 photo and view it from 3 feet away, what amount of blur is indiscernible?"). For my calculations I am taking the CoC as 2× the pixel stride of the target sensor (via the Nyquist theorem), but you can use a table or your own mix of crazy as the CoC. I leave that open.

The Nexus 6p has a sensor that is 6.324mm wide, containing 4080 pixels per line (not all pixels are active, so this was measured via the SDK). So a pixel stride of 0.00152794mm, and doubling that we get 0.0030558. That is the CoC I’m using for the Nexus 6p.

We know the focal length (4.67mm), and we know the desired CoC (0.0030558), so let’s calculate something called the hyperfocal distance.

The hyperfocal distance is the focus distance where everything to infinity, and to approximately 1/2 the focus distance, will also be in effectively perfect focus for a given aperture. It is a very important number when calculating DoF, and the further the hyperfocal distance, the shallower the DoF will be for closer subjects.


[The original post embedded an interactive calculator here: focal length (mm), f-stop, and CoC (mm) in; hyperfocal distance (mm) out.]

For the Nexus 6p parameters above (4.67mm focal length, f/2, and the ~0.0031mm CoC), the hyperfocal distance works out to roughly 3.5 meters, and changing the f-stop, the focal length, or the CoC moves it accordingly. What that means is that if a focused subject is that distance from the lens, at those settings, the furthest distances (the mountains on the horizon, the stars in the sky, etc) will still be completely in focus, as will everything from about half the focus distance onward. It is the hyper-of-focuses, and a critical number for landscape photographers.
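Since the interactive calculator can't come along for the ride here, a minimal Java sketch of the formula it used (H = f²/(N·c) + f) follows. The inputs in main() are the values from this post; treat it as an illustration rather than an authoritative DoF library.

```java
// Hyperfocal distance: H = f^2 / (N * c) + f, with everything in millimeters
// (f = focal length, N = f-number, c = circle of confusion).
public final class Hyperfocal {
    static double hyperfocalMm(double focalMm, double fNumber, double cocMm) {
        return (focalMm * focalMm) / (fNumber * cocMm) + focalMm;
    }

    public static void main(String[] args) {
        double cocMm = 0.0031;                       // the Nexus 6p CoC derived above (rounded)
        double h = hyperfocalMm(4.67, 2.0, cocMm);   // the 6p's 4.67mm lens at f/2
        System.out.printf("Nexus 6p hyperfocal: %.0f mm (~%.1f m)%n", h, h / 1000.0);

        double hSlr = hyperfocalMm(28, 10, 0.0086);  // the T2i CoC used later in this post
        System.out.printf("Canon T2i at 28mm, f/10: %.0f mm (~%.1f m)%n", hSlr, hSlr / 1000.0);
    }
}
```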

Focusing beyond the hyperfocal distance does nothing to improve distant focus for this CoC, but instead simply unfocuses closer objects. Once again I have to note that the CoC is not a fixed constant, and if you had a sensor with 4x the pixels, the CoC by my method would halve and the focus would need to be more precise. Others would argue, with good reason, that the CoC should be a percentage of the total sensor span so that the same effective amount of blur is seen across devices, while my measure is achieving technical perfection for a given device.

The hyperfocal distance is the basis of the calculations that allow us to calculate the near and far of the depth of field. Let’s calculate the DoF for a given focus distance. Note that these values are in millimeters, as most things are in the photography world (so instead of 10 feet, enter 3048).

[Another interactive calculator in the original post: subject distance (mm) in; near and far depth of field (mm) out, using the focal length, f-stop, and CoC from above.]
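In place of that calculator, here are the near and far limits as a sketch. It uses the common approximations near = s·H/(H + s) and far = s·H/(H - s), where s is the subject distance and H the hyperfocal distance from the previous sketch; they reproduce the figures quoted below (the more exact form subtracts some focal-length terms, which barely matters at these distances).

```java
// Near/far depth-of-field limits for a focused subject at distance s (all in mm),
// built on the hyperfocal distance H.
public final class DepthOfField {
    static double hyperfocalMm(double focalMm, double fNumber, double cocMm) {
        return (focalMm * focalMm) / (fNumber * cocMm) + focalMm;
    }

    static double nearMm(double subjectMm, double hMm) {
        return subjectMm * hMm / (hMm + subjectMm);
    }

    static double farMm(double subjectMm, double hMm) {
        // Only meaningful while the subject is closer than the hyperfocal distance;
        // at or beyond it, the far limit is effectively infinity.
        return subjectMm * hMm / (hMm - subjectMm);
    }

    public static void main(String[] args) {
        double subject = 300;                              // 30cm, as in the photos above

        double hPhone = hyperfocalMm(4.67, 2.0, 0.0031);   // Nexus 6p at f/2
        System.out.printf("Nexus 6p:  %.1f .. %.1f mm%n",
                nearMm(subject, hPhone), farMm(subject, hPhone));   // ~276.5 .. ~327.9

        double hSlr = hyperfocalMm(28, 10.0, 0.0086);      // Canon T2i at 28mm, f/10
        System.out.printf("Canon T2i: %.1f .. %.1f mm%n",
                nearMm(subject, hSlr), farMm(subject, hSlr));       // ~290.5 .. ~310.2
    }
}
```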

Beyond the near and far depth of field, of course the defocus increases as a multiple of the distance.

Entering a focal length of 4.67, a CoC of 0.0031, an f-setting of 2.0, and a subject distance of 300 (30cm, the distance in the above picture) yields a depth of field of about 276.454mm to 327.931mm, meaning everything within that span should be in perfect focus on the device, with defocus increasingly evident the further outside it you go. Altering those values for the SLR (a focal length of 28, a CoC of 0.0086 given the SLR's much larger sensor, and an f-setting of 10.0, with the same subject distance of 300) yields a smaller depth of field of 290mm to 310mm. A significantly smaller aperture, yet an increased amount of bokeh at a given distance.

Not all f-stops are created equal, which is why Apple is artificially simulating bokeh on its newest device (as other vendors have). Your f/1.8 smartphone might provide light advantages, but don't expect the traditional depth of field flexibility. On the upside, this is the reason why almost all smartphone photography is sharp and in focus.

I love using my smartphone to take photos (or stabilized videos): It is the camera that is always with me, and an enormous percentage of the shots turn out great. When I’m looking for defocus effects I reach for the SLR, however.

h.265 (HEVC) Encoding and Decoding on Android

I periodically give some attention to Gallus¹ (an awesome-in-every-way stabilized hyperlapse app for Android, doing what many said was impossible), tonight enabling h.265 (HEVC) support for devices surfacing hardware encoding/decoding for that codec, still packaging it in a traditional MP4 container.

The Nexus 6p, for instance, has hardware HEVC encoding/decoding via the Snapdragon 810 (one of the benefits over the 808); however, it was inaccessible for third-party developer use until the Android N (7.0) release. I had done trials of it through some of the 7.0 betas, but until recently it was seriously unstable. With the final release it seems pretty great.
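For those who haven't poked at this corner of the platform, checking whether a device actually surfaces a hardware HEVC encoder comes down to a MediaCodecList query (API 21+). The sketch below is just the shape of that check, not Gallus's actual code, and the resolution and frame-rate arguments are placeholders:

```java
import android.media.MediaCodecInfo;
import android.media.MediaCodecList;
import android.media.MediaFormat;

// Minimal probe: does this device expose an HEVC (h.265) encoder that can
// handle the resolution and frame rate we want?
final class HevcSupport {
    static MediaCodecInfo findHevcEncoder(int width, int height, double fps) {
        MediaCodecList list = new MediaCodecList(MediaCodecList.REGULAR_CODECS);
        for (MediaCodecInfo info : list.getCodecInfos()) {
            if (!info.isEncoder()) continue;
            for (String type : info.getSupportedTypes()) {
                if (!MediaFormat.MIMETYPE_VIDEO_HEVC.equalsIgnoreCase(type)) continue;
                MediaCodecInfo.VideoCapabilities caps =
                        info.getCapabilitiesForType(type).getVideoCapabilities();
                if (caps != null && caps.areSizeAndRateSupported(width, height, fps)) {
                    return info;   // e.g. the Snapdragon 810's hardware encoder on the 6p
                }
            }
        }
        return null;               // no usable HEVC encoder exposed
    }
}
```

On devices or OS releases that don't expose the codec, the probe simply comes back empty and the sensible fallback is plain old AVC.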

h.265 is a pretty significant improvement over the h.264 (AVC) codec that we all know and love, promising about the same quality at half the bitrate, or alternatively much better quality at the same bitrate. It also features artifacts that are arguably less jarring when the compression is aggressive or breaks down. And on a small, mobile processor it's encoded efficiently in real time at up to 4K resolutions.

One aspect of Android development that might surprise many is just how fragile some of the subsystems of the platform are. If you fail to utilize the camera API in specific patterns and sequence (I am not talking about documented behaviors, but rather rudely discovered quirks of operation, where some sequences of events require literal imperative pauses, discovered through trial and error, to avoid defects), the entire camera subsystem will crash and be unrecoverable by any means other than a full restart. The same is true of the h.265 encoder and decoder on the Nexus 6p, and in implementing the new functionality I had to restart the device a dozen+ times to recover from subsystem failures as I massaged the existing code to yield good behavior from the new codecs. Ultimately I find Android to be an incredible, amazing platform, but it remains surprising that so much of it is perilous native code houses of cards with a very thin API.

¹ – I've never paid much attention to the Play Store listing, and it still features screenshots from the much more primitive (in UI, not in algorithmic awesomeness) initial version, and I've never made a tutorial or real demonstration (it is absolutely incredible on the Nexus 6p). It always seemed like there was one more thing I wanted to fix before I really made a deal out of it. But it is pretty awesome, and I finally started committing serious time to finishing what I think are the "worth it" features that put it over the top. By the end of October.

And to answer a question that no one has asked (okay, one or two have asked it, including recently a major manufacturer who I'm trying to entice into committing), yes, Gallus is available (the code, technology, git history, and optionally my participation in getting you to success with it) for a very reasonable price.

What is a reasonable price? Well, an $800,000 CAD (today ~$610K USD), fully secured loan at the Bank of Canada overnight rate, with a final balloon payment of principal and interest (and no penalty for prepayment, and no payments in the interim) in three years, would be satisfactory (though if you want additional help there would be normal consulting rates).

I need to get off the crazy endless bills bicycle for a while to explore some opportunities. It’s a completely atypical situation, and a remarkable opportunity for all.

Strive For Technical Debt

Technical debt is the differential between the solution you have in hand, and the idealized hindsight-refined vision of what it could be: If there’s a solution solving problems, there are people telling any who’ll listen about all of the mistakes made in its implementation.


This debt is lamented on programming boards everywhere as the folly of those who came before. Not only as the cheap critique of others: we often criticize our own implementations, where we had to take shortcuts because of time constraints, unavailable help, an incomplete understanding of the platform or the requirements, limited tooling, etc.

Pragmatic developers — the ones actually tasked with creating solutions — view it as a "how the sausage is made" kind of reality, while others view it as a mistake that shouldn't have happened, and that with proper care and control we could learn to avoid in the future. In the imaginary view of projects, we'd ideally build with the planning, time, knowledge and resources to never have technical debt in the first place.

But that’s nonsense the world over, across almost every strata of projects. Code starts ugly, universally. It’s a basic truth of the industry.

There are no exceptions. If you live under delusions that your current solution isn’t ugly, be sure to revisit this assessment in a year. And when a project really tries to circumvent this, it invariably ends up in analysis paralysis, nothing of consequence generated after person-decades of development.

And this idealized hindsight suffers from a severe survivorship bias. The idealized "make-it-right-from-the-outset" projects that failed and fell by the wayside years before get forgotten, essentially lost in time, while we remember and study the process behind the clunky duct-tape project that has run the organization for the past decade, hanging around long enough to be resented and belittled.

Technical debt almost always comes with success. It comes with a project becoming important enough that it matters, drawing attention and consideration and criticism. It's exactly those projects that are good enough that we find it hard to rationalize replacements (this is where dogma diverges from reality), and when we do, the second-system effect kicks in, our efforts to justify new projects leading us to overwrought, overly ambitious replacements.

Celebrate if you have technical debt to complain about. It usually means you’re sitting on something successful enough to matter.

Remember When The Suburbs Were Doomed?

Software-development focused online communities skew male, and generally younger (e.g. 20s to mid 30s). Most live in dense urban areas (the Bay, Seattle, NYC, London, etc), often in smallish apartments and condos. Few have families.

As a side effect of the human condition, there is a tendency to cast one's own lot as either the inevitable situation for most (e.g. people who have it different from me are living on borrowed time), or as a more righteous, principled choice (better for the planet, development, futurology, etc. This is an observation I learned personally, watching myself make exactly these rationalizations over time).

Stories declaring or predicting the end of the suburbs always do well on these forums. I’ve seen poorly researched stories predicting this topping sites like Hacker News again and again: It tells the audience what they want to hear, so the burden of proof disappears.

But this assumption that suburbs are doomed has always struck me as counter-intuitive. While there has been a significant migration from rural areas to urban areas in virtually every country of the world (largely as rural workers are effectively displaced by factory farming or obsolete skillsets and it becomes a basic survival pattern), suburbs seem to be as bulging as they’ve ever been. In the United States, for instance, the fastest growing areas of the country have been medium density (e.g. suburbs and small cities). Very high density areas have actually been dropping much faster than even rural areas have.

The suburbs aren’t actually dying.

But maybe they will soon? The argument that they will is completely predicated on a single thing happening: we're going to run short on oil, transportation costs are going to explode, and suddenly the onerous costs of living in far-flung places are going to cause a mass migration to the city centers, everyone defaulting on their giant suburban McMansion mortgages, the rings turned into a desolate wasteland.

Increasingly it seems like we're more likely to keep oil in the ground than to run out. Alternatives to this easy energy source are picking up pace at an accelerating rate. As electric vehicles hit the mainstream, they're becoming significantly more economically viable options for high-mileage drivers (fuel for electric cars costs somewhere in the range of 1/5th as much per mile as gasoline, even in high-cost areas like California), and the miles are much cheaper from some solar panels or a windmill than from a gallon of gasoline, even at the current depressed prices. And that's excluding the significant mechanical costs of internal combustion engines that would soon be dramatically undermined by mass-produced electric vehicles.

You can go much further for less than ever before, with the specter of oil’s decline being less and less relevant. If anything transportation is going to get a lot cheaper.

Of course the commute itself has always been a tax on life, and personally I can say that I quit jobs after doing the grueling big city commute. Only we’re on the cusp of our car doing the driving for us. Very soon the drive will be quality time to catch an episode of a good show, or maybe a quick nap. The capacity of even existing roadways will dramatically increase once you remove human failure and foible.

Connectivity…well everyone everywhere is connected, incredibly inexpensively. When I was a kid we had to pay $0.30/minute+ to talk to people 20km away. Now continent wide calling is free. Internet connectivity is fast and copious almost anywhere. Many of us work remote jobs and it really doesn’t matter where we are.

We’re virtual.

I'm not making any predictions or judgments, but the inputs to the classic assumptions have changed enormously. I recently was entertaining the idea of living even more remotely (right now I work at home in a rural exurb of Toronto — this doesn't even qualify as suburbs — but there are of course far more remote areas of this country), and it's incredible how few of the factors are really compromised anymore: I'd still have 100s of channels, high speed internet, 24/7 essentially free communications (text, audio, video, soon enough 360 vision at some point with depth) with every family member, overnight delivery of Amazon Prime packages, etc.

Being in a less dense area just isn’t the compromise it once was. And that’s talking about fully rural areas.

The suburbs — where you still have big grocery stores and bowling alleys and neighborhood bars and all of the normal accouterments of living — just aren’t much of a compromise at all. When someone talks up the death of the suburbs, I marvel at the 1980s evaluations of the 2010s world. I would argue the contrary: a few communicable disease outbreaks (e.g. SARS v2) and humans will scurry from density.

Facebook Instant Articles

Both Google and Facebook introduced their own lightweight HTML subsets: AMP and Instant Articles, respectively. I mentioned AMP on here previously, and via an official WordPress plugin it’s accessible by simply appending /amp on any page’s URL. Both impose a restrictive environment that limit the scope of web technologies that you can use on your page, allowing for fewer/smaller downloads and less CPU churn.

The elevator pitch for Facebook's Instant Articles is an absolute monster, bringing an i5-4460 to its knees by the time the page has been scrolled to the bottom. There's a bit of an irony in the pitch for a lightweight, fast subset of HTML being a monstrous, overwrought, beastly page (the irrelevant background video thing is an overused, battery-sucking pig that was never useful and is just boorish, lazy bedazzling).

I’m a user of Facebook, with my primary use being news aggregation. As many content providers all herded in line to the platform, I stopped visiting their sites and simply do a cursory browse of Facebook periodically: BBC, CBC, The Star, Anandtech, The Verge, Polygon, Cracked, various NFL related things, and on and on. On the whole I would wager that these sites engaged in a bit of a Tragedy of the Commons racing to the Facebook fold, though at some point the critical mass was hit and it became necessary to continue getting eyeballs.

The web is ghetto-ized and controlled.

More recently Facebook started corralling mobile users to their own embedded browser (for the obvious overwatch benefits). And now they’re pushing publishers to Instant Articles.

But the transition isn’t clean. Many sites are losing most embedded content (Twitter embeds, social media). Lots of pages are pulling up, curiously, malformed XML errors. It is accidentally acting as an ad blocker on many web properties, the sites unintentionally squashing their own ads, filtered out as non-Instant Article compliant.

It’s interesting how quietly this is all happening. This once would make for pretty big tech news (especially Facebook embedding the browser). Now it’s just a quiet transition.