Technology and Population Density Trends

A bit of a rambling, conversational piece today.

Amara’s Law states:

We tend to overestimate the effect of a technology in the short run, and underestimate the effect in the long run.

We declare that a technology changes everything, then realize that entrenched patterns, behaviors, and small hangups limit broad adoption. We discount it as over-hyped and less significant than expected. Then it quietly takes over and changes the very foundations of society and our lives.

Recently I was pondering what electric cars and self-driving cars would do to population density. The former — using mechanically simpler vehicles with a much less expensive energy source — will significantly reduce the cost of driving as it achieves economies of scale, while the latter will reduce the inconvenience of driving, commuting in particular.

Self-driving cars will not only allow us to do other things during the ride, they will also significantly increase the capacity of our roads by reducing human error and inefficiency.

Intuitively, at least to me, these changes will promote lower-density living. The home that previously demanded an expensive, three-hour commute now comes with a relaxing period to watch a Netflix series or catch up on email.

Considering the probable social change of self-driving EVs led me to consider the changes over the past several decades. In Canada, as an example, lower density areas of the country — the Atlantic provinces, rural areas, small towns and villages — are hollowing out. The high density areas, such as the Golden Horseshoe around Toronto, are a magnetic draw for all of Canada and continue growing at a blistering pace.

Even if a home in the Toronto area costs 5x the price for a given set of amenities, and even if a hypothetical person might prefer lower density, many forces still draw them in.

Which is strange, in a way. I grew up in a small city and seemed to be completely isolated from the larger world. Calling a relative 20 minutes down the road necessitated long-distance charges. My town had no computer store, a mediocre BBS, few channels on television, no radio station, etc. There were few resources for learning.

I was wide-eyed at the options available in the big city.

Yet today we live in a world where that same small town has inexpensive 100Mbps internet, and can communicate with anyone across the globe in an instant. Where you can order just about anything and have it the next day, or even the same day. Every form of entertainment is available. Every resource and learning tool is a couple of clicks away (aside — education is one area that has yet to see the coming change from the new world). Few of the benefits of density are missing.

But those same changes led to centralization, hollowing out most of the better jobs and forcing the workforce to follow.

We centralized government and administration, pulling the school boards and government offices, banks, etc., out of those small towns in the quest for efficiency, moving up the density ladder. Those five small villages amalgamated into a single board, which then got pulled into a larger board in the city an hour up the road, and so on. Connectivity means that management for the few remaining vestiges of structure can sit in a far-flung location.

Every medical specialty moved to larger centers as car ownership became prevalent and long drives became accepted. Seeing a pediatrician 200km away became the simple norm. Service and even retailing got centralized to some unknown place elsewhere on the globe.

Everything centralizes. Because it can.

Most decent jobs require a move to density. The same forces that brought the conveniences of the city to far-flung locations also relegated the small town to being essentially a retirement home.

Reconsidered in that light, EVs and self-driving cars will likely accelerate that migration.

Updates: Pay Apps / Date-Changing Posts / Random Projects

Recently a couple of readers noticed that some posts seemed to be reposted with contemporary dates. The explanation might be broadly interesting, so here goes.

I host this site on Amazon’s AWS, as that’s where I’ve done a lot of professional work, I trust the platform, etc. It’s just a personal blog, so I actually host it on spot instances — instances that are bid upon and can be terminated at any moment — and late in the week there was a dramatic spike in the pricing of c3 instances, exceeding my bid maximum. My instance was terminated with extreme prejudice. I still had the EBS volume, and could easily have replicated the data on a new instance for zero data loss (just a small period of unavailability). However, I was just heading out, so I spun up an AMI image that I’d previously saved, posted a couple of the lost posts from Google cache text, and let it be. Apologies.
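For the curious, the mechanics are simple. Here’s a minimal sketch of requesting a spot instance with a price ceiling via boto3, AWS’s Python SDK; the AMI ID, instance type, and price below are placeholders, not my actual configuration:

```python
import boto3

# Hypothetical values throughout. SpotPrice is the bid ceiling: when the
# market price exceeds it, AWS terminates the instance (as happened here).
ec2 = boto3.client("ec2", region_name="us-east-1")

response = ec2.request_spot_instances(
    SpotPrice="0.10",
    InstanceCount=1,
    LaunchSpecification={
        "ImageId": "ami-00000000",   # a previously saved AMI
        "InstanceType": "c3.large",
    },
)
print(response["SpotInstanceRequests"][0]["State"])
```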

Revisiting Gallus

Readers know I worked for a while on a speculative app called Gallus — a gyroscope-stabilized video solution with a long litany of additional features. Gallus came close to being sold as a complete technology twice, and was the source of an unending amount of stress.

Anyways, I recently wanted the challenge of frame-versus-frame image stabilization and achieved some fantastic results. I was motivated by my Galaxy S8, which features OIS (though it provides no developer-accessible metrics on it), but given the short range of in-camera OIS it can yield an imperfect result. The idea is to combine EIS and OIS, and the result of that development benefits everything. I rolled it into Gallus to augment the existing gyroscope feature, coupling both for fantastic results (it gets rid of the odd gyro mistiming issue, but still has the benefit that it fully stabilizes highly dynamic and complex scenes). Previously I pursued purely a big-pop outcome — I only wanted a tech purchase, coming perilously close — but this time it’s much more sedate in my life and my hope is relaxed. Nonetheless it will return as a pay app, with a dramatically simplified and optimized API. I am considering restricting it only to devices I directly test on first hand. If there are zero or a dozen installs that’s fine, as it’s a much different approach and expectation.
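To give a flavor of the frame-versus-frame approach (this is a generic sketch of the technique using OpenCV, not Gallus code): track features between consecutive frames, estimate a compensating transform, and warp the new frame to cancel unwanted motion. Trajectory smoothing, which separates intended panning from shake, is omitted for brevity.

```python
import cv2
import numpy as np

def stabilizing_transform(prev_gray, curr_gray):
    # Find trackable corners in the previous frame...
    pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=300,
                                  qualityLevel=0.01, minDistance=20)
    # ...and follow them into the current frame with pyramidal Lucas-Kanade.
    moved, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray, pts, None)
    good = status.ravel() == 1
    # Fit a restricted affine (rotation + translation + uniform scale)
    # mapping current positions back to previous ones; applying it to the
    # current frame cancels the inter-frame motion.
    matrix, _ = cv2.estimateAffinePartial2D(moved[good], pts[good])
    return matrix

# Usage: warp each frame back toward its predecessor's coordinate frame.
# h, w = curr.shape[:2]
# stabilized = cv2.warpAffine(curr, stabilizing_transform(prev_gray, curr_gray), (w, h))
```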

Project with my Son

Another project approaching release is a novelty app with my son, primarily to acclimate him to “team” working with git. Again, expectations are amazingly low and it’s just for fun, but it might make for the source of some content.

Social Anxiety and the Software Developer

A brief little diversionary piece that I hope will prove useful for someone out there, either in identifying their own situation, or in understanding it in others. This is a very selfish piece — me me me — but I hope the intent can be seen in a good light. I suspect that the software development field draws in a lot of people suffering from social anxiety.

This piece is in the spirit of talking openly and honestly about mental health, which is something that we as a community and a society don’t do enough.

A couple of months ago I endured (and caused others to endure) a high stress event. I certainly haven’t tried to strike it from memory (the internet never forgets), and in many ways a lot of positives have come from it; it has been a profound period of personal growth since.

One positive is that I finally faced a lifelong burden of social anxiety, both pharmacologically and behaviorally, a big part being simply realizing that it was a significant problem. I know from emails responding to my previous mention of enduring this that it struck some readers as perplexing: I’ve worked in executive, lead, and senior positions at a number of organizations. I have a domain under my own name and put myself out there all the time [1]. I’m seemingly very self-confident, if not approaching arrogance at times.

That isn’t just a facade: I am very confident in my ability to face an intellectual or technical challenge and defeat it. In the right situation I am forceful with my perspective (not because it’s an opinion strongly held, but because I think it’s right, and I’ll effortlessly abandon it when convinced otherwise).

Confidence isn’t a solution to social anxiety, however. It’s possible, if not probable, for them to live in excess alongside each other. In many ways I think a bloated ego is a prerequisite.

Many choices — as trivial as walking the dog — were made under the umbrella of avoiding interactions. Jobs were avoided if they had a multi-step recruitment process. Investments were shunned if they weren’t a singular solution to everything, and even then I would avoid the interactions necessary to get to a resolution.

I succeeded professionally and personally entirely in spite of these handicaps, purely on the back of lucking into a skillset at a perfect time in history. I am utterly convinced that at any other point this would have been devastating to any success. Be good at something and people overlook a lot.

And it was normalized. One of the things about this reflective period is that suddenly many of the people I know and love realized “Hey, that was pretty strange…” It seemed like a quirk, or like being shy (which we often treat as a desirable trait), but in reality it was debilitating, and had been from my formative years.

There are treatments for it. I’m two months into this new perspective and I can say that the results are overwhelming. I will never be a gregarious extrovert, but life is so much less stressful just living without dreading encountering a neighbour, or getting a phone call, etc.

[1] – The online existence is almost abstract to me, and I’ve always kept it that way. I have always dreaded people who I know in “real life” visiting this blog (sometimes family or coworkers have mentioned a piece and it has made me go silent for months, hoping to lose their interest), reading any article I’ve written or anything written about me, etc. That is too real, and was deeply uncomfortable to me. Nonetheless there have been times I’ve realized I said something in error, and a cold sweat overcame me, changing all plans to get to a workstation and fix the error.

CMOS and Rolling Shutter Artifacts / Double Precision Compute

EDIT (2017-03-01) – It’s been a bit quiet on here, but that will change soon as some hopefully compelling posts are finished. I’ve taken a couple of weeks to get into a daily fitness and relaxation routine (I would call it a meditation if that term wasn’t loaded with so much baggage), organize life better, etc. Then it’s back to 100% again with these new habits and behaviors.

While I finish up a more interesting CS-style post (SIMD in financial calculations across a variety of programming languages), just a couple of interesting news items I thought worth sharing.

In a prior entry on optical versus electronic image stabilization I noted rolling shutter artifacts — an image distortion where moving subjects, or the entire frame during motion, can be skewed and distorted — and their negative impact on electronic stabilization.

During video capture, especially under less than ideal conditions, it is a significant cause of distortion that often goes unnoticed until you stabilize the frames.

Sony announced a CMOS sensor with a data buffering layer that allows it to approximate a global shutter (Canon previously announced something similar). While their press release focuses on moving subjects in stabilized situations, the same benefit dramatically reduces rolling shutter skew during the motion of video capture. It also offers some high speed capture options, which is enticing.

Sony sensors are used by almost every mobile device now, so it’ll likely see very rapid adoption across many vendors.

EDIT: Sony is already showing off a device with a memory-layer equipped CMOS, so it’s going to become prevalent quickly.

Another bit of compelling news for the week is the upcoming release of the GP100, a Pascal-based workstation GPU/compute device by NVIDIA (they previously released the P100 server-based devices).

It offers double-precision performance upwards of 5 TeraFLOPS (for comparison, a 72-core/AVX-512 Knights Landing Xeon Phi 7290 offers about 3.4 TeraFLOPS of DP performance, while any traditional processor will be some two orders of magnitude lower even when leveraging full SIMD such as AVX2). Traditionally these workstation cards massively compromised double-precision performance, so this update brings them into much greater utility for a wide array of uses (notably the financial and scientific worlds, where the limited significant digits of single precision made it unworkable).
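The comparison is simple back-of-envelope arithmetic: peak DP throughput is cores × clock × SIMD lanes × FMA throughput. A quick sketch (the desktop CPU figures below are hypothetical, chosen as typical):

```python
# Peak theoretical double-precision throughput, in GFLOPS.
# Each FMA unit performs lanes multiplies + lanes adds per cycle.
def peak_dp_gflops(cores, ghz, simd_lanes, fma_units):
    return cores * ghz * simd_lanes * 2 * fma_units

# Xeon Phi 7290: 72 cores, ~1.5GHz, AVX-512 (8 doubles/vector), 2 FMA units/core.
print(peak_dp_gflops(72, 1.5, 8, 2))   # ~3456 GFLOPS, the ~3.4 TFLOPS cited above
# A hypothetical quad-core desktop with AVX2 (4 doubles/vector), 2 FMA units/core.
print(peak_dp_gflops(4, 3.5, 4, 2))    # ~224 GFLOPS, far below the GP100's ~5000
```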

Things You Probably Don’t Know About Your Digital Camera

Another peripheral post about imaging, as a recent project brought it into my thoughts. It’s always nice to have broadly interesting, non-confidential material to talk about on here, and while this is well known to many, I’ve discovered in some projects that many in this field aren’t aware of this magic happening in their devices.

To get right to it.

Your Camera’s Imaging Resolution is Lower Than Advertised

The term “pixel” (picture element) is generally holistic — a discretely controlled illumination of any visible shade of color, whatever the mechanism of achieving that result. Your 1080p television or computer monitor has 1920 by 1080 pixels, or about 2 million “pixels”. If you look closely, however, your monitor has discrete subpixels for the emissive colors (generally Red-Green-Blue, though there is a display technology that adds yellow as well for an RGBY subpixel arrangement). These might be horizontally oriented, vertically oriented, staggered, in a triangle pattern…in any arrangement.

At appropriate combinations of resolution density and viewing distance, your eyes naturally demosaic, blending the individual colors into a discrete full color representation.

The vast majority of digital cameras, however, use the term pixel in a very different way.

To visually explain, here’s a portion of RAW sensor data taken directly from a Nexus 6p. The only processing applied was source color channel gains and scaling from the original 100 by 63 to a more visible 600 by 378.

[Image: magnified RAW Bayer data from the Nexus 6P capture]

In the digital camera world, each of these discrete colors is a whole pixel.

If you inspect the pixels you’ll notice that they aren’t full color (though that is obvious just by eyeballing the image). Each unique location is red, green, or blue, at varying intensities. There are no mixes of the colors.

The imaging sensor has pits that can measure photons, but they have no awareness of wavelength. To facilitate color measurements a physical color filter is overlaid over each pit, alternating between the three colors. This is generally a Bayer color filter.

There is another type of sensor that layers wavelength sensitive silicon (much like the layers of classic film), capturing full color at each site, however it is very rarely used and has its own problems.

Green is most prevalent, comprising 50% of the pixels, given that it’s the color band where the human eye is most sensitive to intensity changes and detail. Red alternates with green on one line, while blue alternates with green on the next.
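In code terms, extracting the color planes from a mosaic is just strided slicing. A minimal numpy sketch, assuming an RGGB arrangement (the actual pattern varies by sensor):

```python
import numpy as np

# Stand-in data; in practice this would be the RAW sensor readout.
raw = np.random.randint(0, 1024, (3024, 4032), dtype=np.uint16)

r  = raw[0::2, 0::2]   # red:   one sample per 2x2 block
g1 = raw[0::2, 1::2]   # green: two samples per 2x2 block...
g2 = raw[1::2, 0::2]   # ...so green keeps half the total pixel count
b  = raw[1::2, 1::2]   # blue:  one sample per 2x2 block

print(r.shape, b.shape)    # (1512, 2016) each: half resolution on both axes
print(g1.shape, g2.shape)  # (1512, 2016) each: 2016 x 3024 green samples combined
```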

The functional resolution of detailed color information, particularly in the red and blue domains, is much lower than many believe. On top of that, many devices have physical and processing steps — e.g. anti-aliasing — that further reduce the usable resolution, blurring away the defects.

The Nexus 6P ostensibly has a 4032 x 3024 imaging resolution, but really, courtesy of the Bayer filter, has a 2016 x 3024 green resolution, a 2016 x 1512 blue resolution, and a 2016 x 1512 red resolution. For fine hue details the resolution can be 1/4 of expectations, which is why fully zoomed-in pictures are often somewhat disappointing (also courtesy of the processing and filtering that try to mask the color channel information deficiencies).

Your Camera’s Imaging Sensor Has Many Defects

Due to defects in the silicon, the application of the physical Bayer filter, and electrical gain noise, many of the photo sites on your digital sensor are defective.

Some read nothing, while many more see ghosts, reporting slight or significant false readings. Readings of a constant brightness target will vary, sometimes significantly, across pixels (yielding a grainy, noisy output image).

[Image: false readings from a 1/10s capture of pure darkness]

This is a random 150-pixel-wide reading from the 6P when taking a 1/10s picture of pure darkness. These defective readings cover the entire capture in varying densities, comprising up to hundreds of false data points. Most are permanent, with new ones appearing as the device ages, and some defects temporarily worsen when the sensor is warm. Most SLRs have a special mode where they take a full-darkness picture and then catalog and remove all hot pixels from the output material. Android also has the notion of remembering hot pixels.

This is the case with every digital sensor, from your smartphone to your high end SLR. I remember being somewhat horrified the first time I looked at a wholly unprocessed RAW image from my SLR, seeing hundreds of fully lit pixels scattered across the image.
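That dark-frame cataloging is conceptually simple. A minimal sketch, assuming a dark capture is available as a numpy array (the threshold here is an illustrative choice, not any vendor’s actual heuristic):

```python
import numpy as np

# Stand-in dark frame; in practice, a capture of pure darkness.
dark = np.random.poisson(2, (3024, 4032)).astype(np.float32)

noise_floor = np.median(dark)
mad = np.median(np.abs(dark - noise_floor))   # robust estimate of spread
hot = dark > noise_floor + 10 * (mad + 1e-6)  # flag sites reading far too bright

hot_coords = np.argwhere(hot)                 # catalog for use on later captures
print(f"{len(hot_coords)} hot pixels cataloged")
```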

Algorithms Save The Day

The solution to all of these problems is processing, but it does have consequences.

Hot pixels are eliminated both through prior knowledge (a hot pixel database for a given sensor) and by simply eliminating pixels that shine a little too bright relative to their neighbors. They get replaced with an interpolated average of their neighbors.
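A minimal sketch of that repair step, reusing the `hot` mask from the dark-frame sketch above. A 3x3 median stands in for the “interpolated average”; real pipelines use channel-aware interpolation, since adjacent sites on a Bayer sensor are different colors:

```python
import numpy as np
from scipy.ndimage import median_filter

def repair_hot_pixels(image, hot):
    # Median of each pixel's 3x3 neighborhood, computed across the image...
    neighborhood = median_filter(image, size=3)
    repaired = image.copy()
    # ...but substituted only where the catalog flagged a defective site.
    repaired[hot] = neighborhood[hot]
    return repaired
```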

The Bayer pattern source is turned into a full color image via a demosaicing algorithm, and there is considerable academic research into finding the optimal solution. In this case I linked to an army research paper, the military having a significant interest in this field given the broad use of Bayer imaging sensors and the need to know that the resulting images/data are the highest fidelity possible (especially given that machine vision systems then analyze that heavily processed output, and with the wrong choices can trigger on algorithm detritus and side effects).

The choice of demosaicing algorithm can have a significant impact on the quality of the resulting image. Do you know which algorithm your device is using?
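OpenCV ships a few of these algorithms if you want to experiment. A minimal sketch (the Bayer code here is an assumption; the correct one depends on the sensor’s actual pattern):

```python
import cv2
import numpy as np

# Stand-in mosaic; in practice this would be the RAW sensor data.
raw = np.random.randint(0, 256, (3024, 4032), dtype=np.uint8)

bgr_bilinear = cv2.cvtColor(raw, cv2.COLOR_BayerBG2BGR)      # fast bilinear
bgr_vng = cv2.cvtColor(raw, cv2.COLOR_BayerBG2BGR_VNG)       # variable number of gradients
print(bgr_bilinear.shape)  # (3024, 4032, 3): full color everywhere, interpolated
```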

After demosaicing, color corrections are applied (both to move between color spaces and to provide white point corrections), and then the image is de-noised: those fine grainy variations are homogenized (which can yield unique results if the subject itself has a grainy appearance, as the algorithm can’t discern whether variations come from the source or from the sensor).
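As a sketch of that step, here is OpenCV’s non-local means denoiser (one well-known algorithm; actual camera pipelines use their own proprietary variants), operating on the `bgr_bilinear` image from the previous sketch:

```python
import cv2

# h / hColor control filter strength; larger values homogenize more
# aggressively, eating real texture along with the sensor noise.
denoised = cv2.fastNlMeansDenoisingColored(bgr_bilinear, None, 10, 10, 7, 21)
```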

The resulting image is generally close to perceptually perfect, but an enormous amount of human knowledge and guesswork went into turning some very imperfect source data into a good result. The quality of an image from a digital device is as significantly impacted by the software as by the hardware (many devices have terrible color fringing courtesy of poor demosaicing). This is why many choose to shoot RAW photos, saving those source single-band pixels as-is before corrections are destructively applied. That allows for improved or altered algorithms when the magic mix didn’t work quite right for a given photo.

If you look closely at the results, you start to see the minor compromises necessary to yield a workable output.