Where Do You Find Your Zen?

We’re All Buddhists Here

Most software developers idealize Buddhism (in parallel with any other theistic or atheistic beliefs or traditions they may hold).


I don’t mean the four noble truths, reincarnation, or any of the theological or even philosophical underpinnings of the dharma, but rather that we like the idea of meditation and zen.

We aren’t trying to achieve a state of mindfulness, oneness with our breathing, or a heightened sense of self. Instead, most of us are seeking nothing more than “thinking without distraction for a short while”.

Committed, focused thought is remarkably hard to achieve when we’re a click away from Facebook and Reddit and Hacker News, from learning how to create a library in Rust, from fixing that minor bug we just remembered that suddenly is a shameful pox on our very existence.

Actually thinking, with focus and dedication, for any period of time is an extremely rare event for most of us. If you try to force yourself into it, the gnawing distraction of all of the things you could and should be doing clouds any attempt at thought.

Take a moment and clear your mind and think with clarity and purpose (tough, right?): Where do you find your zen? Where do you actually spend more than a fleeting moment thinking about anything?

The Thinking Hour

My moment of Zen used to be during the commute. Driving took so little mental effort, the routine so robotic, that the drive saw me processing through personal and professional relationships, project quagmires, opportunities, life plans, etc. It brought a certain clarity to the day, and gave actions a sense of planned purpose that otherwise was missing.

I could only achieve this effect if I was driving, by myself, and the commute was long enough. Add a passenger, or make me the passenger (including on public transit, or conceptually in a self-driving car), and the options for distraction, even if purposefully shunned, instantly eliminated all of the clarity-of-thought benefits.

It had to be an exercise that took long enough, where distractions weren’t possible and where some minimum level of focus was required. If I could read email and respond to texts during the drive — if it weren’t irresponsible and dangerous for other people on the road, say if my car were self-driving — it would have ruined it.

The radio morning shows were terrible, and I’ve yet to hear a podcast that isn’t ten seconds of content fluffed up to sixty minutes, so I often drove with just some classical music on CBC Radio 2 playing quietly in the background.

I hated the time wasted commuting, and the guilt about the environmental consequences, but I always enjoyed the period of thought. The concept of spending that time being angry listening to sports radio (Ebron did not commit an OPI) or an audiobook sounded terrible to me.

Then I started working at home and lost the benefits of the commute. I tried to find surrogates (lying in a hammock, soaking in a warm bath, etc.), but forcing it always ended up being an exercise in focusing on the things I should have been doing instead. It was futile.

I, like probably all of you, pored over Tricycle articles on meditation, deep thought, and so on, to no avail. All of the singing bowls and gongs couldn’t relieve my brain.

Unintentional Zen

We have a very large lawn and a long driveway. Mowing the lawn is about an hour-long exercise on a riding lawn tractor. I put on the ear protection, fire it up, and for the next hour I’m Hank Hill driving in concentric squares. When the winter rolls around I’m pushing a snowblower 180 feet down a lane, back and forth and back and forth, followed by shoveling accessory areas.

These were my zen. I didn’t realize it, or their importance, at the time, but I did know that I liked doing them. That I always finished the exercise feeling relaxed and relieved.

Occasionally I’d try listening to music during the process, but the feeling that I needed to stay alert for screaming voices while operating dangerous equipment had me revert to nothing more than ear protection.

I hadn’t realized just how important this was to my mental well-being until this summer rolled around. We had an extended drought, and for a good three months there was barely a dribble of rain.

The grass went into hibernation. Mowing wasn’t necessary.

I had no Zen. Stress levels rose. The sense that I was operating without a plan increased. A panic of time flooding away rose. Months passed.

But the rains returned (spoken as Morgan Freeman). The grass grew again.

On my first outing back on the (15) horse(power), it hit me like a tree branch in the face: the relief as pent-up considerations were processed and prioritized was enormous. I was thinking through family matters and personal projects, and weighing career moves and options.

I hadn’t done this in literally months, and the sense of purpose with direction was overwhelming.

This was my zen. It was a period of time when I was essentially captive, with no options for distraction, and where I didn’t have to focus on social niceties or concentrate deeply on the physical activity. It was the only time during an entire week when a thought continued for more than a few seconds. I’ve briefly achieved something similar before while cooking (during time-intensive periods where focus and attentiveness were required, but complexity was minimal), and even in online first-person shooters where my play is essentially autonomous.

I realized just how critically important this is to my progress and well-being.

Disconnected Manual Labour

There is a glorious segment in the third season of House of Cards where some Tibetan Buddhists are creating a mandala. A mandala is a sand or coloured stone paint-by-numbers where you use a chak-pur as the implement.

It’s a beautiful practice, and one of the most appealing aspects of the exercise is that it’s then destroyed (sometimes prematurely), treated as a philosophical (if not mystical) representation of the transitory state of life. It isn’t kept as a fingerprint or ego exercise to shackle the future.

I imagine that being involved with creating a mandala, at least after you’ve achieved the basic skills of performing the task and using the chak-pur, is much like mowing the lawn: a time of just the right amount of focus (neither too much nor too little, the chak-pur slowing the process enough that it isn’t just shaping some sand into an area) to have the ability to really think. It’s something I’ve always wanted to do.

I find the same meditative benefit to other manual tasks. Chopping firewood, for instance. Long hikes on trails I already know. When I was a teen I would get up before the sun rose and ride my bike 20km to a beach, and then home again. I’ve always imagined this is the draw for people who run regularly, using it as a period of thought and contemplation.

Knitting and other tasks, once some level of competence is achieved, must fulfill the same purpose as well.

Seven Tips For A Better You

So here’s where I provide the easy solutions and trite pablum to make it seem like I’ve soundly wrapped everything up and made you a better person for having read this.

I’m not going to do that. Instead I offer up that you should consider your own hobbies and activities, and determine what your thinking time is, and whether you’re robbing yourself of it.

And if you don’t have one, pick up some sort of hobby or pursuit to provide it. (As an aside, there’s a whole potential business domain around this: there are many people who would pay for the privilege of doing manual labour just to have a purposeful reason to do something while retaining the mental capacity for deep thought.) I’ve worked in several offices with “quiet thinking areas”, but no one ever actually used them to think (they universally became “make cell phone call” areas), and even if people tried, for most of us simply having no distractions does nothing to aid focus and might actually impede it.

Sitting with your eyes closed simply doesn’t work for most of us.

EDIT: A timely post appeared on Wired today – What Gives With So Many Hard Scientists Being Hard-Core Endurance Runners? And to avoid the appearance of following a herd, my post went up at 5:37am (the dog woke me early), while theirs went up at 7am.

Why We Program / The Love of the Craft

I’ve been a professional software developer for 20 years.

I’ve built embedded software, beginning with a TSR IRQ-driven remote management solution for a single-board computer (to allow us to securely control it on-demand over a modem), to a QNX-based RTOS monitoring and control platform. I’ve designed and constructed data processing and aggregation solutions as Windows services, Linux daemons, and duct-taped batch processes. I’ve built Win32  and Win64 applications (native and managed), DCOM/COM/COM+ tiered solutions — remember when n-tier was the price of entry to professionalism? — CORBA and microservices. I’ve been a Microsoft guy, a Delphi guy, a C# guy, a database guy, a Linux guy, a security guy, a web application guy, and a high performance big data processing guy. I’ve even been a domainologist.

I’ve built mobile applications and mobile targeting platforms on iOS and Android, and Windows CE years ago.

I’ve slung considerable C, C++, (Object) Pascal, Go, Java, and C# professionally. I’ve toyed with countless others. I’ve designed and managed enormous systems and databases, including for a couple of Canada’s biggest corporations.

I’ve been involved with a variety of styles of teams, in various levels of structure and rigidity. In a banking group with absolutely draconian process and hierarchy. In an agile financial upstart essentially rolling with whatever fits, gnashing around to find a process that works. In a telecom company somewhere in between.

I’ve been a team leader, a vice president of a medium-sized organization, a software architect, a “junior” and “senior” software developer (hilariously, I got the latter title about two years into this profession, denoting the ridiculously shallow career path many firms have in the hands-on realm).

None of this is a brag, given that it is hardly brag-worthy: I have no Silicon Valley experience, being the sort that generally stays within some radius of his birthplace (currently about 200km). I have no starred GitHub projects. I have never worked on shrinkwrap software, or a triple-A game. I don’t have an inbox full of recruiter solicitations. I’ve never written a book (though I have authored magazine articles, for what that’s worth).

The vast majority of my work has been for companies around the edge, and for a lot of my career I have been “a big fish in a little pond”. My solutions are critically important for the people I work for, but not that important for society at large.

None of that was intended to be aggrandizing, and there are countless far more impressive developers in this field.

Instead the purpose of those paragraphs is to say that I’ve had a lot of varied experiences in this business, from starting as a junior tasked with doing the grunt work (where being a lazy worker I automated a manual process into a solution that grew into substantial business for the company), to guiding the implementation and technology for a pretty large team, including laying the blueprints in code.

Over my career I’ve had many paths out of programming. Options to move to pure management, or to architecture management (which unfortunately doesn’t include much actual software architecture). Even a CTO offer for a mid-sized company, albeit one that used technology in a peripheral sense, and where the job mostly meant doing boring vendor comparisons and meet-and-greets.

I turned them all down. I remained a programmer, or in a mixed position where my day to day still included at least 50% hands on. During periods this meant going solo and pitching consulting to all takers, particularly when our children were young and the demands of the family were many, essentially taking a sabbatical with occasional engagements to pay the bills.

I of course weighed the pros and cons. On the pro side of moving more to management, the requirements are much fuzzier and more abstract, versus the technical paths, where many unenlightened shops demand a laundry list of very specific technologies.

I love solving problems. I love the craft that we ply. The raw, voice-squeaking joy when a rashly implemented solution using some cool (ergo fun) new language or framework or technology actually works and solves some problem remains a reality of my world. I couldn’t stand moving too far from it. My eldest son is now going down the same path, and from day to day he has gone from Unity and C# to Java, Node.js, and most recently Python, enjoying the challenge and excitement of being exposed to new ideas and patterns.

I didn’t and don’t want to move to pure management.

To a lot of people, this is hard to understand. One of the root causes of ageism in this field, I suspect, is that a lot of people really don’t or didn’t like doing it: if it feels like a burdensome chore, then why in the world would someone want to keep doing it later in their career? I remember being in my 20s and having these discussions with full-of-themselves peers who were sure that they would be managers by 30, VPs by 40, CTOs by 50, and so on, following the traditional path of the 1950s man. There was a prevailing notion, and it’s still evident, that anyone who hasn’t ascended in such a manner obviously failed during the climb.

It is the manual labor mentality applied inappropriately to a field of intellectual excellence. You start in the mail room, and then…

An architect is still an architect. A doctor is still a doctor. A researcher still a researcher. An artist is still an artist and a musician is still a musician.

So why then do programmers have to ascend into managers?


Bokeh and Your Smartphone – Why It’s Tough To Achieve Shallow Depths of Field

I’m going outside of the normal dogma today to focus on a field in which I am a bit of a hobbyist: I’m sort of a photography nerd, primarily interested in the fascinating balance of compromises that is optics, so I’m in my domain a bit with this piece on photography and depth of field. There are many great articles on depth of field out there, but here I’m primarily focused on the depth of field issue of smartphones, and the often futile quest for bokeh.

Bokeh is the photography effect where background lights and details are diffuse and out of focus, as seen in these Flickr photos. Often it’s contrasted against a sharply focused foreground subject, providing an aesthetically pleasing, non-distracting backdrop.

To photographers this has always been called a shallow depth of field: with a large aperture and a longer focal length it is the normal technique to isolate a subject from the background, and is a mainstay of photography. The term “bokeh” has taken root among many courtesy of a late-90s photography how-to magazine article, so you’ll come across it frequently. Some purists hold that it refers specifically to the rendering of blurred light patterns, while in general parlance it just means “out of focus background”.

The best-known mechanisms to control the depth of field on a given piece of imaging hardware are the aperture (a.k.a. the f-stop) and the distance to the subject (the closer to the lens, the shallower the depth of field), with each new device seemingly offering a wider aperture to enhance the options.

The Nexus 6p has an f/2 capable camera. The iPhone 6S offers f/2.2 at a 29mm equivalent, and the new iPhone 7 pushes new boundaries with an f/1.8 lens (28mm equivalent, with a second 56mm equivalent lens on the 7+).

The ultimate portrait lens in the 35mm world is an 85mm f/1.4 lens.

On a traditional SLR-type camera, f/1.8 is a very wide aperture (aside – a “small” aperture has a larger number, e.g. f/22, while a wide or “large” aperture has a smaller number, such as f/1.8). If the scene isn’t too bright, or you have some neutral filters handy, it is gravy for a dish full of bokeh.
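To put rough numbers to that, the physical aperture diameter is just the focal length divided by the f-number, which is why the same f-number means wildly different physical openings on different hardware (a quick sketch using the focal lengths quoted above):

```python
# Physical aperture diameter = focal length / f-number.
def aperture_diameter_mm(focal_length_mm, f_number):
    return focal_length_mm / f_number

# The 85mm f/1.4 portrait lens has a ~61mm opening...
print(round(aperture_diameter_mm(85, 1.4), 1))
# ...while a smartphone's 4.67mm f/2 lens opens to only ~2.3mm.
print(round(aperture_diameter_mm(4.67, 2.0), 2))
```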

By now you’ve probably learned that it’s really hard to achieve a shallow depth of field with your smartphone unless the subject is unreasonably close to the device (e.g. fish-eye distortion of someone’s face), despite those seemingly wide apertures. Most everything is always in focus, so while the once-endemic problem of slightly out-of-focus shots is gone (being sort of close in focus is often good enough), the cool effects are tough to achieve. Instead, blurry smartphone photography is primarily caused by too low a shutter speed coupled with moving subjects or a shaky photographer.

But why is that? Why does your f/2 smartphone yield such massive depths of field, making bokeh so difficult? Why isn’t f/2 = f/2? If you’re coming from the SLR world and install some great manual control photography app on your smartphone, you likely found yourself disappointed that your f/2 smartphone isn’t delivering what you’re accustomed to elsewhere.

Focal length is why you are separated from your bokeh dreams. While my Nexus 6p has a 28mm equivalent (compared to the 35mm camera benchmark) focal length, it’s actually a 4.67mm focal length. Courtesy of the physics of depth of field, its focal length means an f/2 on this device is equivalent to about an f/10 DoF on a 35mm lens when the subject is at the same distance from the lens. The iPhone 6 has a focal length of 4.15mm, while the iPhone 7 offers up lenses of apparently 3.99mm and 7.7mm.
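A crude way to estimate that equivalence is the crop factor: divide the full-frame equivalent focal length by the actual focal length, then multiply the f-number by it. (A sketch only; this rule of thumb lands around f/12 for the Nexus 6p, in the same ballpark as the ~f/10 figure above, and the exact value depends on which circle-of-confusion convention you adopt.)

```python
# Depth-of-field-equivalent aperture via the crop-factor rule of thumb.
def equivalent_f_stop(f_number, actual_focal_mm, equivalent_focal_mm):
    crop_factor = equivalent_focal_mm / actual_focal_mm
    return f_number * crop_factor

# Nexus 6p: 4.67mm actual focal length, ~28mm full-frame equivalent, f/2.
print(round(equivalent_f_stop(2.0, 4.67, 28), 1))  # ~12
```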

This is easy enough to prove. Here’s an f/2 photo on my Nexus 6p. The subject is about 30cm from the lens.


Now here’s approximately the same scene, at the same distance, with a zoom lens set at ~28mm on a Canon T2i (approximating the zoom level of the Nexus 6p fixed focal length), the aperture set to f/10.

Canon T2i f/10 @ ~28mm

While each device has its own post-processing (the T2i in this case is set to neutral, while the 6p, like most smartphones, is fairly heavy handed with contrast and saturation), if anything the SLR features as much or more blurring, despite a significantly smaller aperture.

This is the impact of the focal length on the depth of field. Here’s the same subject shot from the same distance, the zoom lens set to 55mm, the aperture still at f/10. The depth of field collapses further (it isn’t just a crop of the above picture; the DoF genuinely shrinks).

Canon T2i @ 55mm f/10

And for comparison, here it is at f/5.6:

T2i - 55mm f/36

So why is this?

First let’s talk about something called the circle of confusion (CoC) to get it out of the way as a parameter of the calculations that follow. In this discussion the CoC is the amount that a “focused” photon can stray outside of the idealized target before it leads to blur for a given sensor. There are many, many tables of static CoC values, and a lot are very subjective measures (e.g. “if you print this as an 8×10 photo and view it from 3 feet away, what amount of blur is indiscernible?”). For my calculations I take the CoC as 2× the pixel stride of the target sensor (via the Nyquist criterion), but you can use a table or your own mix of crazy as the CoC. I leave that open.

The Nexus 6p has a sensor that is 6.324mm wide, containing 4080 pixels per line (not all pixels are active, so this was measured via the SDK). That works out to a pixel stride of 0.00155mm, and doubling it we get 0.0031mm. That is the CoC I’m using for the Nexus 6p.
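The arithmetic is trivial, but worth sketching since the same CoC convention feeds everything that follows:

```python
# CoC = 2 x pixel stride (the Nyquist-style convention described above).
sensor_width_mm = 6.324
pixels_per_line = 4080

pixel_stride_mm = sensor_width_mm / pixels_per_line  # ~0.00155mm per pixel
coc_mm = 2 * pixel_stride_mm                         # ~0.0031mm
print(pixel_stride_mm, coc_mm)
```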

We know the focal length (4.67mm), and we know the desired CoC (0.0031mm), so let’s calculate something called the hyperfocal distance.

The hyperfocal distance is the focus distance where everything to infinity, and to approximately 1/2 the focus distance, will also be in effectively perfect focus for a given aperture. It is a very important number when calculating DoF, and the further the hyperfocal distance, the shallower the DoF will be for closer subjects.


(The original post embedded an interactive calculator here, with inputs for the focal length (mm), the CoC (mm), and the f-stop, outputting the hyperfocal distance (mm).)


For these parameters (4.67mm focal length, f/2, and the CoC above), the hyperfocal distance works out to roughly 3.5 meters, and changing the f-stop, the focal length, or the CoC changes the hyperfocal distance accordingly. What that means is that if a focused subject is that distance from the lens, at those settings, the furthest distances (the mountains on the horizon, the stars in the sky, etc) will still be completely in focus, as will everything from about half the focus distance onward. It is the hyper-of-focuses, and is a critical number for landscape photographers.
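Since the interactive calculator doesn’t survive outside the original page, here is the same computation as a sketch (the standard hyperfocal approximation; conventions differ on whether to add the trailing focal-length term, which barely matters at these scales):

```python
# Hyperfocal distance: H = f^2 / (N * c) + f
#   f = focal length (mm), N = f-number, c = circle of confusion (mm)
def hyperfocal_mm(focal_mm, f_number, coc_mm):
    return focal_mm ** 2 / (f_number * coc_mm) + focal_mm

# Nexus 6p at f/2 with the CoC derived above: ~3522mm, i.e. about 3.5 meters.
print(hyperfocal_mm(4.67, 2.0, 0.0031))
```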

Focusing beyond the hyperfocal distance does nothing to improve distant focus for this CoC; it simply unfocuses closer objects. Once again I have to note that CoC is not a fixed constant, and if you had a sensor with 4x the pixels, the CoC by my method would halve and the focus would need to be more precise. Others would argue, with good reason, that the CoC should be a percentage of the total span such that the same effective amount of blur is seen across devices, while my measure targets technical perfection for a given device.

The hyperfocal distance is the basis of the calculations for the near and far limits of the depth of field. Let’s calculate the DoF for a given focus distance. Note that these values are in millimeters, as most things are in the photography world (so instead of 10 feet, use 3048).

(The original post embedded a second interactive calculator here, taking the subject distance (mm) and outputting the near and far depth of field (mm).)

Beyond the near and far limits of the depth of field, the defocus of course increases with distance.

If you use a focal length of 4.67, a CoC of 0.0031, an f-setting of 2.0, and a subject distance of 300 (30cm, the distance in the above picture), the near and far depth of field calculate to roughly 277mm and 327mm, meaning everything within that range of the camera should be focused perfectly on the device, and the further outside of that range, the more defocus is evident. Altering those values for the SLR, with a focal length of 28, a CoC of 0.0086 (the SLR has a much larger sensor), and an f-setting of 10.0, with the same subject distance of 300, yields a smaller depth of field of roughly 291mm to 309mm. A significantly smaller aperture, yet an increased amount of bokeh at a given distance.
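Those worked examples can be reproduced with the standard near/far limit formulas built on the hyperfocal distance (a sketch; published DoF tables use slightly different conventions, so expect small differences in the last digits):

```python
# Near and far limits of acceptable focus for a given subject distance.
def depth_of_field_mm(focal_mm, f_number, coc_mm, subject_mm):
    h = focal_mm ** 2 / (f_number * coc_mm) + focal_mm  # hyperfocal distance
    near = subject_mm * (h - focal_mm) / (h + subject_mm - 2 * focal_mm)
    far = subject_mm * (h - focal_mm) / (h - subject_mm)  # valid while subject < h
    return near, far

# Nexus 6p at f/2, subject at 300mm: roughly 277mm to 327mm.
print(depth_of_field_mm(4.67, 2.0, 0.0031, 300))
# SLR at 28mm f/10, same subject distance: roughly 291mm to 309mm.
print(depth_of_field_mm(28.0, 10.0, 0.0086, 300))
```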

All f-stops are not created equal, which is why Apple is artificially simulating bokeh on their newest device (as have other vendors). Your f/1.8 smartphone might provide light advantages, but don’t expect the traditional depth of field flexibility. On the upside, this is the reason why almost all smartphone photography is sharp and in focus.

I love using my smartphone to take photos (or stabilized videos): It is the camera that is always with me, and an enormous percentage of the shots turn out great. When I’m looking for defocus effects I reach for the SLR, however.

h.265 (HEVC) Encoding and Decoding on Android

I periodically give some attention to Gallus1 (an awesome-in-every-way stabilized hyperlapse app for Android, doing what many said was impossible), tonight enabling h.265 (HEVC) support for devices surfacing hardware encoding/decoding for that codec, still packaging the output in a traditional MP4 container.

The Nexus 6p, for instance, has hardware HEVC encoding/decoding via the Snapdragon 810 (one of its benefits over the 808), but it was inaccessible for third-party developer use until the Android N (7.0) release. I had done trials of it through some of the 7.0 betas, but until recently it was seriously unstable. With the final release it seems pretty great.

h.265 is a pretty significant improvement over the h.264 (AVC) codec that we all know and love, promising about the same quality at half the bitrate, or alternatively much better quality at the same bitrate. It also features artifacts that are arguably less jarring when the compression is aggressive or breaks down. And on a small, mobile processor it can be encoded efficiently in real time at up to 4K resolutions.

One aspect of Android development that might surprise many is just how fragile some of the subsystems of the platform are. If you fail to utilize the camera API in specific patterns and processes (I am not talking about documented, demanded behaviors, but rather discovered quirks of operation, where some sequences of events require literal imperative pauses, found through trial and error, to avoid defects), the entire camera subsystem will crash and be unrecoverable by any means other than a full restart. The same is true of the h.265 encoder and decoder on the Nexus 6p, and in implementing the new functionality I had to restart the device a dozen-plus times to recover from subsystem failures as I massaged the existing code to yield good behavior from the new codecs. Ultimately I find Android to be an incredible, amazing platform, but it remains surprising that so much of it is a perilous house of cards of native code behind a very thin API.

1 – I’ve never paid much attention to the Play Store listing, which still features screenshots from the much more primitive (in UI, not in algorithmic awesomeness) initial version, and I’ve never made a tutorial or real demonstration (it is absolutely incredible on the Nexus 6p). It always seemed like there was one more thing I wanted to fix before I really made a deal out of it. But it is pretty awesome, and if someone wants to volunteer some sales material, I’d more than welcome it.

Strive For Technical Debt

Technical debt is the differential between the solution you have in hand, and the idealized hindsight-refined vision of what it could be: If there’s a solution solving problems, there are people telling any who’ll listen about all of the mistakes made in its implementation.


This debt is lamented on programming boards everywhere as the folly of those who came before. Not only as the cheap critique of others, but we often criticize our own implementations when we had to take shortcuts due to time constraints, help was unavailable, we didn’t entirely understand the platform or the requirements, tooling was limited, etc.

Pragmatic developers — the ones with the actual task of creating solutions — view it as a “how the sausage is made” kind of reality, while others view it as a mistake that shouldn’t have happened, one that with proper care and control we could learn from and avoid in the future. In the imaginary view of projects, we’d ideally build with the planning, time, knowledge and resources to never have technical debt in the first place.

But that’s nonsense the world over, across almost every strata of projects. Code starts ugly, universally. It’s a basic truth of the industry.

There are no exceptions. If you live under delusions that your current solution isn’t ugly, be sure to revisit this assessment in a year. And when a project really tries to circumvent this, it invariably ends up in analysis paralysis, nothing of consequence generated after person-decades of development.

And this idealized hindsight suffers from severe survivorship bias. The idealized “make-it-right-from-the-outset” projects that failed and fell by the wayside years before are forgotten, essentially lost in time, while we remember and study the process that led to the clunky duct-tape project that has run the organization for the past decade, hanging around long enough to be resented and belittled.

Technical debt almost always comes with success. It comes with a project becoming important enough that it matters, drawing attention and consideration and criticism. It’s exactly those projects that are good enough that we find it hard to rationalize replacements (this is where dogma diverges from reality), and when we do, the second-system effect is in full force, our efforts to justify new projects leading us to overwrought, overly ambitious efforts.

Celebrate if you have technical debt to complain about. It usually means you’re sitting on something successful enough to matter.

Remember When The Suburbs Were Doomed?

Software-development focused online communities skew male, and generally younger (e.g. 20s to mid 30s). Most live in dense urban areas (the Bay, Seattle, NYC, London, etc), often in smallish apartments and condos. Few have families.

As a side effect of the human condition, there is a tendency to cast one’s own lot either as the inevitable situation for most (e.g. people who have it different from me are living on borrowed time), or as the more righteous, principled choice (better for the planet, development, futurology, etc.). This is an observation I learned from watching myself make exactly these rationalizations over time.

Stories declaring or predicting the end of the suburbs always do well on these forums. I’ve seen poorly researched stories predicting this topping sites like Hacker News again and again: It tells the audience what they want to hear, so the burden of proof disappears.

But this assumption that suburbs are doomed has always struck me as counter-intuitive. While there has been a significant migration from rural areas to urban areas in virtually every country of the world (largely as rural workers are effectively displaced by factory farming or obsolete skillsets and moving becomes a basic survival pattern), suburbs seem to be as bulging as they’ve ever been. In the United States, for instance, the fastest growing areas of the country have been medium density (e.g. suburbs and small cities). Very high density areas have actually been declining much faster than even rural areas have.

The suburbs aren’t actually dying.

But maybe they will soon? The argument that they will is completely predicated on a single thing happening: We’re going to run short on oil, transportation costs are going to explode, and suddenly the onerous costs of living in far flung places is going to cause mass migration to the city centers, everyone defaulting on their giant suburban McMansion mortgage, the rings turned into a desolate wasteland.

Increasingly it seems like we’re more likely to keep oil in the ground than to run out. Alternatives to this easy energy source are picking up pace at an accelerating rate. As electric vehicles hit the mainstream, they’re becoming significantly more economically viable options for high-mileage drivers: fuel for electric cars costs somewhere in the range of 1/5th per mile compared to gasoline, even in high-cost areas like California, and the miles are much cheaper from some solar panels or a windmill than from a gallon of gasoline, even at the current depressed prices. And that’s excluding the significant mechanical costs of internal combustion engines, which would soon be dramatically undermined by mass-produced electric vehicles.

You can go much further for less than ever before, with the specter of oil’s decline being less and less relevant. If anything transportation is going to get a lot cheaper.

Of course the commute itself has always been a tax on life, and personally I can say that I quit jobs after doing the grueling big-city commute. But we’re on the cusp of our cars doing the driving for us. Very soon the drive will be quality time to catch an episode of a good show, or maybe take a quick nap. The capacity of even existing roadways will dramatically increase once you remove human failure and foible.

Connectivity…well everyone everywhere is connected, incredibly inexpensively. When I was a kid we had to pay $0.30/minute+ to talk to people 20km away. Now continent wide calling is free. Internet connectivity is fast and copious almost anywhere. Many of us work remote jobs and it really doesn’t matter where we are.

We’re virtual.

I’m not making any predictions or judgments, but the inputs to the classic assumptions have changed enormously. I recently entertained the idea of living even more remotely (right now I work at home in a rural exurb of Toronto — this doesn’t qualify as even suburbs — but there are of course far more remote areas of this country), and it’s incredible how few of the factors are really compromised anymore: I’d still have 100s of channels, high speed internet, 24/7 essentially free communications (text, audio, video, soon enough 360 vision at some point with depth) with every family member, overnight delivery of Amazon Prime packages, etc.

Being in a less dense area just isn’t the compromise it once was. And that’s talking about fully rural areas.

The suburbs — where you still have big grocery stores and bowling alleys and neighborhood bars and all of the normal accouterments of living — just aren’t much of a compromise at all. When someone talks up the death of the suburbs, I marvel at the 1980s evaluations of the 2010s world. I would argue the contrary: a few communicable disease outbreaks (e.g. SARS v2) and humans will scurry from density.

Facebook Instant Articles

Both Google and Facebook introduced their own lightweight HTML subsets: AMP and Instant Articles, respectively. I mentioned AMP on here previously, and via an official WordPress plugin it’s accessible by simply appending /amp to any page’s URL. Both impose a restrictive environment that limits the scope of web technologies you can use on your page, allowing for fewer/smaller downloads and less CPU churn.

The elevator pitch page for Facebook’s Instant Articles is an absolute monster, bringing an i5-4460 to its knees by the time the page had been scrolled to the bottom. There’s a bit of an irony in the pitch for a lightweight, fast subset of HTML being a monstrous, overwrought, beastly page (the irrelevant background video thing is an overused, battery sucking pig that was never useful and is just boorish, lazy bedazzling).

I’m a user of Facebook, with my primary use being news aggregation. As many content providers all herded in line to the platform, I stopped visiting their sites and simply do a cursory browse of Facebook periodically: BBC, CBC, The Star, Anandtech, The Verge, Polygon, Cracked, various NFL related things, and on and on. On the whole I would wager that these sites engaged in a bit of a Tragedy of the Commons racing to the Facebook fold, though at some point the critical mass was hit and it became necessary to continue getting eyeballs.

The web is ghetto-ized and controlled.

More recently Facebook started corralling mobile users to their own embedded browser (for the obvious overwatch benefits). And now they’re pushing publishers to Instant Articles.

But the transition isn’t clean. Many sites are losing most embedded content (Twitter embeds, social media). Curiously, lots of pages are throwing malformed XML errors. And it is accidentally acting as an ad blocker on many web properties, with sites unintentionally squashing their own ads, filtered out as non-Instant-Article compliant.

It’s interesting how quietly this is all happening. This once would make for pretty big tech news (especially Facebook embedding the browser). Now it’s just a quiet transition.

A Decade of Being the World’s Pre-Eminent Domainologist

It’s been 10 years since I had a brief moment of glory1 with the whole domain names hoopla. At that time I had posted a lazy analysis of the root .COM dump, and it satisfied a lot of people’s curiosity. It was linked widely, eventually leading to an NPR radio interview, a front-page article in the Wall Street Journal (okay…front of the second section), and a variety of radio and news interviews.

It was an educational experience for me because I had been doing the blogging thing for a while, largely deeply technical pieces, seeing little success and no uptake2. Then I posted something that seemed like a bit of superficial fluff and saw enormous success (from a blogging sense, if readers and links are the measure of worth). I still see several dozen visitors a day coming from random links around the tubes to those old domain name entries.

I posted a brief follow-up to it then. After the attention, countless others tried to grab some of that magic, an industry of data scientists suddenly aware that you could access the zone files (there’s also the CZDS for the wide plethora of new TLDs).

As mentioned then, there was really no personal advantage in it for me.

I don’t do anything professionally related to domain names (beyond, of course, operating on the web, but that applies to almost everyone now), so I couldn’t spin it into some sort of product or service.

Career wise it was a wash: I was working at a small financial services company and it was a bit of an ego boost when the partners were excited to see their name in the Wall Street Journal. That was apparently quite a coup, and they got calls from various clients who all started their day with the WSJ. Which was just random chance, as I had literally just started working with them after a period doing the consulting thing (I’ve gone back and forth during my career, the demands of family life dictating changes of positions).

But it was fun, which was all I really intended from it. Still have a couple of copies of the WSJ issue in a drawer somewhere. Can’t believe it’s been ten whole years.

1 – I had some brief exposure for promoting SVG in a period when it was floundering: I managed to convince Microsoft to run it in an issue of their MSDN Magazine, which led to quite a lot of chatter and excitement about the technology, many seeing this as Microsoft endorsement (at the time the company was pushing the competing VML, and Flash seemed to be strangling SVG). I also proposed a change to Firefox that made it much more competitive in artificial benchmarks and some real world situations. Those are my pretty middling claims to fame, as most of my career has been doing hidden stuff for a shadowy cabal of companies. The little public bits of exposure were a delightful change.

2 – One of the things about “blogging” that I noted early on is that you have to have a niche or common perspective to pander to a crowd. Be a C# guy talking about C#, and why C# and .NET is the greatest. A Go guy talking about Go. An Apple booster heralding Apple’s choices, and criticizing opponents consistently and predictably. Like politics in technology, you need to align with a group, pandering to their beliefs, and they’ll carry you on their shoulders.

But technology is seldom so narrow, and few choices aren’t a perilous mix of pros and cons.

If you don’t stick to a niche you need to make “easy listening” sorts of articles, which the DNS entry satisfied (which has the added advantage that they’re dramatically easier to write).

Alternately — and the best option of all — just be a really good writer making great content. I don’t satisfy that requirement, so I’m somewhere between niche and easy listening.

Provably Fair / Gaming

A little bit of a diversion today, but just wanted to belatedly post a bit of commentary on the whole recent game/virtual item gambling controversy.

EDIT: 2016-07-14 – Shortly after I posted this, Valve announced that they were going to start shutting off third party API access if it’s used for gambling (no I’m not claiming they did this as a result of me posting, but rather just noting why I didn’t mention this rather big development below). This morning Twitch essentially also banned CS:GO item gambling (though they’re trying to avoid any admission of guilt or complicity by simply deferring to Valve’s user agreements).

Like many of you, I find that work and family demands leave little time for gaming. One of the few games I enjoy — one that allows for short duration drop-in sessions and has been a worthwhile mental diversion when dealing with difficult coding problems — is Counter-Strike: Global Offensive (CS:GO).

The game is a classic twitch shooter. It has a very limited, curated set of weapons, and most rounds are played on a limited number of proven maps.

I’m a decent player (though it was the first game where I really had that “I’m too old for this” sense, with my eleven year old son absolutely dominating me). It’s a fun, cathartic diversion.

Nine games out of ten I end up muting every other player as the player base is largely adolescent, and many really want to be heard droning on. The worst players seem to be the most opinionated, so with every match the guy sitting at the bottom in points and frags always has the most to say about the failure of everyone else’s gameplay (this is an observation that holds across many industries, including software development. This industry is full of people who’ve created nothing and achieved little explaining why everyone else is doing it wrong).

The CS:GO community also has an enormous gambling problem, as you may have heard. This came to a head when a pair of popular YouTubers were outed as owners of a CS:GO skin gambling site. These two had posted a number of arguably “get rich….quick!” type videos demonstrating highly improbable success, enticing their legions of child fans to follow in their possibly rigged footsteps.

Skins, to explain, are nothing more than textures you apply to weapons. The game often yields situations where other players spectate your play, and having unique and less common skins is desirable as a status thing. So much so that there is a multi-billion dollar market of textures that people will pay hundreds of dollars for (Steam operates a complex, very active marketplace to ensure liquidity).

The whole thing is just dirty and gross, with Valve sitting at the center of an enormous gambling empire mostly exploiting children all spending those birthday gift cards. It casts a shadow over the entire game, and those awaiting Half Life 3 will probably wait forever, as Valve seems to be distracted into only working on IP that features crates and keys.

The machinations of crates and keys, where the rewards you win are traded on a Valve-run marketplace denominated in real currencies, amount to gambling: if you’re paying real money for small odds of winning something worth more money (again, Valve provides the marketplace and helpfully assigns the real-world value), it’s only a matter of time before the hammer falls hard on these activities. Valve is operating in a very gray area, and they deserve some serious regulatory scrutiny.

Anyways, while being entertained by that whole sordid ordeal, the topic of “fair” online gambling came up. From this comes the term “provably fair”, which is a way that many gambling enterprises add legitimacy to what otherwise might be a hard gamble to stomach.

It’s one thing to gamble on a physical roulette wheel, but at least you know the odds (assuming the physics of the wheel haven’t been rigged…). It’s quite another to gamble on an online roulette wheel where your odds of winning may actually be 0%.

“You bet black 28….so my `random’ generator now picks red 12…”

So the premise of provably fair came along. With it you can generally have some assurance that the game is fair. For instance, for the roulette wheel the site might tell you in advance that the upcoming wheel roll — game 1207 — has the SHA1 hash of 4e0fe833734a75d6526b30bc3b3620d12799fbab. After the game it reveals that the hashed string was “roaJrPVDRx – GAME 1207 – 2016-07-13 11:00AM – BLACK 26” and you can confirm that the string hashes to the published value, and that the spin’s outcome therefore didn’t change based upon your bet.
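A minimal sketch of this commit-and-reveal scheme, assuming an illustrative string format of my own (the random salt stands in for the “roaJrPVDRx” prefix, and I use SHA-256 rather than SHA1, which the article only used demonstratively):

```python
import hashlib
import secrets


def commit(outcome: str, game_id: int) -> tuple[str, str]:
    """Site side, before the game: build the secret string and publish only
    its hash. The random salt keeps players from brute-forcing the tiny
    outcome space."""
    salt = secrets.token_urlsafe(8)
    secret = f"{salt} - GAME {game_id} - {outcome}"
    digest = hashlib.sha256(secret.encode()).hexdigest()
    return secret, digest


def verify(secret: str, published_digest: str, claimed_outcome: str) -> bool:
    """Player side, after the game: check the revealed string matches the
    pre-published hash and actually ends with the announced outcome."""
    return (hashlib.sha256(secret.encode()).hexdigest() == published_digest
            and secret.endswith(claimed_outcome))


# Before the spin the site publishes only the digest...
secret, digest = commit("BLACK 26", 1207)
# ...and after the spin it reveals the secret string for anyone to check.
assert verify(secret, digest, "BLACK 26")
```

The commitment proves the outcome was fixed before your bet was placed; as the article notes below, it proves nothing about payouts or record-keeping.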

That’s provably fair. It still doesn’t mean that the site will ever actually pay out, or that they can’t simply claim you bet on something different, but the premise is that some sort of transparency is available. With a weak hash (e.g. don’t use SHA1. That was demonstrative) or a limited-entropy committed string, it might even allow players to hack the game. To know the future before the future.
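To make the limited-entropy failure concrete, here is a sketch (the unsalted string format is hypothetical): if the committed string contains no random salt, a player can simply hash every one of the 37 possible roulette outcomes and compare against the published digest, learning the spin before it happens.

```python
import hashlib


# Hypothetical unsalted commitment format; the point is only that a small,
# guessable outcome space can be exhausted before the wheel ever spins.
def commit_unsalted(game_id: int, outcome: int) -> str:
    return hashlib.sha256(f"GAME {game_id} - NUMBER {outcome}".encode()).hexdigest()


def crack(digest: str, game_id: int):
    """Hash all 37 possible roulette outcomes and match the published digest."""
    for n in range(37):
        if commit_unsalted(game_id, n) == digest:
            return n
    return None


# A player who sees the pre-game hash knows the future before the future:
assert crack(commit_unsalted(1207, 26), 1207) == 26
```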

You can find provably fair defined on Wikipedia, where the definition is suspect, seemingly posted by someone misusing it and being called on it (“it is susceptible to unscrupulous players or competitors who can claim that the service operator cheats” What?)

Anyways, the world of CS:GO gambling makes for an interesting lens on how well the term provably fair is actually understood.

csgolotto, the site at the center of all of the hoopla, does little to even pretend to be provably fair. Each of their games randomly generates a percentage value, and a hash of that value plus a nonce is provided, but that does nothing to assure fairness: For the duels the player chooses a side. If the predetermined roll — which an insider would obviously easily know — was below 50, someone with insider knowledge could simply choose the below-50 side, and vice versa. Small betting differences slightly change the balance, but there are no apparent guards against insider abuse, and it’s incredible that anyone trusted these sites.

The pool/jackpot game relies upon a percentage being computed for a game — say 66.666666% — and then as players enter they buy stacked “tickets”, the count depending upon the value of their entries. So player 1 might have tickets 1-100, player 2 tickets 101-150, and player 3 tickets 151-220. The round expires and the 66.6666% ticket is #146, so player 2 wins the pot.
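The stacked-ticket mechanics above can be sketched as follows. The mapping from percentage to ticket number via simple truncation is my assumption, chosen because it reproduces the article’s numbers (66.6666% of 220 tickets truncates to ticket #146):

```python
def winning_ticket(total_tickets: int, win_percentage: float) -> int:
    """Map the round's predetermined percentage onto a 1-based ticket number.
    The truncation rule is an assumption; real sites may round differently."""
    return max(1, int(total_tickets * win_percentage / 100))


def find_winner(entries, win_percentage):
    """entries: (player, ticket_count) pairs in entry order. Tickets stack
    contiguously, so the first player holds tickets 1..n1, the next
    n1+1..n1+n2, and so on."""
    total = sum(count for _, count in entries)
    target = winning_ticket(total, win_percentage)
    upper = 0
    for player, count in entries:
        upper += count
        if target <= upper:
            return player, target


# 100 + 50 + 70 tickets; 66.6666% of 220 truncates to ticket #146,
# which falls in the second player's range (101-150).
print(find_winner([("player 1", 100), ("player 2", 50), ("player 3", 70)], 66.6666))
# → ('player 2', 146)
```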

A variety of other CS:GO gambling sites1 use the same premise. There is nothing provably fair about it. If an insider knows that a given jackpot win percentage is 86%, it is a trivial exercise to compute exactly how many tickets to “buy” to take the pot, at the right time, with the technical ability to ensure the final entry. It is equally obvious when to bow out of a given pool.

Some sites have tried to mix this up further, but to a tee each one was easily exploitable by anyone with insider knowledge.

There is nothing provably fair about it.

1 – I had a couple of illustrative examples of extra dubious claims of “provably fair”, including a site that added hand-rigged cryptography that actually made it even less fair for players. Under the scrutiny and bright lights, a lot of these sites seem to have scurried into the dark corners, shutting down and removing themselves entirely from Google Search.

We Must Stop Jealously Citing the Dunning-Kruger Effect

What the Dunning-Kruger study actually demonstrated: Among 65 Cornell undergrads (ergo, a pretty selective, smart bunch to start, likely accustomed to comparing well), the “worst” performing thought their performance would be slightly above the average of the group, while the best performing thought their performance would be highest of all. The average performing thought their performance would also be above average.

The participants had no measure to compare against each other, but from a general perspective were likely, to an individual, far above the normal population average. They also had the difficult task of ranking not by actual performance, but by percentile: It was a situation where one could score 95 out of 100 on a difficult assignment and still end up at the bottom of the percentile ranking. As a group that shared enormous commonalities (same academic background, life situation, all getting into an exclusive school), there is no surprise that self-evaluations compressed towards the center.

What many in this industry endlessly think the Dunning-Kruger study demonstrated1: People who think their performance is above average must actually be below average, and the people who think they are average or below must actually be above average (the speaker almost always slyly promoting their own humility as a demonstration of their superiority, in a bit of an ironic twist. Most rhetoric is self-serving). The shallow meme is that people with confidence in their abilities must actually be incompetent…Dunning-Kruger and all, right?

Cheap rhetoric turns cringe worthy when it’s cited to pull down others. Do a search for Dunning-Kruger cites on developer forums or blogs and you’ll find an endless series of transparent attempts to pitch why the speaker is better than anyone else.

No one has ever gained confidence, position or ranking by projecting some myth that they think undermines the people who have it. It just makes the speaker look worse. The same thing can be seen in competitive online gaming like CS:GO where everyone better than the speaker is a hacker/spends too much time playing the game, and everyone worse is just naturally less skilled and should delete the game they’re so terrible. It’s good for a laugh, at least until it gains truth through repeated assertion.

This is one of those posts that, if you’re a blogger, isn’t a good way to grow subscribers: Invariably there are some readers who spend their professional life calling everyone “below” them hacks, and everyone “above” them hacks who suffer the Dunning-Kruger effect. It’s common on any developer related forum. Eh. Thankfully I don’t care about reader numbers.

1: At least one author of the study is complicit. This study quite literally made them famous, and scientists and researchers are people too, so often it’s best to just go with the flow and allow people to puff it up to assuage their own insecurities.