Updates: Pay Apps / Date-Changing Posts / Random Projects

Recently a couple of readers noticed that some posts seemed to be reposted with contemporary dates. The explanation might be broadly interesting, so here goes.

I host this site on Amazon’s AWS, as that’s where I’ve done a lot of professional work and I trust the platform. Because it’s just a personal blog, I actually run it on spot instances (instances that are bid upon and can be terminated at any moment), and late in the week there was a dramatic spike in the pricing of c3 instances that exceeded my bid maximum. My instance was terminated with extreme prejudice. I still had the EBS volume, and could easily have replicated the data onto a new instance for zero data loss (just a short period of unavailability). However, I was heading out, so I simply spun up an AMI image that I’d previously saved, reposted a couple of the lost posts from Google’s cached text, and let it be. Apologies.
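
For the unfamiliar, a spot request is essentially a standing bid: you name the most you’re willing to pay per hour, and the instance runs only while the market price stays below that ceiling. A rough sketch of the idea using boto3, with a placeholder AMI, instance type, key, and price rather than my actual configuration:

```python
# Hypothetical sketch of a spot request; the AMI ID, instance type, key
# name, and bid price are illustrative placeholders, not my actual setup.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

response = ec2.request_spot_instances(
    SpotPrice="0.05",                 # the most I'm willing to pay per hour
    InstanceCount=1,
    LaunchSpecification={
        "ImageId": "ami-12345678",    # e.g. a previously saved AMI of the blog
        "InstanceType": "c3.large",
        "KeyName": "blog-key",
    },
)

# If the spot price later spikes above the bid, AWS reclaims the instance;
# anything not on a persistent EBS volume (or baked into an AMI) is lost.
print(response["SpotInstanceRequests"][0]["SpotInstanceRequestId"])
```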

Revisiting Gallus

Readers know I worked for a while on a speculative app called Gallus, a gyroscope-stabilized video solution with a long litany of additional features. Gallus twice came close to being sold as a complete technology, and was the source of an unending amount of stress.

Anyways, I recently wanted the challenge of frame-versus-frame image stabilization, and achieved some fantastic results. The motivation was my Galaxy S8, which features OIS (though it exposes no developer-accessible metrics about it); given the short range of in-camera OIS, it can yield an imperfect result on its own. The idea would be a combination of EIS and OIS, and the result of that development benefits everything. I rolled it into Gallus to augment the existing gyroscope feature, coupling the two for fantastic results: it gets rid of the odd gyro mistiming issue, while keeping the benefit of fully stabilizing highly dynamic and complex scenes. Previously I pursued purely a big-pop outcome (I only wanted a technology purchase, and came perilously close), but this time it occupies a much more sedate place in my life and my hopes are relaxed. Nonetheless it will return as a pay app, with a dramatically simplified and optimized API. I am considering restricting it to devices I directly test on first hand. If there are zero installs, or a dozen, that’s fine, as it’s a much different approach and set of expectations.
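
As a rough illustration of the frame-versus-frame (EIS) half of that combination (to be clear, this is not Gallus code), the classic recipe is to track features between consecutive frames, estimate the inter-frame motion, smooth the accumulated trajectory, and warp each frame by the difference. A bare-bones OpenCV sketch of that general technique:

```python
# Illustrative sketch only: not Gallus code. A production pipeline would also
# handle rolling shutter, crop/zoom to hide warped borders, and decide how
# much correction to apply given what OIS has already absorbed.
import cv2
import numpy as np

def stabilize(in_path, out_path, smoothing_radius=15):
    cap = cv2.VideoCapture(in_path)
    fps = cap.get(cv2.CAP_PROP_FPS)
    width = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
    height = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))

    ok, prev = cap.read()
    prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

    # 1. Estimate translation + rotation between each consecutive frame pair.
    transforms = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=200,
                                      qualityLevel=0.01, minDistance=30)
        nxt, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray, pts, None)
        good_prev = pts[status.flatten() == 1]
        good_next = nxt[status.flatten() == 1]
        m, _ = cv2.estimateAffinePartial2D(good_prev, good_next)
        dx, dy = m[0, 2], m[1, 2]
        da = np.arctan2(m[1, 0], m[0, 0])
        transforms.append([dx, dy, da])
        prev_gray = gray

    # 2. Smooth the accumulated camera trajectory with a moving average; the
    #    per-frame correction is the gap between the raw and smoothed paths.
    transforms = np.array(transforms)
    trajectory = np.cumsum(transforms, axis=0)
    kernel = np.ones(2 * smoothing_radius + 1) / (2 * smoothing_radius + 1)
    smoothed = np.column_stack(
        [np.convolve(trajectory[:, i], kernel, mode="same") for i in range(3)])
    corrections = transforms + (smoothed - trajectory)

    # 3. Re-read the clip and warp each frame by its corrected transform.
    cap.set(cv2.CAP_PROP_POS_FRAMES, 0)
    out = cv2.VideoWriter(out_path, cv2.VideoWriter_fourcc(*"mp4v"),
                          fps, (width, height))
    for dx, dy, da in corrections:
        ok, frame = cap.read()
        if not ok:
            break
        warp = np.array([[np.cos(da), -np.sin(da), dx],
                         [np.sin(da),  np.cos(da), dy]])
        out.write(cv2.warpAffine(frame, warp, (width, height)))
    cap.release()
    out.release()
```

That is the textbook version; the interesting work is in everything it leaves out, which is where the gyroscope data and OIS coordination earn their keep.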

Project with my Son

Another project approaching release is a novelty app with my son, primarily to acclimate him to “team” work with git. Again, expectations are amazingly low and it’s just for fun, but it might make for the source of some content.

A Decade of Being the World’s Pre-Eminent Domainologist

It’s been 10 years since I had a brief moment of glory1 with the whole domain names hoopla. At that time I had posted a lazy analysis of the root .COM dump, and it satisfied a lot of people’s curiosity. It was linked widely, eventually leading to an NPR radio interview, a front-page article in the Wall Street Journal (okay…front of the second section), and a variety of radio and news interviews.

It was an educational experience for me because I had been doing the blogging thing for a while, mostly deeply technical pieces, seeing little success and no uptake2. Then I posted something that seemed like a bit of superficial fluff and saw enormous success (in a blogging sense, if readers and links are the measure of worth). I still see several dozen visitors a day coming from random links around the tubes to those old domain name entries.

I posted a brief follow-up to it then. After the attention, countless others tried to grab some of that magic, a whole industry of data scientists suddenly aware that you could access the zone files (there’s also the CZDS for the plethora of new TLDs).

As mentioned then, there was really no personal advantage in it for me.

I don’t do anything professionally related to domain names (beyond, of course, operating on the web, but that applies to almost everyone now), so I couldn’t spin it into some sort of product or service.

Career-wise it was a wash: I was working at a small financial services company, and it was a bit of an ego boost when the partners were excited to see their name in the Wall Street Journal. That was apparently quite a coup, and they got calls from various clients who all started their day with the WSJ. It was just random chance, though, as I had literally just started working with them after a period of consulting (I’ve gone back and forth during my career, the demands of family life dictating changes of position).

But it was fun, which was all I really intended from it. Still have a couple of copies of the WSJ issue in a drawer somewhere. Can’t believe it’s been ten whole years.

1 – I had some brief exposure for promoting SVG in a period when it was floundering: I managed to convince Microsoft to cover it in an issue of their MSDN Magazine, which led to quite a lot of chatter and excitement about the technology, with many seeing it as a Microsoft endorsement (at the time the company was pushing the competing VML, and Flash seemed to be strangling SVG). I also proposed a change to Firefox that made it much more competitive in artificial benchmarks and some real-world situations. And I contributed to a variety of privacy efforts, including making people aware of the metadata disclosed by JPEG photographs, with a very popular tool to remove it (which remarkably still sees almost 100 direct downloads each day, plus an unknown number of indirect ones), along with contributions to various privacy-related projects. Those are my pretty middling claims to fame, as most of my career has been doing hidden stuff for a shadowy cabal of companies. The little public bits of exposure were a delightful change.

2 – One of the things about “blogging” that I noted early on is that you have to have a niche or common perspective to pander to a crowd. Be a C# guy talking about C#, and why C# and .NET are the greatest. A Go guy talking about Go. An Apple booster heralding Apple’s choices, and criticizing opponents consistently and predictably. Like politics in technology, you need to align with a group, pandering to their beliefs, and they’ll carry you on their shoulders.

But technology is seldom so narrow, and few choices aren’t a perilous mix of pros and cons.

If you don’t stick to a niche you need to make “easy listening” sorts of articles, which the DNS entry satisfied (with the added advantage that such pieces are dramatically easier to write).

Alternately — and the best option of all — just be a really good writer making great content. I don’t satisfy that requirement, so I’m somewhere between niche and easy listening.

Link Rot and Permanent Redirects

I’ve changed “engines” several times over the lifespan of this blog, changed TLDs a couple of times, moved from HTTP to HTTPS, and gone through several URL scheme changes. I was generating heaps of link rot: links from wikis and Stack Overflow, from blogs like Atwood’s Coding Horror, and from media and countless message boards, all going to 404s.

In my most recent move (from a very efficient blogging engine that I wrote, primarily to facilitate running on the tiniest server imaginable, to a WordPress blog for its content management advantages), I put a significant number of nginx rewrite rules in place to try to avoid this problem: rules crossing domains, rules handling differing URL structures, rules redirecting RSS feed users, and rules using perl to decompose URLs and recast them in their more contemporary form.

For all of these deprecated URLs I served up a permanent redirect saying to every caller “The URL you actually want is over there. Use it from this point forward.”
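
To illustrate the kind of recasting those rules performed (with invented patterns standing in for my actual URL schemes), the logic boils down to decomposing a deprecated URL and answering with a 301 pointing at its modern equivalent:

```python
# Illustrative only: these patterns are invented stand-ins for the kind of
# recasting the real nginx/perl rules performed, not my actual URL schemes.
import re

REWRITES = [
    # e.g. an old engine's /archive/2007/03/some-post.html style permalinks
    (re.compile(r"^/archive/(\d{4})/(\d{2})/([\w-]+)\.html$"),
     r"https://dennisforbes.ca/\1/\2/\3/"),
    # e.g. an even older query-string style of addressing posts
    (re.compile(r"^/index\.php\?post=([\w-]+)$"),
     r"https://dennisforbes.ca/\1/"),
]

def permanent_redirect(old_url):
    """Return (301, new_location) for a deprecated URL, or None if unmapped."""
    for pattern, target in REWRITES:
        if pattern.match(old_url):
            return 301, pattern.sub(target, old_url)
    return None

print(permanent_redirect("/archive/2007/03/some-post.html"))
# -> (301, 'https://dennisforbes.ca/2007/03/some-post/')
```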

But of course the static links on other sites (e.g. in comments on StackOverflow) never change, forever pointing to the original, time-limited link, awaiting the day they inevitably turn to link rot. In an ideal world these static content sites would have a perpetual bot validating links, updating them to contemporary forms where appropriate, or flagging them as unavailable when they turn to rot (or when they revert to placeholder pages as domains get scooped up after being abandoned).

The static content is a bit more of an issue, but what really surprises me is the number of RSS readers and other automated consumers that completely ignore permanent redirects, treating each one as something to be immediately forgotten. Literally years after I switched to this platform, having served up 301 permanent redirects to millions of requests in the meantime, these readers still keep slamming the no-longer-available URLs.
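
For contrast, here is roughly what a well-behaved feed consumer ought to do with a 301: treat it as an instruction to update its stored URL, not as a detour for a single request. A minimal sketch using Python’s requests library, with a placeholder feed URL:

```python
# Sketch of the polite behaviour: if the server answers with permanent
# redirects, adopt the final URL rather than hammering the old one forever.
# The feed URL below is a placeholder.
import requests

def fetch_feed(stored_url):
    response = requests.get(stored_url, allow_redirects=True, timeout=10)
    response.raise_for_status()

    # response.history holds each hop of the redirect chain; if every hop
    # was a 301, the final URL should become the new canonical feed address.
    if response.history and all(r.status_code == 301 for r in response.history):
        stored_url = response.url   # persist this in the reader's database

    return stored_url, response.content

feed_url, body = fetch_feed("https://example.com/old/feed.xml")
```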

No longer available because I finally dropped the rewrite rules. They added complexity and risk to the ruleset, and required a module that I didn’t want to install now that I’ve switched to the mainline version of nginx (primarily to have some fun with the HTTP/2 functionality).

If you’ve hit the root page of this blog because you were sent to one of these very dated links — I apologize. The web’s mechanism of dealing with link change is far from ideal.

Hubris -or- When All Possible Outcomes Prove Your Prescience

My day-to-day is filled with Windows, OSX, Android, iOS, and Linux: there is rarely a day when I don’t spend substantial time on all of them, often simultaneously.

I am typing this on a Windows VM running on a Linux server, connected through VNC to an OSX box, with an SSH session to an Ubuntu server sitting open on the side. It’s an average day.

And they are all excellent. They all have their place, and there are substantial overlaps in the great Venn diagram of utility.

My favorite tablet remains my iPad. My favorite smartphone my Nexus 5. My preferred development OS is actually Windows. My preferred server and infrastructure platform is Linux.

But I try to avoid the noisy platform wars: There are camps of people waving flags and yelling slogans, and it’s just…unpleasant. What’s the point? There are people who actually make it their profession to wave the flag.

Alas, today I unintentionally caught an entry from Benedict Evans that purports to describe the “next phase of smartphones”. It is making waves among the usual suspects.

In it Mr. Evans states:

Hence, WWDC was all about cloud as an enabler of rich native apps, while the most interesting parts of IO were about eroding the difference between apps and websites. In future versions of Android, Chrome tabs and apps appear together in the task list, search results can link directly to content within apps and Chromebooks can run Android apps – it seems that Google is trying to make ‘app versus web’ an irrelevant discussion – all content will act like part of the web, searchable and linkable by Google. Conversely for Apple, a lot of iOS 8 is about removing reasons to use the web at all, pulling more and more of the cloud into apps, while extensions create a bigger rather than smaller gap between what ‘apps’ and ‘web sites’ are, allowing apps to talk to each other and access each others’ cloud services without ever touching the web.

Unlike the previous differences in philosophy between the platforms, which were mostly (to generalise massively) about method rather than outcome, these, especially as they evolve further over time, point to basic differences in how you do things on the two platforms, and in what it would even mean to do specific tasks on each. The user flows become different. The interaction models become different. I’ve said before that Apple’s approach is about a dumb cloud enabling rich apps while Google’s is about devices as dumb glass that are endpoints of cloud services.

Being involved in the VC community, Mr. Evans is likely surrounded by sycophants who will cheer on his utterances, never calling him on it when they make absolutely no sense at all.

This is one of those cases. He is twisting reality to fit his preconceived narrative, and I suspect that any announcement from either of the two companies would still somehow support his vision.

Apple and the Web

iOS has long been a *stellar* supporter of the web. If we really need to humor the notion that the incredible richness of the web platform can be described as “dumb glass”, Apple has gone full dunce. They’re the head of the dunce class.

iOS features fantastic support for rich, emergent web technologies, many of them added long before Android had them. It allowed web apps to behave like native apps, a model that Android later copied with limited success. It continues to offer industry-leading mobile performance for most categories of web tasks.

When you add such a web app to your home screen (e.g. http://names.yafla.com – Add to Home Screen), it appears in your recently used apps just like a native app.

How about the new ability of apps to open when a link is clicked in the browser, another hint, Evans holds, that Google is moving the world toward that dumb glass future? iOS has had that for years. And just like Android, iOS watches for a magic list of HTTP links, kicking them off into the native app.

Android L improves the state of things ever so slightly in that you no longer need to register a specific protocol namespace and customize your server-side web presence, but can instead use URL expressions registered by the app (e.g. “my native app is now in charge of all http://dennisforbes.ca links”). Ultimately, though, that’s a refinement, and a broadening of what Apple did years earlier with their iTunes URL identification and app-ification.

Regardless, I’m at a loss to understand how opening native apps from web links, instead of opening them in the browser, portends this “dumb glass” future. It seems more like the exact opposite?

Maybe instead it’s that iOS has added app-to-app data sharing (aka intents and registered content handlers), URI handlers, and rich notifications for native apps? Android has had those for years. Widgets and Fragments? Android, again, has had those for years.

Cloud storage and notifications? Both Android and iOS have them, and the two offerings are pretty much equivalent.

How about high-performance graphics? Unsurprisingly, both iOS and Android have announced options.

Two makers, on largely identical paths, doing almost all of the same things. Both improving their products through innovation, and through occasional poaching from each other. If there is any truth to the idea that WWDC featured more native-app improvements (alongside many huge web improvements), and I/O featured more web/“cloud” improvements (alongside a huge number of native-app boosts), it’s that each maker was trying to catch up in the areas where it lags its foe.

Drawing broad conclusions from that, however, is like declaring McDonald’s the future leader in health food because they added a salad to the menu. It’s probably not very useful.

Or maybe it really is some grand divergence that proves an unfounded claim. You decide.

The Solved Problem of Thundering Herds

[EDIT: As if fate decided to slap me in the face, the strangest entry decided to get an enormous influx of traffic right as I was running an XCloner backup while updating the wording of an entry, trying to make myself sound like a bit less of a jerk. Egg on my face as this poor little Micro server did exactly what I crowed about it not doing…dying. Mumble mumble excuse diversion]

This blog runs on an Amazon AWS EC2 t1.micro instance (~600MB of RAM and a single, heavily throttled vCPU), an intentionally restrictive choice I made years ago in an effort to practice something I oft preach: That it is unacceptable that sites fall over and die under the slightest bit of attention, failing at what should be their moment of glory.

I’ve had a number of very heavy burst days since then with absolutely no issues at all, nor even a perceptible slowdown for users.

Last evening into today, for instance, saw a spike of traffic directed here from Hacker News, Reddit, and various Twitter referrals. In all there were some 50,000+ page impressions in the past 20 hours.

Spread out, that is hardly impressive (under a page impression a second); however, traffic was very bursty, with extended periods exceeding 60 page impressions per second.

That isn’t huge by any stretch of the imagination, but it is exactly the sort of situation that sees the dreaded “Database connection not available” error on so many sites.

Yet there were zero database connection errors. No error 500s. Everything running smoothly without the slightest hint of trouble. Absolutely nothing was stressed at all. A run of top usually showed top or sshd as the top consumer of resources.

On this miserably tiny server. Running WordPress!

The reason, of course, is caching. With W3 Total Cache the vast majority of requests are served as if they were static requests, with the smallest dynamic wrapper matching each request to its corresponding static resource. You could take it a step further and actually generate static file resources, eliminating anything dynamic above the nginx or Apache instance, as I did with the Name Navigator; however, that is optimizing at the edges and can be an optimization too far.
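
Stripped of the WordPress specifics, the caching idea is simple enough to sketch. This is a toy illustration of the general technique, not W3 Total Cache’s implementation; the render function is a stand-in for the expensive dynamic path:

```python
# A toy sketch of the page-caching idea, not W3 Total Cache's implementation;
# render_with_wordpress() is a stand-in for the expensive PHP/MySQL path.
import hashlib
from pathlib import Path

CACHE_DIR = Path("/var/cache/blog-pages")

def render_with_wordpress(path):
    # Placeholder for the heavy dynamic work a cache miss would incur.
    return f"<html><body>rendered {path}</body></html>"

def serve(path):
    key = hashlib.sha1(path.encode()).hexdigest()
    cached = CACHE_DIR / (key + ".html")
    if cached.exists():
        return cached.read_text()           # cheap: effectively a static file
    html = render_with_wordpress(path)      # expensive: only on a cache miss
    CACHE_DIR.mkdir(parents=True, exist_ok=True)
    cached.write_text(html)
    return html
```

The whole trick is that the hot path never touches PHP or the database, which is why a throttled micro instance can shrug off a bursty front-page day.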

The Thundering Herd problem is a solved issue at normal scales, without reactively firing up an army of AWS instances because you made it to the front page of a social news site. It is very unfortunate when sites die under marginal loads, wasting users’ time and depriving the authors of interesting content of their moment of exposure.

As an aside, one of the most rewarding experiences is when I see people who come here via posts on social news sites, but instead of simply bouncing back they continue on to read through various other pages. That it was interesting enough on a net of endless information is very gratifying.