Strive For Technical Debt

Technical debt is the differential between the solution you have in hand and the idealized, hindsight-refined vision of what it could be. This debt is lamented on programming boards everywhere as the folly of those who came before. And it isn't only a cheap critique of others: we often criticize our former selves for the shortcuts we took when time was constrained, help was unavailable, we didn't entirely understand the platform or the requirements, tooling was limited, and so on.


It's almost always viewed as a mistake that shouldn't have happened, one that with proper care and control we could learn from and avoid in the future. In this imaginary view of projects, we'd ideally build with the planning, time, knowledge and resources to never incur technical debt in the first place.

But that's nonsense. It is nonsense the world over, across almost every stratum of projects. Code starts ugly, universally. It's a basic truth of the industry.

There are no exceptions. If you labor under the delusion that your current solution isn't ugly, be sure to revisit this assessment in a year.

And this idealized hindsight suffers from a severe anti-survivorship bias. The idealized "make-it-right-from-the-outset" projects that failed and fell by the wayside years ago are forgotten, essentially lost in time, while we remember the clunky duct-tape project that has run the organization for the past decade, hanging around long enough to be resented and belittled.

Technical debt almost always comes with success. It comes with a project becoming important enough that it matters, drawing attention, consideration, and criticism. It's exactly those projects that are good enough that we find it hard to rationalize replacements (this is where dogma diverges from reality), and when we do, the second-system effect takes hold, our need to justify the new project leading us to overwrought, overly ambitious efforts.

Celebrate if you have technical debt to complain about. It usually means you’re sitting on something successful enough to matter.

Remember When The Suburbs Were Doomed?

Software-development-focused online communities skew male and generally young (20s to mid-30s). Most live in dense urban areas (the Bay, Seattle, NYC, London, etc.), often in smallish apartments and condos. Few have families.

As a side effect of the human condition, there is a tendency to cast one's own lot either as the inevitable situation for most (people who have it different from me are living on borrowed time) or as a more righteous, principled choice (better for the planet, development, the future, etc.). This is an observation I learned personally, having watched myself make exactly these rationalizations over time.

Stories declaring or predicting the end of the suburbs always do well on these forums. I've seen poorly researched pieces making this prediction top sites like Hacker News again and again: they tell the audience what it wants to hear, so the burden of proof disappears.

But this assumption that suburbs are doomed has always struck me as counter-intuitive. While there has been a significant migration from rural areas to urban areas in virtually every country in the world (largely as rural workers are effectively displaced by factory farming or obsolete skillsets and moving becomes a basic survival pattern), suburbs seem to be as bulging as they've ever been. In the United States, for instance, the fastest-growing areas of the country have been medium density (suburbs and small cities). Very high density areas have actually been declining much faster than even rural areas have.

The suburbs aren’t actually dying.

But maybe they will soon? The argument that they will is completely predicated on a single thing happening: we're going to run short on oil, transportation costs are going to explode, and suddenly the onerous cost of living in far-flung places is going to cause a mass migration to the city centers, everyone defaulting on their giant suburban McMansion mortgages, the outer rings turned into a desolate wasteland.

Increasingly it seems like we're more likely to keep oil in the ground than to run out. Alternatives to this easy energy source are gaining ground at an accelerating rate. As electric vehicles hit the mainstream, they're becoming significantly more economical options for high-mileage drivers: fueling an electric car costs somewhere in the range of one fifth as much per mile as gasoline, even in high-cost areas like California. Miles powered by some solar panels or a windmill are much cheaper than miles powered by a gallon of gasoline, even at the current depressed prices. And that's excluding the significant mechanical costs of internal combustion engines, which mass-produced electric vehicles would soon dramatically undercut.

You can go much farther for less than ever before, with the specter of oil's decline less and less relevant. If anything, transportation is going to get a lot cheaper.

Of course the commute itself has always been a tax on life, and I can personally say that I have quit jobs over the grueling big-city commute. But we're on the cusp of our cars doing the driving for us. Very soon the drive will be quality time to catch an episode of a good show, or maybe a quick nap. The capacity of even existing roadways will dramatically increase once you remove human failure and foible.

Connectivity…well, everyone everywhere is connected, incredibly inexpensively. When I was a kid we had to pay $0.30+ per minute to talk to people 20km away. Now continent-wide calling is free. Internet connectivity is fast and copious almost anywhere. Many of us work remote jobs, and it really doesn't matter where we are.

We’re virtual.

I'm not making any predictions or judgments, but the inputs to the classic assumptions have changed enormously. I was recently entertaining the idea of living even more remotely (right now I work at home in a rural exurb of Toronto — this doesn't even qualify as suburbs — but there are of course far more remote areas of this country), and it's incredible how few of the factors are really compromised anymore: I'd still have hundreds of channels, high-speed internet, 24/7 essentially free communications with every family member (text, audio, video, soon enough 360-degree video with depth), overnight delivery of Amazon Prime packages, etc.

Being in a less dense area just isn’t the compromise it once was. And that’s talking about fully rural areas.

The suburbs — where you still have big grocery stores and bowling alleys and neighborhood bars and all of the normal accouterments of living — just aren’t much of a compromise at all. When someone talks up the death of the suburbs, I marvel at the 1980s evaluations of the 2010s world. I would argue the contrary: a few communicable disease outbreaks (e.g. SARS v2) and humans will scurry from density.

Facebook Instant Articles

Both Google and Facebook have introduced their own lightweight HTML subsets: AMP and Instant Articles, respectively. I mentioned AMP on here previously, and via an official WordPress plugin it's accessible by simply appending /amp to any page's URL. Both impose a restrictive environment that limits the scope of web technologies you can use on your page, allowing for fewer/smaller downloads and less CPU churn.

The elevator pitch for Facebook's Instant Articles is an absolute monster, bringing an i5-4460 to its knees by the time the page had been scrolled to the bottom. There's a bit of irony in the pitch for a lightweight, fast subset of HTML being a monstrous, overwrought, beastly page (the irrelevant background video is an overused, battery-sucking pig that was never useful and is just boorish, lazy bedazzling).

I'm a user of Facebook, my primary use being news aggregation. As content providers herded onto the platform, I stopped visiting their sites and simply do a cursory browse of Facebook periodically: BBC, CBC, The Star, Anandtech, The Verge, Polygon, Cracked, various NFL-related things, and on and on. On the whole I would wager that these sites engaged in a bit of a tragedy of the commons racing to the Facebook fold, though at some point critical mass was hit and it became necessary just to keep getting eyeballs.

The web is ghettoized and controlled.

More recently Facebook started corralling mobile users into its own embedded browser (for the obvious surveillance benefits). And now they're pushing publishers to Instant Articles.

But the transition isn't clean. Many sites are losing most embedded content (Twitter embeds, social media). Lots of pages are, curiously, throwing malformed-XML errors. And it is accidentally acting as an ad blocker on many web properties, the sites unintentionally squashing their own ads, filtered out as non-Instant-Article compliant.

It's interesting how quietly this is all happening. This would once have made for pretty big tech news (especially Facebook embedding the browser). Now it's just a quiet transition.

A Decade of Being the World’s Pre-Eminent Domainologist

It's been 10 years since I had a brief moment of glory1 with the whole domain names hoopla. At the time I had posted a lazy analysis of the root .COM dump, and it satisfied a lot of people's curiosity. It was linked widely, eventually leading to an NPR radio interview, a front-page article in the Wall Street Journal (okay…front of the second section), and a variety of other radio and news interviews.

It was an educational experience for me because I had been doing the blogging thing for a while, largely deeply technical pieces, seeing little success and no uptake2. Then I posted something that seemed like a bit of superficial fluff and saw enormous success (in a blogging sense, if readers and links are the measure of worth). I still see several dozen visitors a day coming from random links around the tubes to those old domain name entries.

I posted a brief follow-up to it then. After the attention, countless others tried to grab some of that magic, an industry of data scientists suddenly aware that you could access the zone files (there's also the CZDS for the plethora of new TLDs).

As mentioned then, there was really no personal advantage in it for me.

I don’t do anything professionally related to domain names (beyond, of course, operating on the web, but that applies to almost everyone now), so I couldn’t spin it into some sort of product or service.

Career-wise it was a wash: I was working at a small financial services company, and it was a bit of an ego boost when the partners were excited to see their name in the Wall Street Journal. That was apparently quite a coup, and they got calls from various clients who all started their day with the WSJ. But it was just random chance, as I had literally just started working with them after a period of doing the consulting thing (I've gone back and forth during my career, the demands of family life dictating changes of position).

But it was fun, which was all I really intended from it. Still have a couple of copies of the WSJ issue in a drawer somewhere. Can’t believe it’s been ten whole years.

1 – I had some brief exposure for promoting SVG in a period when it was floundering: I managed to convince Microsoft to run a piece on it in an issue of their MSDN Magazine, which led to quite a lot of chatter and excitement about the technology, many seeing this as a Microsoft endorsement (at the time the company was pushing the competing VML, and Flash seemed to be strangling SVG). I also proposed a change to Firefox that made it much more competitive in artificial benchmarks and some real-world situations. Those are my pretty middling claims to fame, as most of my career has been doing hidden stuff for a shadowy cabal of companies. The little public bits of exposure were a delightful change.

2 – One of the things about "blogging" that I noted early on is that you have to have a niche or common perspective to pander to a crowd. Be a C# guy talking about C#, and why C# and .NET are the greatest. A Go guy talking about Go. An Apple booster heralding Apple's choices, and criticizing opponents consistently and predictably. Like politics in technology, you need to align with a group, pandering to their beliefs, and they'll carry you on their shoulders.

But technology is seldom so narrow, and few choices aren’t a perilous mix of pros and cons.

If you don't stick to a niche, you need to write "easy listening" sorts of articles, which the DNS entry satisfied (with the added advantage that they're dramatically easier to write).

Alternatively — and the best option of all — just be a really good writer making great content. I don't satisfy that requirement, so I'm somewhere between niche and easy listening.

Provably Fair / Gaming

A bit of a diversion today, but I just wanted to belatedly post some commentary on the whole recent game/virtual-item gambling controversy.

EDIT: 2016-07-14 – Shortly after I posted this, Valve announced that they were going to start shutting off third-party API access if it's used for gambling (no, I'm not claiming they did this as a result of my post; I'm just noting why I didn't mention this rather big development below). This morning Twitch essentially banned CS:GO item gambling as well (though they're trying to avoid any admission of guilt or complicity by simply deferring to Valve's user agreements).


Like many of you, I find that work and family demands leave little time for gaming. One of the few games I enjoy — one that allows for short, drop-in sessions and has been a worthwhile mental diversion when dealing with difficult coding problems — is Counter-Strike: Global Offensive (CS:GO).

The game is a classic twitch shooter. It has a very limited, curated set of weapons, and most rounds are played on a small number of proven maps.

I'm a decent player (though it was the first game where I really had that "I'm too old for this" sense, with my eleven-year-old son absolutely dominating me). It's a fun, cathartic diversion.

Nine games out of ten I end up muting every other player, as the player base is largely adolescent and many really want to be heard droning on. The worst players seem to be the most opinionated, so in every match the guy sitting at the bottom in points and frags has the most to say about the failures of everyone else's gameplay (an observation that holds across many industries, including software development: this industry is full of people who've created nothing and achieved little explaining why everyone else is doing it wrong).

The CS:GO community also has an enormous gambling problem, as you may have heard. This came to a head when a pair of popular YouTubers were outed as owners of a CS:GO skin-gambling site. The two had posted a number of dubious "get rich quick!"-type videos demonstrating highly improbable success, enticing their legions of child fans to follow in their possibly rigged footsteps.

Skins, to explain, are nothing more than textures you apply to weapons. The game often yields situations where other players spectate your play, and having unique, less common skins is desirable as a status thing. So much so that there is a multi-billion-dollar market of textures that people will pay hundreds of dollars for (Steam operates a complex, very active marketplace to ensure liquidity).

The whole thing is just dirty and gross, with Valve sitting at the center of an enormous gambling empire mostly exploiting children spending their birthday gift cards. It casts a shadow over the entire game, and those awaiting Half-Life 3 will probably wait forever, as Valve seems distracted into only working on IP that features crates and keys.

The machinations of crates and keys, paying real money for a chance at rewards for which Valve provides a marketplace denominated in real currencies, are gambling: if you're paying real money for small odds of something worth more money (again, Valve provides the marketplace and helpfully assigns the real-world value), it's a matter of time before the hammer falls hard on these activities. Valve is operating in a very gray area, and they deserve some serious regulatory scrutiny.

Anyway, while being entertained by that whole sordid ordeal, the topic of "fair" online gambling came up. From this comes the term "provably fair", which is a way for many gambling enterprises to add legitimacy to what otherwise might be a hard gamble to stomach.

It’s one thing to gamble on a physical roulette wheel, but at least you know the odds (assuming the physics of the wheel haven’t been rigged…). It’s quite another to gamble on an online roulette wheel where your odds of winning may actually be 0%.

"You bet black 28…so my 'random' generator now picks red 12…"

So the premise of provably fair came along. With it you can generally have some assurance that the game is fair. For instance, for a roulette wheel the site might tell you in advance that the upcoming spin — game 1207 — has the SHA1 hash of 4e0fe833734a75d6526b30bc3b3620d12799fbab. After the game it reveals that the hashed string was "roaJrPVDRx – GAME 1207 – 2016-07-13 11:00AM – BLACK 26", and you can confirm both that it hashes to the published value and that the spin didn't change outcome based upon your bet.

That's provably fair. It still doesn't mean that the site will ever actually pay out, or that they can't simply claim you bet on something different, but the premise is that some sort of transparency is available. With a weak hash (e.g. don't use SHA1; that was demonstrative) or a limited-entropy committed string, it might even allow players to hack the game: to know the future before the future.
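
As a rough sketch of the verification step, using OpenSSL's SHA1 purely because the example above does (and with placeholder strings, since the hash and reveal above are demonstrative values), the check is just "hash the revealed string and compare":

// Minimal commit-reveal check: hash the revealed pre-game string and compare
// it to the digest the site published before any bets were taken. The strings
// below are placeholders; substitute the values a site actually publishes.
#include <openssl/sha.h>

#include <cstdio>
#include <iostream>
#include <string>

// Hex-encode the SHA1 digest of the revealed string.
static std::string sha1_hex(const std::string &input) {
    unsigned char digest[SHA_DIGEST_LENGTH];
    SHA1(reinterpret_cast<const unsigned char *>(input.data()), input.size(), digest);

    char buf[SHA_DIGEST_LENGTH * 2 + 1];
    for (int i = 0; i < SHA_DIGEST_LENGTH; ++i)
        std::snprintf(buf + i * 2, 3, "%02x", digest[i]);
    return std::string(buf);
}

int main() {
    // Published before the spin (the commitment) and revealed after it.
    const std::string committedDigest = "...digest published before the game...";
    const std::string revealed = "roaJrPVDRx - GAME 1207 - 2016-07-13 11:00AM - BLACK 26";

    if (sha1_hex(revealed) == committedDigest)
        std::cout << "Commitment verified: the outcome was fixed before betting." << std::endl;
    else
        std::cout << "Mismatch: the revealed string doesn't match the commitment." << std::endl;
    return 0;
}

Build it against libcrypto (e.g. g++ verify.cpp -lcrypto). A real scheme should use a stronger hash and include enough server-side entropy in the committed string that players can't brute-force the outcome in advance.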

You can find provably fair defined on Wikipedia, where the definition is suspect, seemingly posted by someone misusing the term and being called on it ("it is susceptible to unscrupulous players or competitors who can claim that the service operator cheats". What?).

Anyway, the world of CS:GO gambling is an interesting place to evaluate how well the term provably fair is understood.

csgolotto, the site at the center of all of the hoopla, does little to even pretend to be provably fair. Each of their games randomly generates a percentage value, and then a hash of that value and a nonce is provided, but that does nothing to assure fairness: for the duels, the player chooses a side. If the predetermined roll — which an insider would obviously know — was below 50, someone with insider knowledge could simply choose the below-50 side, and vice versa. Small betting differences slightly change the balance, but there are no apparent guards against insider abuse, and it's incredible that anyone trusted these sites.

The pool/jackpot game relies upon a percentage being computed for a game — say 66.666666% — and then as players enter they buy stacked “tickets”, the count depending upon the value of their entries. So player 1 might have tickets 1-100, player 2 tickets 101-150, and player 3 tickets 151-220. The round expires and the 66.6666% ticket is #146, so player 2 wins the pot.

A variety of other CS:GO gambling sites1 use the same premise. There is nothing provably fair about it. If an insider knows that a given jackpot's win percentage is 86%, it is a trivial exercise to compute exactly how many tickets to "buy" to take the pot, at the right time, with the technical ability to ensure the final entry. It is equally obvious when to bow out of a given pool.
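
To make the exploit concrete, here is a minimal sketch of the stacked-ticket mechanic and the insider's arithmetic. The player names, the 86% roll, and the rounding rule are all illustrative; real sites differ in the details:

// Sketch of the stacked-ticket jackpot mechanic described above, and why a
// pre-known winning percentage breaks it. All names and numbers are illustrative.
#include <iostream>
#include <string>
#include <vector>

struct Entry {
    std::string player;
    int tickets;  // tickets bought, proportional to the value of the entry
};

// The winning ticket is the predetermined percentage applied to the final ticket count
// (exact rounding rules vary by site; this is one plausible choice).
static int winningTicket(double winPercent, int totalTickets) {
    return static_cast<int>(winPercent / 100.0 * totalTickets) + 1;  // tickets are 1-based
}

static std::string winner(const std::vector<Entry> &entries, double winPercent) {
    int total = 0;
    for (const auto &e : entries) total += e.tickets;

    int target = winningTicket(winPercent, total);
    int cursor = 0;
    for (const auto &e : entries) {
        cursor += e.tickets;  // this player holds tickets (cursor - tickets + 1) .. cursor
        if (target <= cursor) return e.player;
    }
    return entries.back().player;  // unreachable with valid input
}

int main() {
    // The pool as described: player 1 holds 1-100, player 2 holds 101-150, player 3 holds 151-220.
    std::vector<Entry> pool = {{"player1", 100}, {"player2", 50}, {"player3", 70}};
    std::cout << "Honest outcome at 66.6666%: " << winner(pool, 66.6666) << std::endl;

    // An insider who knows the percentage in advance just solves for an entry size
    // that places the winning ticket inside their own range as the final entrant.
    double knownPercent = 86.0;
    for (int buy = 1; buy <= 10000; ++buy) {
        std::vector<Entry> rigged = pool;
        rigged.push_back({"insider", buy});
        if (winner(rigged, knownPercent) == "insider") {
            std::cout << "Insider takes the pot by buying " << buy << " tickets." << std::endl;
            break;
        }
    }
    return 0;
}

The hash-plus-nonce dressing does nothing to prevent this: knowing the percentage before the round closes is all it takes.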

Some sites have tried to mix this up further, but to a one, each was easily exploitable by anyone with insider knowledge.

There is nothing provably fair about it.

1 – I had a couple of illustrative examples of extra dubious claims of “provably fair”, including a site that added hand-rigged cryptography that actually made it even less fair for players. Under the scrutiny and bright lights, a lot of these sites seem to have scurried into the dark corners, shutting down and removing themselves entirely from Google Search.

We Must Stop Jealously Citing the Dunning-Kruger Effect

What the Dunning-Kruger study actually demonstrated: among 65 Cornell undergrads (ergo a pretty selective, smart bunch to start with, likely accustomed to comparing well), the worst performers thought their performance would be slightly above the average of the group, while the best performers thought their performance would be the highest of all. The average performers also thought their performance would be above average.

The participants had no measure to compare against each other, but from a general perspective were likely, to a person, far above the normal population average. They also had the difficult task of ranking themselves not by actual performance, but by percentile: one could score 95 out of 100 on a difficult assignment and still end up at the bottom of the percentile ranking. For a group that shared enormous commonalities (same academic background, same life situation, all having gotten into an exclusive school), it is no surprise that self-evaluations compressed towards the center.

What many in this industry endlessly claim the Dunning-Kruger study demonstrated: people who think their performance is above average must actually be below average, and people who think they are average or below must actually be above average (the speaker almost always slyly promoting their own humility as a demonstration of their superiority, in a bit of an ironic twist. Most rhetoric is self-serving). The shallow meme is that people with confidence in their abilities must actually be incompetent…Dunning-Kruger and all.

Cheap rhetoric turns cringeworthy when it's cited to pull down others. Do a search for Dunning-Kruger citations on developer forums or blogs and you'll find an endless series of transparent attempts to pitch why the speaker is better than everyone else.

No one has ever gained confidence, position or ranking by projecting some myth that they think undermines the people who have it. It just makes the speaker look sad. The same thing can be seen in competitive online gaming like CS:GO, where everyone better than the speaker is a hacker or spends too much time playing the game, and everyone worse is just naturally less skilled and should delete the game, they're so terrible. It's good for a laugh, at least until it gains truth through repeated assertion.

This is one of those posts that, if you're a blogger, isn't a good way to grow subscribers: invariably there are some readers who spend their professional life calling everyone "below" them hacks, and everyone "above" them hacks who suffer from the Dunning-Kruger effect. It's common on any developer-related forum. Eh. Thankfully I don't care about reader numbers.

The Reports of HTML’s Death Have Been Greatly Exaggerated…?

Feedback

Yesterday's post, "Android Instant Apps / The Slow, Inexorable Death of HTML", surprisingly accumulated some 35,000 uniques in a few hours. It yielded feedback with recurring sentiments that are worth addressing.

it is weird the article trying to sell the idea that apps are better posted XKCD images stating otherwise

While there are situations where a native app can certainly do things that a web app can’t, and there are some things it can simply do better, the prior entry wasn’t trying to “sell” the idea that apps are inherently better (and I have advocated the opposite on here and professionally for years where the situation merits). It was simply an observation of Google’s recent initiative, and what the likely outcome will be.

Which segues to another sentiment-

The reverse is happening. Hybrid apps are growing in number. CSS/JS is becoming slicker than ever.

The web is already a universal platform, so why the ████ would you code a little bit of Java for Android instead of writing it once for everything?

In the prior entry I mentioned that some mobile websites are growing worse. The cause of this decline isn’t that HTML5/JS/CSS or the related stack is somehow rusting. Instead it’s that many of these sites are so committed to getting you into their native app that they’ll sabotage their web property for the cause.

No, I don’t want to install your app. Seriously.

Add to that the mobile web's huge upsurge in advertising dark patterns: the sort of nonsense that has mostly disappeared from the desktop web, courtesy of the nuclear threat of ad blockers. Given that many on the mobile web don't use these tools, the domain is rife with endless redirects, popovers, intentionally delayed page re-flows to encourage errant clicks (a strategy that is purely self-destructive in the longer term, as every user will simply hit back, undermining the CPC), overridden swipe behaviors, background space turned into ad clicks, and so on.

The technology of the mobile web is top notch, but the implementation is an absolute garbage dump across many web properties.

So you have an endless list of web properties that desperately want you to install their app (which they already developed, often in duplicate, triplicate…this isn’t a new thing), and who are fully willing to make your web experience miserable. Now offer them the ability to essentially force parts of that app on the user.

The uptake rate is going to be incredibly high. It is going to become prevalent. And with it, the treatment of the remaining mobile webfugees is going to grow worse.

On Stickiness

I think it's pretty cool to see a post get moderate success, and I enjoy the exposure. One of the things that has changed in the world of the web, though, is the reduced stickiness of visitors.

A decade or so ago, getting a front page on Slashdot — I managed it a few times in its heyday — would yield visitors who would browse around the site, often for hours on end, subscribe to the RSS feed, and so on. It was a very sticky success, and the benefits echoed long after the initial exposure died down. Part of the reason is that there simply wasn't a lot of content, so you couldn't just refresh Slashdot and browse to the next 10 stories while avoiding work.

Having had a few HN and Reddit success stories over the past while, I've noticed a very different pattern. People pop on and read a piece, their time on site equaling the time it takes to read to the end, and then they leave. I would say less than 0.5% look at any other page.

There is no stickiness. When the exposure dies down, it’s as if it didn’t happen at all.

Observing my own habits, this is exactly how I use the web now: I jump from various programming forums to the various papers and entries and posts, and then I click back. I never really notice the author, I don't bookmark their site, and I don't subscribe to their feed. The rationale is that when they have another interesting post, maybe it'll appear on the sites I visit.


This is just the new norm. It’s not good or bad, but it’s the way we utilize a constant flow of information. The group will select and filter for us.

While that's a not very interesting observation, I should justify those paragraphs: I believe this is the cause of both the growing use of dark patterns on the web (essentially you're to be exploited as much as possible during the brief moment they have your attention, and the truth is you probably won't even remember the site that tricked you into clicking six ads and sent you on a vicious loop of redirects) and the desperation of sites to get their app installed, where they think they'll gain a more permanent space in your digital world.

Android Instant Apps / The Slow, Inexorable Death of HTML

Android Instant Apps were announced at the recent Google I/O. Based upon the available information1, Instant Apps offer the ability for a website to instead transparently open as a specific activity/context in an Android app, the device downloading the relevant app modules (e.g. the specific fragments and activities necessary for the task) on demand, modularized to only what the context needs.

URL-to-app handoff functionality already exists in Android. If you have the IMDB app, for instance, and open an IMDB URL, you will find yourself in the native app, often without prompting: from the Google Search app it is automatic, although from third-party sites it will prompt whether you want to use the app or not, offering to always use the association.

www.imdb.com/title/tt0472954/

Click on that link in newer versions of Android (in a rendering agent that leverages the standard launch intents), with IMDB installed, and you’ll be brought to the relevant page in that app.

Instant Apps presumably entail a few basic changes-

  • Instead of devices individually having a list of app links (e.g. "I have apps installed that registered for the IMDB, Food Network and Buzzfeed domains, so keep an eye out for ACTION_VIEW intents for any of those domains"), there will be a Google-managed master list that is consulted and likely downloaded/cached regularly. These link matches may be refined to a URL subset (where the current functionality covers a full domain).
  • An update to Android Studio / the build platform will introduce more granular artifact analysis/dependency slicing. This already exists to a degree, in that an APK is a ZIP of the various binary dependencies (e.g. one per target processor if you're using the NDK), resources, and so on; presumably, however, the activities, classes and compiled resources will be further bifurcated, their dependencies documented.
  • When you open a link covered by the master list, the device will check for the relevant app installed. If it isn’t found, it will download the necessary dependencies, cache them in some space-capped instant app area, initialize a staged environment area, and then launch the app.

They promise support, via Google Play Services, all the way back to Android 4.1 (Jelly Bean), which encompasses 95.7% of active users. Of course individual apps and their activities may use functionality from newer SDKs, and may mandate a newer minimum, so this doesn't mean that all Instant Apps will work on all 95.7% of devices.

The examples given include opening links from a messaging conversation, and from the Google Search app (which is a native implementation, having little to do with HTML).

The system will certainly provide a configuration point allowing a device to opt out of this behavior, but it clearly will become the norm. Google has detailed some enhanced restrictions on the sandbox of such an instant app — no device identification or services, for instance — but otherwise it uses the on-demand permission model and all of the existing APIs like a normal app (detailed here). As always, those who don't understand this are fearmongering about it being a security nightmare, just as when auto app-updates were rolled out there were a number of "can you say bricked?" responses.

And to clear up a common misconception, these apps are not run “in the cloud”, with some articles implying that they’re VNC sessions or the like. Aside from some download reductions for the “instant” scenario (Instant Apps are apparently capped at 4MB for a given set of functionality, and it’s tough to understand how the rest of the B&H app fills it out to 37MB), the real change is that you’re no longer asked — the app is essentially forced on you by default — and it doesn’t occupy an icon on your home screen or app drawer. It also can’t launch background services, which is a bonus.

Unfortunately, the examples given demonstrate little benefit over the shared-platform HTML web — the BuzzFeed example is a vertical list of videos, while the B&H example’s single native benefit was Android Pay — though there are many scenarios where the native platform can admittedly provide an improved, more integrated and richer experience.

It further cements the HTML web as a second-class citizen (these are all web-service powered, so simply saying "the web" seems dubious). I would cynically suggest that the primary motivation for this move is the increased adoption of ad blockers on the mobile HTML web: it's a much more difficult proposition to block ads within native apps, while adding uBlock to the Firefox mobile browser is trivial, and increasingly necessary given the abusive, race-to-the-bottom behaviors becoming prevalent.

And it will be approximately one day before activities that recognize they’re running as instant apps start endlessly begging users to install the full app.

Ultimately I don't think this is some big strategic shift, and such analyses are usually nonsensical. But it remains to be seen what the impact will be. Already many sites treat their mobile HTML visitors abusively: one of the advocacy articles heralding this move argued that it's great because look at how terrible the Yelp website has become, which is a bit of a vicious cycle. If Yelp can soon lean on a situation where a significant percentage of users will automatically find themselves in the app, its motivation for presenting a decent web property declines even further.

1 – I have no inside knowledge of this release, and of course I might be wrong in some of the details. But I’m not wrong. Based upon how the platform is implemented, and the functionality demonstrated, I’m quite confident my guesses are correct.

Achieving a Perfect SSL Labs Score with C(++)

A good article making the rounds details how to achieve a perfect SSL Labs score with Go. In the related discussion (also on reddit), many noted that such a pursuit is impractical: if you're causing connectivity issues for some of your users, achieving minor improvements in theoretical security might be a Pyrrhic victory.

A perfect score is not a productive pursuit for most public web properties, and an A+ with a couple of 90s is perfectly adequate and very robustly secure for most scenarios.

Striving for 100 across the board is nonetheless an interesting, educational exercise. The Qualys people have done a remarkable job educating and informing, increasing the prevalence of best practice configurations, improving the average across the industry. It’s worth understanding the nuances of such an exercise even if not practically applicable for all situations.

It’s also worth considering that not all web endpoints are publicly consumable, and there are scenarios where cutting off less secure clients is an entirely rational choice. If your industrial endpoint is called from your industrial management process, it really doesn’t matter whether Android 2.2 or IE 6 users are incompatible.


So here's how to create a trivial implementation of a perfect-score HTTPS endpoint in C(++). It's wordier than the Go variant, though it's an easy exercise to parameterize and componentize it for reuse. And as anyone who visits here regularly knows, in no universe am I advocating creating HTTPS endpoints in C++: I'm a big fan and (ab)user of Go, C#, Java, and various other languages and platforms, but it's nice to have the option available when appropriate.

This was all done on an Ubuntu 16.04 machine with the typical build tools installed (e.g. make, git, build-essential, autoconf), though of course you could do it on most Linux variants, OSX, Ubuntu on Windows, etc. This exercise presumes that you have certificates available at /etc/letsencrypt/live/example.com/

(where example.com is replaced with your domain. Replace in code as appropriate, or make arguments)

Note that if you use the default letsencrypt certificates, which are currently 2048 bits, the SSL Test will still yield an A+ with the code below; however, the score will be slightly imperfect, with only 90 for the key exchange. In practice a 2048-bit cert is considered more than adequate, so whether you sweat this and update to a 4096-bit cert is up to you (as mentioned in the Go entry, you can obtain a 4096-bit cert via the lego Go app, using the

--key-type "rsa4096"

argument).
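
For example, a hypothetical invocation looks something like the following (the email and domain are placeholders, and lego's flags have shifted between versions, so check its current help output):

lego --email="you@example.com" --domains="example.com" --key-type "rsa4096" run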

1 – Install openssl and the openssl development library.

sudo apt-get update && sudo apt-get install openssl libssl-dev

2 – Create a DH param file. This is used by OpenSSL for the DH key exchange.

sudo openssl dhparam -out /etc/letsencrypt/live/example.com/dh_param_2048.pem 2048

3 – Download, make, and install libevent v2.1.5 "beta". Install as root and refresh the library cache (e.g. sudo ldconfig).

https://github.com/libevent/libevent/releases/tag/release-2.1.5-beta

4 – Start a new C++ application linked to libcrypto, libevent, libevent_openssl, libevent_pthreads and libssl.
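
If you're building by hand rather than through an IDE, the link line looks something like the following (the source and binary names are illustrative):

g++ -std=c++11 -o https_server main.cpp -lssl -lcrypto -levent -levent_openssl -levent_pthreads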

5 – Add the necessary includes-

#include <cstdio>
#include <cstdlib>
#include <iostream>

#include <openssl/ssl.h>
#include <openssl/err.h>
#include <openssl/rand.h>
#include <openssl/stack.h>
#include <openssl/ec.h>
#include <openssl/dh.h>
#include <openssl/pem.h>

#include <event.h>
#include <event2/listener.h>
#include <event2/bufferevent_ssl.h>
#include <evhttp.h>

6 – Initialize the SSL context-

SSL_CTX *
ssl_init(void) {
    SSL_CTX *server_ctx;

    SSL_load_error_strings();
    SSL_library_init();

    if (!RAND_poll())
        return nullptr;

    server_ctx = SSL_CTX_new(SSLv23_server_method());

    // Load our certificates
    if (!SSL_CTX_use_certificate_chain_file(server_ctx, "/etc/letsencrypt/live/example.com/fullchain.pem") ||
            !SSL_CTX_use_PrivateKey_file(server_ctx, "/etc/letsencrypt/live/example.com/privkey.pem", SSL_FILETYPE_PEM)) {
        std::cerr << "Couldn't read chain or private key" << std::endl;
        return nullptr;
    }

    // Prepare the PFS context: a strong ECDHE curve plus the DH parameters
    // generated in step 2. The SSL_CTX_set_tmp_* calls copy the keys, so the
    // local handles are freed once they've been handed to the context.
    EC_KEY *ecdh = EC_KEY_new_by_curve_name(NID_secp384r1);
    if (!ecdh) return nullptr;

    if (SSL_CTX_set_tmp_ecdh(server_ctx, ecdh) != 1) {
        EC_KEY_free(ecdh);
        return nullptr;
    }
    EC_KEY_free(ecdh);

    bool pfsEnabled = false;
    FILE *paramFile = fopen("/etc/letsencrypt/live/example.com/dh_param_2048.pem", "r");
    if (paramFile) {
        DH *dh2048 = PEM_read_DHparams(paramFile, NULL, NULL, NULL);
        if (dh2048 != NULL) {
            if (SSL_CTX_set_tmp_dh(server_ctx, dh2048) == 1) {
                pfsEnabled = true;
            }
            DH_free(dh2048);
        }
        fclose(paramFile);
    }

    if (!pfsEnabled) {
        std::cerr << "Couldn't enable PFS. Validate DH Param file." << std::endl;
        return nullptr;
    }
    
    SSL_CTX_set_options(server_ctx,
            SSL_OP_SINGLE_DH_USE |
            SSL_OP_SINGLE_ECDH_USE |
            SSL_OP_NO_SSLv2 | SSL_OP_NO_SSLv3 | SSL_OP_NO_TLSv1 | SSL_OP_NO_TLSv1_1);

    if (SSL_CTX_set_cipher_list(server_ctx, "EECDH+ECDSA+AESGCM:EECDH+aRSA+AESGCM:EECDH+ECDSA+SHA384:EECDH+ECDSA+SHA256:AES256:!DHE:!RSA:!AES128:!RC4:!DES:!3DES:!DSS:!SRP:!PSK:!EXP:!MD5:!LOW:!aNULL:!eNULL") != 1) {
        std::cerr << "Cipher list could not be initialized." << std::endl;
        return nullptr;
    }

    return server_ctx;
}

The most notable aspects are the setup of PFS, including a strong, 384-bit elliptic curve. Additionally, deprecated transport options are disabled (in this case anything under TLSv1.2), as are weak ciphers.


7 – Prepare a libevent callback that attaches a new SSL connection to each libevent connection-

struct bufferevent* initializeConnectionSSL(struct event_base *base, void *arg) {
    return bufferevent_openssl_socket_new(base,
            -1,
            SSL_new((SSL_CTX *)arg),
            BUFFEREVENT_SSL_ACCEPTING,
            BEV_OPT_CLOSE_ON_FREE);
}

8 – Hook it all together-

int main(int argc, char** argv) {
    SSL_CTX *ctx = ssl_init();
    if (ctx == nullptr) {
        std::cerr << "Failed to initialize SSL. Check certificate files." << std::endl;
        return EXIT_FAILURE;
    }

    auto base = event_base_new();
    if (!base) {
        std::cerr << "Failed to init libevent." << std::endl;
        return EXIT_FAILURE;
    }
    auto https = evhttp_new(base);

    void (*requestHandler)(evhttp_request *req, void *) = [] (evhttp_request *req, void *) {
        auto *outBuf = evhttp_request_get_output_buffer(req);
        if (!outBuf) return;
        switch (req->type) {
            case EVHTTP_REQ_GET:
                {
                    // the HSTS header is part of the perfect-score checklist
                    auto headers = evhttp_request_get_output_headers(req);
                    evhttp_add_header(headers, "Strict-Transport-Security", "max-age=63072000; includeSubDomains");
                    evbuffer_add_printf(outBuf, "<html><body><center><h1>Request for - %s</h1></center></body></html>", req->uri);
                    evhttp_send_reply(req, HTTP_OK, "", outBuf);
                }
                break;
            default:
                evhttp_send_reply(req, HTTP_BADMETHOD, "", nullptr);
                break;
        }
    };

    // add the callbacks
    evhttp_set_bevcb(https, initializeConnectionSSL, ctx);
    evhttp_set_gencb(https, requestHandler, nullptr);

    // binding to 443 requires elevated privileges (or CAP_NET_BIND_SERVICE)
    auto https_handle = evhttp_bind_socket_with_handle(https, "0.0.0.0", 443);
    if (!https_handle) {
        std::cerr << "Failed to bind to port 443." << std::endl;
        return EXIT_FAILURE;
    }

    if (event_base_dispatch(base) == -1) {
        std::cerr << "Failed to run message loop." << std::endl;
        return EXIT_FAILURE;
    }

    return 0;
}
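
Once it's running with real certificates, you can spot-check the negotiated protocols from another machine before running the full SSL Labs test. For example (example.com again standing in for your domain):

openssl s_client -connect example.com:443 -tls1_2

should complete a handshake, while forcing an older protocol (swapping in -tls1_1 or -ssl3, for example) should fail to negotiate.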

Should you strive for 100? Maybe not. Should you even have SSL termination in your C(++) apps? Maybe not (terminate with something like nginx and you can take advantage of all of the modules available, including compression, rate limiting, easy resource ACLs, etc.). But it is a tool at your disposal if the situation is appropriate. And of course the above is quickly hacked together, non-production-ready sample code (with some small changes it can be made more scalable, achieving enormous performance levels on commodity servers), so use it at your own risk.

Just another fun exercise. The lightweight version of this page can be found at https://dennisforbes.ca/index.php/2016/05/23/achieving-a-perfect-ssl-labs-score-with-c/amp/, per “Hanging Chads / New Projects / AMPlified“.

Note that this is not the promised "Adding Secure, Authenticated HTTPS Interop to a C(++) Project" piece, which is still in the works. That undertaking is more involved, covering secure authentication and authorization, custom certificate authorities, and client certificates.

Disappearing Posts / Financing / Rust

1984

While in negotiations I have removed a few older posts temporarily. The “Adding Secure, Authenticated HTTPS Interop to a C(++) Project” series, for instance.

I can't make the time to focus on it at the moment and don't want it to sit like a bad promise while the conclusion awaits (for technical pieces I really try to ensure 100% accuracy, which is time-consuming), so I will republish it when I can finish it. I note this given a few comments where helpful readers thought some sort of data corruption or transactional rollback was afoot. All is good.

Rust

Occasionally I write things on here that lead some to inaccurately extrapolate my position. In a recent post, for instance, I noted that Rust (the systems language) seems to be used more for advocacy — particularly of the "my big brother is tougher than your big brother" anti-Go sort — than for creating actual solutions.

This wasn't a criticism of Rust. So it was a bit surprising when I was asked to write a "Why Go demolishes Rust" article (paraphrasing, but that was the intent) for a technical magazine.

I don't think Go demolishes Rust. Rust is actually a very exciting, well-considered, modern language. It's a bit young at the moment, but it has gotten past the rapid changes that occurred earlier in its lifecycle.

Language tourism is a great practice for all developers. Not only might we learn new tools that prove useful in our work; at a minimum we'll look at the languages we use and leverage daily in a different way, often coming to understand their design compromises and benefits through comparison.

I would absolutely recommend that everyone give Rust a spin. The tutorials are very simple, the feedback fast and rewarding.

Selling Abilities

When selling oneself, particularly in an entrepreneurial effort where you’re the foundation of the exercise and your abilities are key, you can’t leverage social lies like contrived self-deprecation or restraint. It’s pretty much a given that you have to be assertive and confident in your abilities, because that’s ultimately what you’re selling to people.

This doesn't mean claims of infallibility. It means that you have a good understanding of what you are capable of doing, based upon empirical evidence, and that you are willing and hoping to be challenged on it.

A few days ago I had to literally search whether Java passes array members by reference or value (it was a long day of jumping between a half dozen languages and platforms). I’m certainly fallible. Yet I am fully confident that I can quickly architect and/or build an excellent implementation of a solution to almost any problem. Because that’s what my past has demonstrated.

Generally that goes well. Every now and then, however, I've encountered someone so offended by pitch confidence that, without bothering to learn a thing about me or my accomplishments, or taking me up on my open offer to demonstrate them, they respond negatively or dismissively. This seems to be particularly true among Canadians (I am, of course, a Canadian; this country has a widely subscribed-to crab mentality, with a "who do you think you are?" sort of natural reaction among many). Not all Canadians by any measure, but enough that it becomes notable when you regularly deal with people from other countries and start to notice the stark difference.