Provably Fair / Gaming

A little bit of a diversion today, but just wanted to belatedly post a bit of commentary on the whole recent game/virtual item gambling controversy.

EDIT: 2016-07-14 – Shortly after I posted this, Valve announced that they were going to start shutting off third party API access if it’s used for gambling. This morning Twitch essentially also banned CS:GO item gambling (though they’re trying to avoid any admission of guilt or complicity by simply deferring to Valve’s user agreements).


Like many of you, work and family demands leave me little time for gaming. One of the few games I still enjoy, one that allows for short drop-in sessions and has been a worthwhile mental diversion when dealing with difficult coding problems, is Counter-Strike: Global Offensive (CS:GO).

The game is a classic twitch shooter. It has a very limited, curated set of weapons, and most rounds are played on a limited number of proven maps.

I’m a decent player (though it was the first game where I really had that “I’m too old for this” sense, with my eleven-year-old son absolutely dominating me). It’s a fun, cathartic diversion.

Nine games out of ten I end up muting every other player: the player base is largely adolescent, and many desperately want to be heard droning on. The worst players seem to be the most opinionated, so in every match the guy sitting at the bottom in points and frags has the most to say about the failures of everyone else’s gameplay. (This is an observation that holds across many industries, including software development, which is full of people who’ve created nothing and achieved little explaining why everyone else is doing it wrong.)

The CS:GO community also has an enormous gambling problem, as you may have heard. This came to a head when a pair of popular YouTubers were outed as owners of a CS:GO skin gambling site. The two had posted a number of dubious “get rich quick!” videos demonstrating highly improbable success, enticing their legions of child fans to follow in their possibly rigged footsteps.

Skins, to explain, are nothing more than textures you apply to weapons. The game often puts you in situations where other players spectate your play, and having unique, less common skins is desirable as a status symbol. So much so that there is a multi-billion dollar market in these textures, some of which sell for hundreds of dollars (Steam operates a complex, very active marketplace to ensure liquidity).

The whole thing is just dirty and gross, with Valve sitting at the center of an enormous gambling empire mostly exploiting children all spending those birthday gift cards. It casts a shadow over the entire game, and those awaiting Half Life 3 will probably wait forever, as Valve seems to be distracted into only working on IP that features crates and keys.

The machinations of crates and keys, where you pay real money for small odds of winning something worth more money (again, Valve provides the marketplace and helpfully assigns the real-world value), are gambling. It’s only a matter of time before the hammer falls hard on these activities. Valve is operating in a very gray area, and they deserve some serious regulatory scrutiny.

Anyways, while being entertained by that whole sordid ordeal, the topic of “fair” online gambling came up. From this comes the term “provably fair”, which is a way that many gambling enterprises add legitimacy to what otherwise might be a hard gamble to stomach.

It’s one thing to gamble on a physical roulette wheel, but at least you know the odds (assuming the physics of the wheel haven’t been rigged…). It’s quite another to gamble on an online roulette wheel where your odds of winning may actually be 0%.

“You bet black 28… so my ‘random’ generator now picks red 12…”

So the premise of provably fair came along. With it you can have some assurance that the game is fair. For instance, for the roulette wheel the site might tell you in advance that the upcoming spin, game 1207, has the SHA1 hash 4e0fe833734a75d6526b30bc3b3620d12799fbab. After the game it reveals that the hashed string was “roaJrPVDRx – GAME 1207 – 2016-07-13 11:00AM – BLACK 26”, and you can confirm both that it hashes to the advertised value and that the outcome didn’t change based upon your bet.

That’s provably fair. It still doesn’t mean that the site will ever actually pay out, or that they can’t simply claim you bet on something different, but the premise is that some sort of transparency is available. With a weak hash (don’t actually use SHA1; it’s here purely for demonstration) or a low-entropy committed string, it might even allow players to hack the game: to know the future before the future.

You can find provably fair defined on Wikipedia, though the definition there is suspect, seemingly written by someone who misused the term and was called on it (“it is susceptible to unscrupulous players or competitors who can claim that the service operator cheats”. What?).

Anyways, the world of CS:GO gambling is an interesting place to evaluate how well the term provably fair is understood.

csgolotto, the site at the center of all of the hoopla, does little to even pretend to be provably fair. Each of their games randomly generates a percentage value, and a hash of the value and a nonce is provided, but that does nothing to assure fairness. For the duels, the player chooses a side: if the predetermined roll (which an insider would easily know) was below 50, someone with insider knowledge could simply choose the below-50 side, and vice versa. Small betting differences slightly change the balance, but there are no apparent guards against insider abuse, and it’s incredible that anyone trusted these sites.

The pool/jackpot game relies upon a percentage being computed for a game (say 66.6666%). As players enter they buy stacked “tickets”, the count depending upon the value of their entries. So player 1 might have tickets 1-100, player 2 tickets 101-150, and player 3 tickets 151-220. The round expires and the 66.6666% ticket is #146, so player 2 wins the pot.

A variety of other CS:GO gambling sites1 use the same premise. There is nothing provably fair about it. If an insider knows that a given jackpot win percentage is 86%, it is a trivial exercise to compute exactly how many tickets to “buy”, at the right time, to take the pot, with the technical ability to ensure theirs is the final entry. It is equally obvious when to bow out of a given pool.

Some sites have tried to mix this up further, but to a one each was easily exploitable by anyone with insider knowledge.

There is nothing provably fair about it.

1 – I had a couple of illustrative examples of extra dubious claims of “provably fair”, including a site that added hand-rigged cryptography that actually made it even less fair for players. Under the scrutiny and bright lights, a lot of these sites seem to have scurried into the dark corners, shutting down and removing themselves entirely from Google Search.

We Must Stop Jealously Citing the Dunning-Kruger Effect

What the Dunning-Kruger study actually demonstrated: Among 65 Cornell undergrads (ergo, a pretty selective, smart bunch to start, likely accustomed to comparing well), the “worst” performing thought their performance would be slightly above the average of the group, while the best performing thought their performance would be highest of all. The average performing thought their performance would also be above average.

The participants had no measure to compare against each other, but from a general perspective were likely, to an individual, far above the normal population average. They also had the difficult task of ranking themselves not by actual performance, but by percentile: it was a situation where one could score 95 out of 100 on a difficult assignment and still end up at the bottom of the percentile ranking. As a group that shared enormous commonalities (same academic background, life situation, all having gotten into an exclusive school), it is no surprise that self-evaluations compressed towards the center.

What many in this industry endlessly think the Dunning-Kruger study demonstrated: People who think their performance is above average must actually be below average, and the people who think they are average or below must actually be above average (the speaker almost always slyly promoting their own humility as a demonstration of their superiority, in a bit of an ironic twist. Most rhetoric is self-serving). The shallow meme is that people with confidence in their abilities must actually be incompetent…Dunning-Kruger and all.

Cheap rhetoric turns cringeworthy when it’s cited to pull down others. Do a search for Dunning-Kruger citations on developer forums or blogs and you’ll find an endless series of transparent attempts to pitch why the speaker is better than everyone else.

No one has ever gained confidence, position or ranking by projecting some myth that they think undermines the people who have it. It just makes the speaker look sad. The same thing can be seen in competitive online gaming like CS:GO, where everyone better than the speaker is a hacker or spends too much time playing, and everyone worse is just naturally less skilled and should delete the game, they’re so terrible. It’s good for a laugh, at least until it gains truth through repeated assertion.

This is one of those posts that, if you’re a blogger, isn’t a good way to grow subscribers: invariably there are some readers who spend their professional life calling everyone “below” them hacks, and everyone “above” them hacks who suffer the Dunning-Kruger effect. It’s common on any developer-related forum. Eh. Thankfully I don’t care about reader numbers.

The Reports of HTML’s Death Have Been Greatly Exaggerated…?

Feedback

Yesterday’s post titled “Android Instant Apps / The Slow, Inexorable Death of HTML” surprisingly accumulated some 35,000 or so uniques in a few hours. It has yielded feedback containing recurring sentiments that are worth addressing.

it is weird the article trying to sell the idea that apps are better posted XKCD images stating otherwise

While there are situations where a native app can certainly do things that a web app can’t, and there are some things it can simply do better, the prior entry wasn’t trying to “sell” the idea that apps are inherently better (and I have advocated the opposite on here and professionally for years where the situation merits). It was simply an observation of Google’s recent initiative, and what the likely outcome will be.

Which segues to another sentiment-

The reverse is happening. Hybrid apps are growing in number. CSS/JS is becoming slicker than ever.

The web is already a universal platform, so why the ████ would you code a little bit of Java for Android instead of writing it once for everything?

In the prior entry I mentioned that some mobile websites are growing worse. The cause of this decline isn’t that HTML5/JS/CSS or the related stack is somehow rusting. Instead it’s that many of these sites are so committed to getting you into their native app that they’ll sabotage their web property for the cause.

No, I don’t want to install your app. Seriously.

Add that the mobile web has seen a huge upsurge in advertising dark patterns. The sort of nonsense that has mostly disappeared from the desktop web, courtesy of the nuclear threat of ad blockers. Given that many on the mobile web don’t utilize these tools, the domain is rife with endless redirects, popovers, the intentionally delayed page re-flows to encourage errant clicks (a strategy that is purely self-destructive in the longer term, as every user will simply hit back, undermining the CPC), overriding swipe behaviors, making all background space an ad click, and so on.

The technology of the mobile web is top notch, but the implementation is an absolute garbage dump across many web properties.

So you have an endless list of web properties that desperately want you to install their app (which they already developed, often in duplicate, triplicate…this isn’t a new thing), and who are fully willing to make your web experience miserable. Now offer them the ability to essentially force parts of that app on the user.

The uptake rate is going to be incredibly high. It is going to become prevalent. And with it, the treatment of the remaining mobile webfugees is going to grow worse.

On Stickiness

I think it’s pretty cool to see a post get moderate success, and enjoy the exposure. One of the trends that has changed in the world of the web, though, is in the reduced stickiness of visitors.

A decade or so ago, getting a front page on Slashdot — I managed it a few times in its hey-day — would yield visitors who would browse around the site often for hours on end, subscribe to the RSS feed, etc. It was a very sticky success, and the benefits echoed long after the initial exposure died down. A part of the reason is that there simply wasn’t a lot of content, so you couldn’t just refresh Slashdot and browse to the next 10 stories while avoiding work.

Having had a few HN and Reddit success stories over the past while, I’ve noticed a very different pattern. People pop on and read a piece, their time on site equaling the time it takes to read to the end, and then they leave. I would say less than 0.5% look at any other page.

There is no stickiness. When the exposure dies down, it’s as if it didn’t happen at all.

Observing my own uses, this is exactly how I use the web now: I jump to various programming forums, visiting the various papers and entries and posts, and then I click back. I never really notice the author, I don’t bookmark their site, and I don’t subscribe to their feed. The rationale is that when they have another interesting post, maybe it’ll appear on the sites I visit.


This is just the new norm. It’s not good or bad, but it’s the way we utilize a constant flow of information. The group will select and filter for us.

While that’s a not very interesting observation, I should justify those paragraphs: I believe this is the cause of both the growing utilization of dark patterns on the web (essentially you’re to be exploited as much as possible during the brief moment they have your attention, and the truth is you probably won’t even remember the site that tricked you into clicking six ads and sent you on a vicious loop of redirects), and the desperation to install their app where they think they’ll gain a more permanent space in your digital world.

Android Instant Apps / The Slow, Inexorable Death of HTML

Android Instant Apps were announced at the recent Google I/O. Based upon available information1, Instant Apps offer the ability for links to websites to instead transparently open as a specific activity/context in an Android app, with the device downloading the relevant app modules (e.g. the specific fragments and activities necessary for the need) on demand, modularized to only what the context requires.

The querystring app execution functionality already exists in Android. If you have the IMDB app, for instance, and open an IMDB URL, you will find yourself in the native app, often without prompting: from the Google Search app it is automatic, although on third party sites it will prompt whether you want to use the app or not, offering to always use the association.

www.imdb.com/title/tt0472954/

Click on that link in newer versions of Android (in a rendering agent that leverages the standard launch intents), with IMDB installed, and you’ll be brought to the relevant page in that app.
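For context, that existing association is declared by the app itself via an intent filter on the handling activity. A hypothetical sketch follows; the activity name and path prefix are illustrative (not IMDB’s actual manifest), with only the imdb.com host taken from the example above:

```xml
<!-- Hypothetical intent filter for an app handling imdb.com title links.
     android:autoVerify (Android 6.0+) asks the system to verify the
     domain association at install time, skipping the chooser prompt. -->
<activity android:name=".TitleActivity">
    <intent-filter android:autoVerify="true">
        <action android:name="android.intent.action.VIEW" />
        <category android:name="android.intent.category.DEFAULT" />
        <category android:name="android.intent.category.BROWSABLE" />
        <data android:scheme="http" android:host="www.imdb.com" android:pathPrefix="/title/" />
        <data android:scheme="https" android:host="www.imdb.com" android:pathPrefix="/title/" />
    </intent-filter>
</activity>
```

Instant Apps presumably move the equivalent of this registration out of on-device manifests and into a Google-managed registry, as described below.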

Instant Apps presumably entail a couple of basic changes-

  • Instead of devices individually having a list of app links (e.g. “I have apps installed that registered for the IMDB, Food Network and Buzzfeed domains, so keep an eye out for ACTION_VIEW intents for any of the respective domains”), there will be a Google-managed master list that will be consulted and likely downloaded/cached regularly. These link matches may be refined to a URL subset (where the current functionality covers a full domain).
  • An update to Android Studio / the build platform will introduce more granular artifact analysis/dependency slicing. Something like this already exists, in that an APK is a ZIP of the various binary dependencies (e.g. one per target processor if you’re using the NDK), resources, and so on; presumably, however, the activities, classes and compiled resources will be bifurcated, with their dependencies documented.
  • When you open a link covered by the master list, the device will check for the relevant app installed. If it isn’t found, it will download the necessary dependencies, cache them in some space-capped instant app area, initialize a staged environment area, and then launch the app.

They promise support, via Google Play Services, all the way back to Android 4.1 (Jellybean), which encompasses 95.7% of active users. Of course individual apps and their activities may use functionality leveraging newer SDKs, and may mandate a newer minimum, so this doesn’t mean that all instant apps will work on all 95.7% of devices.

The examples given include opening links from a messaging conversation, and from the Google Search app (which is a native implementation, having little to do with HTML).

The system will certainly provide a configuration point allowing a device to opt out of this behavior, but it clearly will become the norm. Google has detailed some enhanced restrictions on the sandbox of such an instant app (no device identification or services, for instance), but otherwise it uses the on-demand permission model and all of the existing APIs like a normal app. As is always the case, those who don’t understand this are fear mongering about it being a security nightmare, just as when automatic app updates were rolled out there were a number of “can you say bricked?” responses.

And to clear up a common misconception, these apps are not run “in the cloud”, with some articles implying that they’re VNC sessions or the like. Aside from some download reductions for the “instant” scenario (Instant Apps are apparently capped at 4MB for a given set of functionality, and it’s tough to understand how the rest of the B&H app fills it out to 37MB), the real change is that you’re no longer asked — the app is essentially forced on you by default — and it doesn’t occupy an icon on your home screen or app drawer. It also can’t launch background services, which is a bonus.

Unfortunately, the examples given demonstrate little benefit over the shared-platform HTML web — the BuzzFeed example is a vertical list of videos, while the B&H example’s single native benefit was Android Pay — though there are many scenarios where the native platform can admittedly provide an improved, more integrated and richer experience.

It further cements the HTML web as a second class citizen (these are all web service powered, so simply saying “the web” seems dubious). I would cynically suggest that the primary motivation for this move is the increased adoption of ad blockers on the mobile HTML web: It’s a much more difficult proposition to block ads within native apps, while adding uBlock to the Firefox mobile browser is trivial, and is increasingly becoming necessary due to the abusive, race-to-the-bottom behaviors becoming prevalent.

And it will be approximately one day before activities that recognize they’re running as instant apps start endlessly begging users to install the full app.

Ultimately I don’t think this is some big strategic shift, and such analyses are usually nonsensical. But it’s to be seen what the impact will be. Already many sites treat their mobile HTML visitors abusively: one of the advocacy articles heralding this move argued that it’s great because look at how terrible the Yelp website has become, which is a bit of a vicious cycle. If Yelp can soon lean on a situation where a significant percentage of users will automatically find themselves in the app, their motivations for presenting a decent web property decline even further.

1 – I have no inside knowledge of this release, and of course I might be wrong in some of the details. But I’m not wrong. Based upon how the platform is implemented, and the functionality demonstrated, I’m quite confident my guesses are correct.

Achieving a Perfect SSL Labs Score with C(++)

A good article making the rounds details how to achieve a perfect SSL Labs Score with Go. In the related discussion (also on reddit) many noted that such a pursuit was impractical: if you’re causing connectivity issues for some of your users, achieving minor improvements in theoretical security might be Pyrrhic.

A perfect score is not a productive pursuit for most public web properties, and an A+ with a couple of 90s is perfectly adequate and very robustly secure for most scenarios.

Striving for 100 across the board is nonetheless an interesting, educational exercise. The Qualys people have done a remarkable job educating and informing, increasing the prevalence of best practice configurations, improving the average across the industry. It’s worth understanding the nuances of such an exercise even if not practically applicable for all situations.

It’s also worth considering that not all web endpoints are publicly consumable, and there are scenarios where cutting off less secure clients is an entirely rational choice. If your industrial endpoint is called from your industrial management process, it really doesn’t matter whether Android 2.2 or IE 6 users are incompatible.

[screenshot: the SSL Labs score report]

So here’s how to create a trivial implementation of a perfect score HTTPS endpoint in C(++). It’s more wordy than the Go variant, though it’s a trivial exercise to parameterize and componentize for easy reuse. And as anyone who visits here regularly knows, in no universe am I advocating creating HTTPS endpoints in C++: I’m a big fan and (ab)user of Go, C#, Java, and various other languages and platforms, but it’s nice to have the options available when appropriate.

This was all done on a Ubuntu 16.04 machine with the typical build tools installed (e.g. make, git, build-essential, autoconf), though of course you could do it on most Linux variants, OSX, Ubuntu on Windows, etc. This exercise presumes that you have certificates available at /etc/letsencrypt/live/example.com/

(where example.com is replaced with your domain. Replace in code as appropriate, or make arguments)

Note that if you use the default letsencrypt certificates, which are currently 2048 bits, the SSL Test will still yield an A+ from the below code; however the score will be slightly imperfect, with only a 90 for the key exchange. In practice a 2048-bit cert is considered more than adequate, so whether you sweat this and update to a 4096-bit cert is up to you (as mentioned in the Go entry, you can obtain a 4096-bit cert via the lego Go app, using the

--key-type "rsa4096"

argument).

1 – Install openssl and the openssl development library.

sudo apt-get update && sudo apt-get install openssl libssl-dev

2 – Create a DH param file. This is used by OpenSSL for the DH key exchange.

sudo openssl dhparam -out /etc/letsencrypt/live/example.com/dh_param_2048.pem 2048

3 – Download, make, and install libevent v2.1.5 “beta”. Install as root and refresh the library cache (e.g. sudo ldconfig).

https://github.com/libevent/libevent/releases/tag/release-2.1.5-beta

4 – Start a new C++ application linked to libcrypto, libevent, libevent_openssl, libevent_pthreads and libssl.

5 – Add the necessary includes-

#include <iostream>
#include <openssl/ssl.h>
#include <openssl/err.h>
#include <openssl/rand.h>
#include <openssl/stack.h>

#include <event.h>
#include <event2/listener.h>
#include <event2/bufferevent_ssl.h>
#include <evhttp.h>

6 – Initialize the SSL context-

SSL_CTX *
ssl_init(void) {
    SSL_CTX *server_ctx;

    SSL_load_error_strings();
    SSL_library_init();

    if (!RAND_poll())
        return nullptr;

    server_ctx = SSL_CTX_new(SSLv23_server_method());

    // Load our certificates
    if (!SSL_CTX_use_certificate_chain_file(server_ctx, "/etc/letsencrypt/live/example.com/fullchain.pem") ||
            !SSL_CTX_use_PrivateKey_file(server_ctx, "/etc/letsencrypt/live/example.com/privkey.pem", SSL_FILETYPE_PEM)) {
        std::cerr << "Couldn't read chain or private key" << std::endl;
        return nullptr;
    }

    // prepare the PFS context
    EC_KEY *ecdh = EC_KEY_new_by_curve_name(NID_secp384r1);
    if (!ecdh) return nullptr;

    int ecdhResult = SSL_CTX_set_tmp_ecdh(server_ctx, ecdh);
    EC_KEY_free(ecdh); // the context duplicates the key
    if (ecdhResult != 1) {
        return nullptr;
    }

    bool pfsEnabled = false;
    FILE *paramFile = fopen("/etc/letsencrypt/live/example.com/dh_param_2048.pem", "r");
    if (paramFile) {
        DH *dh2048 = PEM_read_DHparams(paramFile, NULL, NULL, NULL);
        if (dh2048 != NULL) {
            if (SSL_CTX_set_tmp_dh(server_ctx, dh2048) == 1) {
                pfsEnabled = true;
            }
            DH_free(dh2048); // the context duplicates the parameters
        }
        fclose(paramFile);
    }

    if (!pfsEnabled) {
        std::cerr << "Couldn't enable PFS. Validate DH Param file." << std::endl;
        return nullptr;
    }
    
    SSL_CTX_set_options(server_ctx,
            SSL_OP_SINGLE_DH_USE |
            SSL_OP_SINGLE_ECDH_USE |
            SSL_OP_NO_SSLv2 | SSL_OP_NO_SSLv3 | SSL_OP_NO_TLSv1 | SSL_OP_NO_TLSv1_1);

    if (SSL_CTX_set_cipher_list(server_ctx, "EECDH+ECDSA+AESGCM:EECDH+aRSA+AESGCM:EECDH+ECDSA+SHA384:EECDH+ECDSA+SHA256:AES256:!DHE:!RSA:!AES128:!RC4:!DES:!3DES:!DSS:!SRP:!PSK:!EXP:!MD5:!LOW:!aNULL:!eNULL") != 1) {
        std::cerr << "Cipher list could not be initialized." << std::endl;
        return nullptr;
    }

    return server_ctx;
}

The most notable aspects are the setup of PFS, including a strong, 384-bit elliptic curve. Additionally, deprecated transport options are disabled (in this case anything under TLSv1.2), as are weak ciphers.

[screenshot: the enabled cipher suites]

7 – Prepare a libevent callback that attaches a new SSL connection to each libevent connection-

struct bufferevent* initializeConnectionSSL(struct event_base *base, void *arg) {
    return bufferevent_openssl_socket_new(base,
            -1,
            SSL_new((SSL_CTX *)arg),
            BUFFEREVENT_SSL_ACCEPTING,
            BEV_OPT_CLOSE_ON_FREE);
}

8 – Hook it all together-

int main(int argc, char** argv) {
    SSL_CTX *ctx;
    ctx = ssl_init();
    if (ctx == nullptr) {
        std::cerr << "Failed to initialize SSL. Check certificate files." << std::endl;
        return EXIT_FAILURE;
    }

    auto base = event_base_new();
    if (!base) {
        std::cerr << "Failed to init libevent." << std::endl;
        return EXIT_FAILURE;
    }
    auto https = evhttp_new(base);

    void (*requestHandler)(evhttp_request *req, void *) = [] (evhttp_request *req, void *) { 
         auto *outBuf = evhttp_request_get_output_buffer(req); 
         if (!outBuf) return; 
         switch (req->type) {
            case EVHTTP_REQ_GET:
                {
                    auto headers = evhttp_request_get_output_headers(req);
                    evhttp_add_header(headers, "Strict-Transport-Security", "max-age=63072000; includeSubDomains");
                    evbuffer_add_printf(outBuf, "<html><body><center><h1>Request for - %s</h1></center></body></html>", req->uri);
                    evhttp_send_reply(req, HTTP_OK, "", outBuf);
                }
                break;
            default:
                evhttp_send_reply(req, HTTP_BADMETHOD, "", nullptr);
                break;
        }
    };

    // add the callbacks
    evhttp_set_bevcb(https, initializeConnectionSSL, ctx);
    evhttp_set_gencb(https, requestHandler, nullptr);
    auto https_handle = evhttp_bind_socket_with_handle(https, "0.0.0.0", 443);
    if (!https_handle) {
        std::cerr << "Failed to bind to port 443." << std::endl;
        return EXIT_FAILURE;
    }

    if (event_base_dispatch(base) == -1) {
        std::cerr << "Failed to run message loop." << std::endl;
        return EXIT_FAILURE;
    }

    return 0;
}

Should you strive for 100? Maybe not. Should you even have SSL termination in your C(++) apps? Maybe not (terminate with something like nginx and you can take advantage of all of the modules available, including compression, rate limiting, easy resource ACLs, etc.). But it is a tool at your disposal if the situation is appropriate. And of course the above is quickly hacked together, non-production-ready sample code (with some small changes it can be made more scalable, achieving enormous performance levels on commodity servers), so use at your own risk.

Just another fun exercise. The lightweight version of this page can be found at https://dennisforbes.ca/index.php/2016/05/23/achieving-a-perfect-ssl-labs-score-with-c/amp/, per “Hanging Chads / New Projects / AMPlified“.

Note that this is not the promised “Adding Secure, Authenticated HTTPS Interop to a C(++) Project” piece, which is still in the works. That undertaking is more involved, with secure authentication and authorization, custom certificate authorities, and client certificates.

Disappearing Posts / Financing / Rust


While in negotiations I have removed a few older posts temporarily. The “Adding Secure, Authenticated HTTPS Interop to a C(++) Project” series, for instance.

I can’t make the time to focus on it at the moment and don’t want it to sit like a bad promise while the conclusion awaits (and for technical pieces I really try to ensure 100% accuracy which is time consuming), and will republish when I can finish it. I note this given a few comments where helpful readers thought some sort of data corruption or transactional rollback was afoot. All is good.

Rust

Occasionally I write things on here that lead some to inaccurately extrapolate more about my position. In a recent post, for instance, I noted that Rust (the system language) seems to be used more for advocacy — particularly of the “my big brother is tougher than your big brother” anti-Go sort — than in creating actual solutions.

This wasn’t a criticism of Rust. So I was a bit surprised when I was asked to write a “Why Go demolishes Rust” article (paraphrasing, but that was the intent) for a technical magazine.

I don’t think Go demolishes Rust. Rust is actually a very exciting, well considered, modern language. It’s a bit young at the moment, but has gotten over the rapid changes that occurred earlier in its lifecycle.

Language tourism is a great pursuit for all developers. Not only do we learn new tools that might be useful in our pursuits, at a minimum we’ll look at the languages we do use and leverage daily in a different way, often learning and understanding their design compromises and benefits through comparison.

I would absolutely recommend that everyone give Rust a spin. The tutorials are very simple, the feedback fast and rewarding.

Selling Abilities

When selling oneself, particularly in an entrepreneurial effort where you’re the foundation of the exercise and your abilities are key, you can’t leverage social lies like contrived self-deprecation or restraint. It’s pretty much a given that you have to be assertive and confident in your abilities, because that’s ultimately what you’re selling to people.

This doesn’t mean claims of infallibility. Instead, it means having a good understanding of what you are capable of doing based upon empirical evidence, and being willing, even hoping, to be challenged on it.

A few days ago I had to literally search whether Java passes array members by reference or value (it was a long day of jumping between a half dozen languages and platforms). I’m certainly fallible. Yet I am fully confident that I can quickly architect and/or build an excellent implementation of a solution to almost any problem. Because that’s what my past has demonstrated.

Generally that goes well. Every now and then, however, I’ve encountered someone who is so offended by pitch confidence that, without bothering to know a thing about me or my accomplishments, or taking me up on my open offer to demonstrate my claims, they respond negatively or dismissively. This seems to be particularly true among Canadians (I am, of course, a Canadian; this country has a very widely subscribed crab mentality, with a “who do you think you are?” reaction coming naturally to many). Not all Canadians by any measure, but enough that the difference becomes stark when you regularly deal with people from other countries.

Beautiful Code

All code is born ugly.

It starts disorganized and inconsistent, with overlaps and redundancies and gaps.

We begin working it into an imperfect solution for an often poorly defined problem.


As we build it up like clay, a solution starts taking form. The feedback guides us in moving, removing, and adding material; it lets us add and remove detail. We learn from our mistakes.

As we iterate, the problem itself becomes clearer. We see it through the lens of possible solutions.

Every project follows this path. All code is born ugly. This is the technical debt every project incurs in its early days, paid off only through iteration. For many projects the final form remains an elusive goal, always just out of grasp.

Occasionally someone believes they have so much experience that they can skip the ugly-code step entirely: extensive up-front design, planning, standards, and guidelines. Start as a swan.


This yields the ugliest code of all: poorly suited, overly abstracted solutions that solidify like concrete, forever ill-fitted to the problem because the feedback loop was circumvented. Grotesquely overwrought solutions to the most trivial of problems; enormous line counts of boilerplate, unoriginal code for the most banal of needs. These projects become the ugliest ducklings of all.

Intel’s Decelerating Mobile Push -or- Maybe Bet Against Intel?

A year and a half ago I wrote an entry here about Intel in the mobile space. The argument was essentially that Intel was finally getting its act together, and the market was ready for Intel and x86¹ (as well as x86_64) to be a fully supported platform.

From Unity to the NDK to AVDs, Intel is now a first-class platform on Android.

But the industry runs on a very different cost and profit model from the one Intel was accustomed to. The highest-end ARM SoCs run from $30 to $70 per unit, while Intel has long lived in a world where its solutions net hundreds to thousands of dollars per unit. But the market changes, and the ARM world isn’t going away just because Intel looks the other way.

Yet Intel seems to have just killed off their aspirations for the market. Their intentionally sabotaged Atom solutions are being bested by small competitors, and they can’t make the finances work.

Bizarre. I find it hard to believe, especially given the significant noise Intel has made about targeting the IoT market. The conclusion people are drawing from the death of the mobile Atom devices and a noncompetitive radio chipset — that Intel is crawling back into its desktop and server processor shell, ceding defeat — strikes me as highly unlikely.

More likely, I would guess that Intel is going to follow Nvidia’s lead, as there’s no way they’re simply giving up on mobile devices. Nvidia once had separate mobile and desktop engineering, with the duplicated costs that entailed, but with their Maxwell chipset the same designs, architectures and processes are used on both sides of the fold.

I expect Intel to pursue the same approach, scaling its common contemporary core up and down to all needs. There are Skylake processors available right now with a TDP of 7.5W (the going range for tablet SoCs), and Core M processors with a TDP below 4W. The Atom processors never served a particular need beyond being sabotaged just enough not to threaten the more expensive markets. That approach doesn’t work anymore.

1 – As an aside, it’s impossible to discuss x86(_64) without someone confidently announcing that it’s a derelict, bad design that deserves to die, carrying on an argument from the late 1980s and early 1990s. This betrays a general ignorance of the state of x86_64 versus ARM64, and of the enormous complexity of modern ARM chips (with absolutely staggering transistor counts). They’re both great solutions.

Email Addresses Need A Checksum

I get other people’s email.

I grabbed a Google account very early, in the invitation days, and got the first.last@gmail.com gold standard (which by Google’s rules means I also have firstlast@gmail.com, fir.st.l.a.s.t@gmail.com, etc. These derivatives can be powerful, but often just confuse people).
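That dot-collapsing rule is easy to sketch in code. A minimal Python version (the function name is mine, and I’m only handling Gmail’s two delivery domains):

```python
def canonical_gmail(address: str) -> str:
    """Collapse a Gmail address to its canonical delivered-to form.

    Gmail ignores dots in the local part and is case-insensitive, so
    first.last@gmail.com, FirstLast@gmail.com, and fir.st.l.a.s.t@gmail.com
    all land in the same inbox.
    """
    local, _, domain = address.lower().partition("@")
    if domain in ("gmail.com", "googlemail.com"):
        local = local.replace(".", "")
    return f"{local}@{domain}"
```

Note that the rule is Gmail-specific: for other domains the local part is left untouched, since dots are significant there.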

Since then I’ve gotten thousands of emails intended for other people. From grocery stores. Art dealers. Hairdressers. Car rental agencies. Hoteliers. Flight itineraries. School newsletters and personal appeals. Square receipts. Alumni groups.

Where possible, when email is sent by a real human being and not a black-hole noreply source, I try to alert people to update their addresses, though it’s surprising how often the issue repeats anyway.

All of these were presumably intended for people sharing variations of my name (e.g. Denis), or with the same name but who had to resort to some sort of derivative such as firstMlast@gmail.com.

Many of the errant emails have privileged or time sensitive information, and a lot of them are actionable.

Square receipts allowing me to rate the retailer and leave feedback, alongside some CC details. Hotel reservations that allow me to cancel or change the reservation with absolutely no checks or controls beyond that the email is in hand. Rewards cards through which I can redeem or transfer points.

Some have highly personal, presumably confidential information.


In many if not most of these cases the email address was likely transmitted verbally¹: to the retailer, the grocery store clerk, or over a reservation phone line to a travel agent or hotel representative. Alternately it might have been entered on some second-screen device (my iCloud account receives email for more than one stranger’s Facebook account).

For a vanity domain it usually means it goes to some ignored catch-all, but on a densely populated host like gmail it yields deliveries of possibly sensitive data to the wrong people, as almost every variation is occupied.

Email addresses should have a checksum. A simple mechanism through which human beings can confirm that information was conveyed properly. Even the most trivial of checksums would provide value, eliminating the vast majority of simple mistakes.

For instance, take the CRC32 of a variety of email address derivatives and display two base32 digits (base32 meaning 32 symbols of 5 bits each, whereas the 32 in CRC32 refers to bits) drawn from the bottom 5 bits of the most and then the least significant bytes (totally arbitrary, but sound; this is extremely trivial in a world where launch vehicles are landing on floating barges). That would yield:

first.last@gmail.com EW
first.lst@gmail.com 6U
firstMlast@gmail.com XM
frst.last@gmail.com ZS
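A sketch of that scheme in Python. The alphabet and bit selection here are my own arbitrary choices within the description above, so the outputs won’t necessarily match the sample values listed:

```python
import zlib

# RFC 4648 base32 alphabet: 32 symbols, one per 5-bit value
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ234567"

def email_checksum(address: str) -> str:
    """Two-digit human-checkable checksum for an email address.

    Takes the CRC32 of the address, then the bottom 5 bits of the most
    significant byte and of the least significant byte, mapping each
    through the base32 alphabet.
    """
    crc = zlib.crc32(address.encode("ascii")) & 0xFFFFFFFF
    hi = (crc >> 24) & 0x1F  # bottom 5 bits of the most significant byte
    lo = crc & 0x1F          # bottom 5 bits of the least significant byte
    return ALPHABET[hi] + ALPHABET[lo]
```

Because CRC32 diffuses every input bit across the output, a single dropped or swapped character almost always lands on a different two-digit code.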

“My email address is f  i  r  s  t   period   l a s t @ g m a i l . c o m”

“Okay, got it. 6U?”

“Nope, I must have misspoken. Let me restate that – … ”

“Okay, got it. EW?”

“Perfect!”

(and of course every user would quickly know and remember their checksum. This wouldn’t be something the user is calculating on demand)

When I’m forced to use my atrophied handwriting to chicken-scratch an email address onto a form, a simple two-digit checksum should yield a go/no-go processing decision: if it isn’t a valid combination (whether because the email address or the checksum is being misread), contact me to verify, and certainly don’t start sending sensitive information.

Two digits of base32 yield 10 bits of entropy, or 1,024 variations. Obviously this is useless against intentional collisions, but against accidental data “corruption” it would catch errors more than 99.9% of the time.

Technical Aside: Email addresses can theoretically contain mixed case, but in practice the vast majority of the email infrastructure is case-insensitive.

The Pragmatic Footer

Gmail and the other vendors aren’t going to start displaying email address checksums. Forms and retailers and Square aren’t going to start changing their apps and forms to capture or display email data entry checksums.

As with prior “improve the system” exercises, this is more a thought experiment, a way of discussing concepts we regularly encounter. And while it doesn’t help over a telephone line, more of this data transfer should be happening via NFC or temporary QR codes rather than being verbally relayed.

It’s a fun thought exercise to go back and consider how the system could have been improved from the outset, given that information transfer is often human and thus imperfect. For instance, all email addresses could have carried a standardized checksum suffix: first.last+EW@gmail.com. Or whatever.

If you develop a system where humans verbally or imperfectly transmit information, and it’s important that it is stated and understood correctly, consider a checksum.

 

1 – I had a speech impediment as a young child, courtesy of a Jamie Oliver-esque mega tongue that was trying to escape the confines of my mouth. This made me more aware of the general sloppiness of verbal data transmissions as a problem, later noticing that it’s a fairly universal issue.

Code: It’s Trivial

Everyone is going crazy about a purported $1.4 million random-arrow app for the TSA. It didn’t take long before a developer “duplicated” it in 10 minutes. With some practice they could easily get it down to twenty seconds.

$252 million an hour!

Not that such a demonstration means much. Developers can throw together a veneer facsimile of almost anything that isn’t overly computationally complex in short order. I could spin out a superficial Twitter “clone” in a few hours. Where are my billions in valuation?

As Atwood said a few years ago (as everyone declared how easily they could build Stack Overflow clones) – Code: It’s Trivial (his article making my choice of title trivial). The word “trivial” is used and abused in developer circles everywhere, deployed to deride almost every solution, each of us puffing up our chests and declaring that we could totally build Facebook in a weekend, Twitter in an afternoon, and YouTube the next morning. We could make the next Angry Birds, and with Unity we could totally storm the market with a new 3D shooter if we wanted.

Because it’s all trivial. We could all do everything with ease.

It later turned out the app itself actually cost $47,000, which is still a pretty good chunk of change for such a seemingly simple app. Only $8,460,000 per hour.

But the amount of time spent in the IDE is close to irrelevant, as anyone who has worked in a large organization knows. This method of evaluating the cost of a solution is pure nonsense.

I’m not defending the TSA, their security theater, the genesis or criteria for this app, or even saying that it isn’t trivial — by all appearances it seems to be. But knowing that the TSA decided that this is what they were going to do, $47,000 doesn’t sound particularly expensive at all.

Some senior security guy didn’t just say “We need x. Do x.” and have an arrow app a day later. As two large organizations they most certainly had planning meetings and accessibility meetings. They likely argued the aesthetics of arrows. They put in checks and conditions to lock the user into the app. They likely allow for varying odds ratios (total conjecture on my part, but I doubt it was a fixed 50:50; more likely it had situational, service-based variations depending upon manpower overrides), etc. Still not in any universe a significant application, but the number of things that people can talk about, question, probe, and consider explodes.
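To be clear about how little code the conjectured odds-ratio feature would actually be, here is a hypothetical sketch (pure speculation, mirroring the guesswork above, not the TSA’s actual implementation):

```python
import random

def pick_arrow(left_odds=0.5, rng=random):
    """Return "LEFT" with probability left_odds, otherwise "RIGHT".

    A configurable-odds arrow picker rather than a fixed coin flip;
    left_odds could be driven by staffing levels or other overrides.
    """
    if not 0.0 <= left_odds <= 1.0:
        raise ValueError("left_odds must be between 0 and 1")
    # random() returns a float in [0.0, 1.0), so left_odds=1.0 always
    # yields LEFT and left_odds=0.0 always yields RIGHT.
    return "LEFT" if rng.random() < left_odds else "RIGHT"
```

The point isn’t the ten lines of logic; it’s that every knob like `left_odds` multiplies the meetings, documentation, and audit surface around those ten lines.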

Then documentation, training material (yes, line-level workers really do need to be trained on all software), auditing to ensure it actually did what it claimed (developers regularly mess up things as simple as “random number” usage), etc.

In the end, $47,000 for a piece of software deployed across an enormous organization, in a security capacity… I’m surprised the floor for something like this isn’t a couple of orders of magnitude higher.

Nothing — nothing — in a large organization is trivial. Nothing is cheap. Ever.