AMP Isn’t All Bad

AMP (Accelerated Mobile Pages) is generally reviled in the tech community. Highly critical pieces, including one from The Register, have topped Hacker News a number of times over the past couple of months. That Register piece ends with the declaration “If we reject AMP, AMP dies.”, which you can, ironically, read in AMP form.

The chief complaint is that AMP undermines or kills the web. A lesser complaint is that it has poor usability (though not all criticism has held up).

Web Developers Can’t Stop Themselves

Facebook has Instant Articles. Apple has News Format. Google has AMP.

Everyone can leverage AMP, whether producer or consumer. Bing already makes use of AMP in some capacity, as can any other indexer or caching tier. AMP, if available, is publicly announced on the source page (via a link rel="amphtml" tag) and available to all, versus the other formats, which are fed directly into a silo. A quick survey of the Hacker News front page found that almost half of the entries had AMP variants available, made possible because exposing AMP is often nothing more than a simple plug-in for your content management system (and would be a trivial programming task even on a greenfield project).
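Discovering that tag is about as simple as parsing gets. A quick sketch in Python using only the standard library (the markup and URL below are made up for illustration):

```python
from html.parser import HTMLParser

class AmpLinkFinder(HTMLParser):
    """Find the <link rel="amphtml"> tag, if any, in a page."""

    def __init__(self):
        super().__init__()
        self.amp_url = None

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag == 'link' and a.get('rel') == 'amphtml':
            self.amp_url = a.get('href')

# Illustrative page source, not from a real site.
page = '<html><head><link rel="amphtml" href="https://example.com/article/amp"></head></html>'
finder = AmpLinkFinder()
finder.feed(page)
print(finder.amp_url)  # https://example.com/article/amp
```

Any crawler or caching tier can do the same, which is exactly why AMP is open to consumers beyond Google.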

The impetus for these varied formats is the harsh reality that the web has been abused, and is flexible to the point of immolation. This is especially acute on mobile, where users are less likely to have content blockers or the ability to easily identify and penalize abusive behaviors.

Auto-play videos, redirects (back capture), abusive ads, malicious JavaScript even on reputable sites, modal dialogs (subscribe! follow us on Facebook!), content reflowing that happens dozens of times for seconds on end (often due to simple excessive complexity, but other times an intentional effort to solicit accidental ad clicks as content moves). Every site asking to send desktop notifications or access your location. Gigantic video backgrounds filling the above-the-fold header for no particular reason.

In an ideal world web properties would refrain from such tragedy of the commons behaviors, worried about offending users and on their best behavior. The prevalent usage doesn’t motivate that, however: many simply see whatever tops Hacker News or Reddit or trending on Facebook and jump in and out of content sources, each site having incredibly little stickiness. The individual benefit of good behavior for any particular site declines.

Bad behavior worsens. Users become even less a check on practices. The good emergent sites suffer, everyone sticking to a tiny selection of sites that they visit daily. It parallels the Windows software download market where once we freely adopted whatever was new and interesting, but after pages of toolbars and daemons and malware many just install the basics and take no risks, new entrants finding no market for their efforts.

AMP (and the other options) is the natural outcome of the wild web. It represents padded walls that constrain bad behavior, giving the content priority. It isn’t appropriate for rich web apps, or even marginally interactive pieces like my bit on floating point numbers, but for the vast majority of media it is a suitable compromise: the power of HTML constrained to yield a speedily rendering, low-resource-utilization solution. Most AMP pages render extraordinarily quickly, with absolutely minimal CPU and network usage. Yes, sites could just optimize their content without being forced to, but somehow we’ve been moving in exactly the opposite direction for years. A simple cooperative effort will never be fruitful.

Google thus far has stated that they do not prioritize AMP content in search results, and given how fervently the SEO industry watches their rankings, any such preference would be hard to hide. They do, however, have a news carousel for trending topics (e.g. “news”), and most if not all of the entries in that carousel are AMP pages on mobile.

The news carousel has merited significant criticism. For instance a given carousel has a selection of items on a trending topic (e.g. Trump), and swiping within one of the articles brings you to the next in the carousel. As a user, this is fantastic. As a publisher, it’s an attack on non-consumption, giving users a trustworthy, speedy mechanism for consuming their content (and seeing their ads, their branding, etc).

Other criticism is more subtle. For instance all AMP pages load a script at https://cdn.ampproject.org/v0.js, which of course is served up by a surrogate of Google. This script has private caching and is mandated by the spec, and is utilized for metrics/tracking purposes. Ideally this script would be long cached, but currently it is held for just 50 minutes. If a browser were leveraging AMP it could theoretically keep a local copy for all AMP content, disregarding the caching rules.

And most criticisms are just entirely baseless. Take the claim that it renders content homogeneous and brand-less: each site can drop in a header with a link to their site, just as they always could; The Register does exactly that in the initially linked piece, with a logo and link to the homepage. And then there’s simple user confusion, like the blogger who claimed that Google was “stealing” his traffic after he enabled AMP and discovered that yes, AMP implies allowing caching.

Be Charitable

The root of most toxicity on online conversation boards is a lack of charity: assuming that everyone who disagrees or does something different is an idiot, is malicious, has ill intent, or is part of a conspiracy. I could broaden that out and say that the root of most toxicity throughout humanity comes from the same source. If people realized that others just made mistakes when they made a dumb move on the roadway — the foibles of humanity — instead of taking it as a personal affront that must be righted, we’d all have less stressful lives.

This applies to what businesses do as well. We can watch moves like AMP and see only the negatives, and only malicious, ad-serving intentions, or we can see possible positive motives that could potentially benefit the web. Google has developed AMP openly and clearly, and has been responsive to criticism, and the current result is something that many users, I suspect, strongly benefit from.

I’d take this even further and say that the model should be carried to an “HTML Lite” that rolls back the enormous flexibility of HTML5 to a content-serving subset, much like AMP but geared for the desktop or other rich clients. If we could browse in HTML Lite on the majority of sites, enabling richness only for those few that make a credible case, it would be fantastic for the web at large.

Small Optimizations

A couple of months ago I posted an entry titled Rust Faster Than C? Not So Fast. It was a simple analysis of the k-nucleotide benchmark game, my interest piqued by Rust taking the lead over C.

After a brief period of profiling I proposed some very simple changes to the C entrant, or rather to the hash code that it leverages (klib) that would allow C to retake the lead. It was pursued purely in a lighthearted fashion, and certainly wasn’t a dismissal of Rust, but instead was just a curiosity given how simple that particular benchmark was (versus other benchmarks like the regex benchmark where Rust’s excellent regular expression library annihilates the Boost regex implementation, which itself eviscerates the stdlib regex implementation in most environments).

The changes were simple, and I take absolutely no credit for the combined work of others, but given that a number of people have queried why I didn’t publish it in a normal fashion, here it is.

Original Performance
3.9s clock time. 11.0s user time.

[on the same hardware Rust yields 3.6s clock time, 9.8s of user time, taking the lead]

Remove a single-line pre-check that, while ostensibly a speed-up, was actually a significant net negative.

Updated Performance
3.3s clock time. 9.55s user time.

Such a trivial change, removing what was assumed to be a performance improvement, yielded a 15% performance improvement. This particular change has zero negative impact, so I have submitted a pull request.

Switch the flags from bit-packing into 32-bit integers to fixed-position byte storage

Updated Performance
2.9s clock time. 7.4s user time.

As mentioned in the prior post, the implementation used a significant amount of bit packing, necessitating significant bit-shifting. By moving to byte storage it remains incredibly efficient for a hash table, but significantly reduces the shifting overhead on high-usage paths.
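To illustrate the data-layout change (this is a Python sketch of the idea, not klib’s actual C code, and the function names are mine): two ways to store a 2-bit flag per hash-table bucket.

```python
def packed_get(words, i):
    """Bit-packed: sixteen 2-bit flags per 32-bit word; every access shifts."""
    return (words[i >> 4] >> ((i & 15) * 2)) & 3

def packed_set(words, i, v):
    shift = (i & 15) * 2
    words[i >> 4] = (words[i >> 4] & ~(3 << shift)) | (v << shift)

def byte_get(flags, i):
    """Fixed-position byte storage: one flag per byte, no shifting at all."""
    return flags[i]

def byte_set(flags, i, v):
    flags[i] = v

words = [0] * 4          # 64 bucket flags, bit-packed
flags = bytearray(64)    # 64 bucket flags, one byte each
packed_set(words, 37, 2)
byte_set(flags, 37, 2)
assert packed_get(words, 37) == byte_get(flags, 37) == 2
```

The byte form costs four times the memory for the flags, but the flags are tiny relative to the table, and the hot path loses all of its mask-and-shift work.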

If the first pull request is accepted, C will take the lead again. Does it matter? Not at all. But it’s the fun of the spirit of the game.

Regardless, as I finally start contributing to public projects it was an opportunity to share the notion of profiling and the high impact of incredibly simple changes.

Sleepless Nights for Software Developers

A recent Ask HN raised the question “What do you use Machine Learning for?“, and the top answer, by ashark, is golden-

I use it as something to worry about not knowing how to use, and how that might make me unemployable in a few years, while also having no obvious need for it at all, and therefore no easy avenue towards learning it, especially since it requires math skills which have completely rusted over because I also never need those, so I’d have to start like 2 levels down the ladder to work my way back up to it.

I’ve found it very effective in the role of “source of general, constant, low-level anxiety”.

This is an accurate assessment for most of us. We anxiously watch emerging technologies, patterns and practices, trying to decide where to focus some of our limited time. Worried about missing something and finding ourselves lost in the past with a set of obsolete skills. So we endlessly watch the horizon, trying to separate the mirages from the actual, deciding what to dive into.

I recently started casually diving into TensorFlow / machine learning. TensorFlow is the machine learning toolset released by Google, and represents the edge of the current hype field (ousting the industry trying to fit all problems into blockchain-shaped solutions, which itself relegated nosql to the trough of disillusionment). It’s just layers and layers of Things I Don’t Know.

GRPC. Bazel. SciPy. Even my Python skills are somewhat rusty. Most of the maths I’m still fairly adept at, and I have a handle on CUDA and the Intel MKL. But getting TensorFlow to build on Windows is itself a remarkably painful process (at least in my experience, where having VS2017 and VS2015 on the same machine is a recipe for troubles; you can just install the binaries, but I am a fan of working with a build that allows me to dive into specific operations), and it yields such an enormous base of dependencies that it gives the feeling of operating some enormously complex piece of machinery. It was much easier to build on Ubuntu, but it still represents layers and layers of enormously complex code and systems.

It’s intimidating. It’s the constant low-level stress that we all endure.

And the truth is that the overwhelming majority of tasks in the field will never need to directly use something like TensorFlow. They might use a wrapped, pre-trained and engineered model for a specific task, but most of us will never yield payoff for an in-depth knowledge of such a tool, beyond satisfying intellectual curiosity.

But we just don’t know. So we stay awake at night browsing through the tech sites trying to figure out the things we don’t currently know.

EDIT: I should add the disclaimer that I’m not actually losing sleep over this. I’m actually calm as a Hindu cow regarding technology, and probably am a little too enthused about new things to learn. But it still presents an enormous quandary when my son asks, for instance, what he should use to build a service layer for a game he’s building. “Well….”

Social Anxiety and the Software Developer

A brief little diversionary piece that I hope will prove useful for someone out there, either in identifying their own situation, or in understanding it in others. This is a very selfish piece — me me me — but I hope the intent can be seen in a good light. I suspect that the software development field draws in a lot of people suffering from social anxiety.

This piece is in the spirit of talking openly and honestly about mental health, which is something that we as a community and a society don’t do enough.

A couple of months ago I endured (and caused others to endure) a high stress event. I certainly haven’t tried to strike it from memory (the internet never forgets), and in many ways a lot of positives have come from it and it has been a profound period of personal growth since.

One positive is that I finally faced a lifelong burden of social anxiety, both pharmacologically and behaviorally, a big part being simply realizing that it was a significant problem. I know from emails responding to my previous mention of enduring this that it struck some readers as perplexing: I’ve worked in executive, lead, and senior positions at a number of organizations. I have a domain under my own name and put myself out there all the time1. I’m seemingly very self-confident, if not approaching arrogance at times.

That isn’t just a facade: I am very confident in my ability to face an intellectual or technical challenge and defeat it. In the right situation I am forceful with my perspective (not because it’s an opinion strongly held, but because I think it’s right, though I will effortlessly abandon it when convinced otherwise).

Confidence isn’t a solution to social anxiety, however. It’s possible if not probable for them to live in excess alongside each other. In many ways I think a bloated ego is a prerequisite.

Many choices — as trivial as walking the dog — were made under the umbrella of avoiding interactions. Jobs were avoided if they had a multi-step recruitment process. Investments were shunned if they weren’t a singular solution to everything, and even then I would avoid the interactions necessary to get to a resolution.

I succeeded in career and personally entirely in spite of these handicaps, purely on the back of lucking into a skillset at a perfect time in history. I am utterly convinced that at any other time in history this would have been devastating to any success. Be good at something and people overlook a lot.

And it was normalized. One of the things about this reflective period is that suddenly many of the people who I know and love realized “Hey, that was pretty strange…” It seemed like a quirk or like being shy (which we often treat as a desirable trait), but in reality it was debilitating, and had been from my formative years.

There are treatments for it. I’m two months into this new perspective and I can say that the results are overwhelming. I will never be a gregarious extrovert, but life is so much less stressful just living without dreading encountering a neighbour, or getting a phone call, etc.

1 – The online existence is almost abstract to me, and I’ve always kept it that way. I have always dreaded people who I know in “real life” visiting this blog (sometimes family or coworkers have mentioned a piece and it has made me go silent for months, hoping to lose their interest), reading any article I’ve written or anything written about me, etc. That is too real, and was deeply uncomfortable to me. Nonetheless there have been times I’ve realized I said something in error and a cold sweat overcomes me, changing all plans to get to a workstation and fix the error.

Floating-Point Numbers / An Infinite Number of Mathematicians Enter A Bar

tl;dr: The bits of a floating-point number have shifting exponents, with negative exponents holding the fractional portion of real numbers.

The Basics

Floating-point numbers aren’t always a precise representation of a given real or integer value. This is CS101 material, repeated on programming boards regularly: 0.1 + 0.2 != 0.3, etc. (Less known: as single-precision values, 16,777,220 == 16,777,219 and 4,000,000,000 == 4,000,000,100.)
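These surprises are easy to reproduce. A quick Python sketch, round-tripping values through a 4-byte binary32 with the standard struct module (the helper name is mine):

```python
import struct

def to_f32(x):
    """Round-trip a Python float through a 4-byte IEEE 754 binary32."""
    return struct.unpack('f', struct.pack('f', x))[0]

print(0.1 + 0.2 == 0.3)                               # False
print(to_f32(16777220.0) == to_f32(16777219.0))       # True
print(to_f32(4000000000.0) == to_f32(4000000100.0))   # True
```

Above 2^24 a binary32 can only step in even increments, so 16,777,219 rounds to 16,777,220; near four billion the step is 256, swallowing the difference of 100 entirely.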

Most of us are aware of this limitation, yet many don’t understand why, whether they missed that class or have forgotten it since. Throughout my career I’ve been surprised at the number of very skilled, intelligent peers – brilliant devs – who have treated this as a black box, or had an incorrect understanding of how these numbers function. More than one has believed they stored a fraction as a pair of integers, for instance.

So here’s a quick visual and interactive explanation of floating-point numbers, in this case the dominant IEEE 754 variants such as binary32, binary64, and a quick mention of binary16. I decided to author this as a lead-in to an upcoming entry about financial calculations, where these concerns become critical and I need to call out to it.

I’m going to start with a relevant joke (I’d give credit if I knew the origin)-

An infinite number of mathematicians walk into a bar. The first orders a beer, the second orders half a beer, the third orders a quarter of a beer, and so on. The bartender pours two beers and says “know your limits.”

In the traditional unsigned integer that we all know and love, each successively more significant bit is worth double the amount of the one before (powers of 2) when set, going from 2^0 (1) for the least significant bit, to 2^(bitsize-1) for the most significant bit (e.g. 2^31 for a 32-bit unsigned integer). If the integer were signed the top bit would be reserved for the negative flag (which entails a discussion about two’s-complement representations that I’m not going to enter into, so I’ll veer clear of negative integers).

An 8-bit integer might look like this (on the original page this is an interactive demo, where the bits can be toggled for no particular reason beyond keeping the attention of readers)-
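The interactive demo doesn’t translate to text, but the bit weights it illustrates are easy to sketch in Python:

```python
# The weight of each bit in an 8-bit unsigned integer, LSB first.
weights = [2**i for i in range(8)]    # [1, 2, 4, 8, 16, 32, 64, 128]
assert sum(weights) == 255            # all eight bits set -> max value
assert 0b10110010 == 128 + 32 + 16 + 2   # a set bit contributes its weight
```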

Why stop at 2^0 (1) for the LSB? What if we used 16 bits and reserved the bottom 8 bits for negative exponents? For those who remember basic math, a negative exponent n^(-x) is equal to 1/n^x, e.g. 2^(-3) = 1/2^3 = 1/8.

Behold, a binary fixed-point number (another interactive demo on the original page: clicking some of the negative exponents yields a real number, and a triangle demarcates the whole and fractional exponents). In this case the number has a precision of 1/256, and a max magnitude of 1/256th under 256.
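A minimal sketch of that 8.8 fixed-point format in Python (the function names are mine, not from any library):

```python
# 8.8 fixed point: 8 whole bits and 8 fractional bits in a 16-bit integer.
# Precision is 1/256; the max value is 255 + 255/256 (1/256 under 256).
def to_fixed(x):
    return round(x * 256) & 0xFFFF   # scale by 2^8, truncate to 16 bits

def from_fixed(raw):
    return raw / 256

assert from_fixed(to_fixed(3.125)) == 3.125   # 11.001 in binary: exact
assert from_fixed(1) == 1 / 256               # the smallest representable step
```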

We’re halfway there.

A floating-point number has, as the name states, a floating point. This entails storing a separate value detailing the shift of the exponents (thus defining where the point lies — the separation between the exponent 0 and negative exponents, if any).

Before we get into that, one basic fact about floating-point numbers: they have an implicit leading binary 1. If a floating-point value had only 3 value/fraction bits and they were set to 000, the actual significand would be 1000 in binary: the implicit 1 followed by the three stored 0s.

To explain the structure of a floating-point number, a binary32 — aka single-precision — floating-point number has 23 mantissa bits (the actual value, sometimes called the fraction) and an implicit additional top bit of 1 as mentioned, ergo 24 bits defining the value. These are the bottom 23 bits in the value: bits 0-22.

The exponent shift of a single-precision value occupies 8 bits, and while the standard allows for that to be a signed 8-bit integer, most implementations use a biased encoding where 127 (e.g. 01111111) = 0, such that the exponent shift = value - 127 (so below is incrementally negative, above is incrementally positive). A value of 127 indicates that the [binary point/separation between exponent 0 and negative exponents] lies directly after the implicit leading 1, while values below 127 move it successively to the left (smaller magnitudes), and values above 127 move it to the right (larger magnitudes). The exponent bits sit above the mantissa, occupying bits 23-30.
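The layout is easy to verify by pulling the three fields out of the raw bits. A sketch using only the standard struct module (the helper name is mine):

```python
import struct

def decompose_f32(x):
    """Split a binary32 into (sign, biased exponent, 23 mantissa bits)."""
    (bits,) = struct.unpack('>I', struct.pack('>f', x))
    return bits >> 31, (bits >> 23) & 0xFF, bits & 0x7FFFFF

assert decompose_f32(1.0) == (0, 127, 0)    # +1.0 x 2^(127-127)
assert decompose_f32(-2.0) == (1, 128, 0)   # -1.0 x 2^(128-127)
assert decompose_f32(0.5) == (0, 126, 0)    # +1.0 x 2^(126-127)
```

Note that the mantissa is 0 in all three cases: the leading 1 of the significand is implicit, so only the fraction after it is stored.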

At the very top — the most significant bit — lies a flag indicating whether the value is negative. Unlike the two’s-complement representation seen in pure integers, with floating-point numbers flipping this single bit negates the value. This is bit 31.

“But how can a floating-point value hold 0 if the high bit of the value/mantissa/fraction is always 1?”

If all bits are set to 0 — the flag, exponent shift and the value — it represents the value 0, and if just the flag is 1 it represents -0. If the exponent shift is all 1s, this indicates either NaN or Inf, depending upon whether the fractional portion has bits set. Those are the magic numbers of floating points.
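Those magic patterns can be checked directly against the raw bits (again a Python sketch; the helper name is mine):

```python
import math
import struct

def f32_bits(x):
    """Return the raw 32-bit pattern of a binary32 value."""
    (bits,) = struct.unpack('>I', struct.pack('>f', x))
    return bits

assert f32_bits(0.0) == 0x00000000            # everything zero
assert f32_bits(-0.0) == 0x80000000           # only the sign flag set
assert f32_bits(float('inf')) == 0x7F800000   # exponent all 1s, fraction 0
# Exponent all 1s with a non-zero fraction decodes to NaN.
nan = struct.unpack('>f', struct.pack('>I', 0x7F800001))[0]
assert math.isnan(nan)
```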

Let’s look at a floating-point number, starting with one holding the integer value 65535, with no fractional portion.

With this sample you have the ability to change the exponent shift — the 8-bit shift integer of the single-precision floating point — to see the impact. Note that if you were going to use this shift in an actual single-precision value, you would need to add 127 to the value (e.g. 10 would become 137, and -10 would be 117).

The red bordered box indicates the implicit bit that isn’t actually stored in the value. In the default state again it’s notable that with a magnitude of 65535 — the integer portion occupying 15 real bits and the 1 implicit bit — the max precision is 1/256th.

If instead we stored 255, the precision jumps to 1/65536. The precision is dictated by the magnitude of the value.

To present an extreme example, what if we represented the population of the Earth-

Precision has dropped to 2^9 — 512. Only increments of 512 can be stored.
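You can confirm that step size without the visualization (the population figure is a round illustrative number, and the helper name is mine):

```python
import struct

def to_f32(x):
    """Round-trip a Python float through a 4-byte binary32."""
    return struct.unpack('>f', struct.pack('>f', x))[0]

population = 7_500_000_000.0          # illustrative, not an exact census
stored = to_f32(population)

# Small additions vanish entirely at this magnitude.
assert to_f32(population + 200) == stored

# The gap to the next representable value is 2^(32-23) = 512.
bits = struct.unpack('>I', struct.pack('>f', population))[0]
next_up = struct.unpack('>f', struct.pack('>I', bits + 1))[0]
assert next_up - stored == 512
```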

More recently the industry has seen an interest in something called half-precision floating point values, particularly in compute and neural net calculations.

Half-precision floating point values offer a very limited range, but fit in just 16-bits.

That’s the basic operation of floating-point numbers: a set of value bits whose exponent range can be shifted. Double-precision (binary64) floating points up the value/fraction storage to 53 bits (52 stored, plus the 1 implicit bit), and the exponent shift to 11 bits, offering far greater precision and/or magnitude, coming closer to the infinite number of mathematicians at representing small-scale numbers. I am not going to simulate such a number on here as it would exceed the bounds of reader screens.

Hopefully this has helped.

Basic Things You Should Know About Floating-Point Numbers

  • Single-precision floating-point numbers can precisely hold whole numbers from -16,777,215 to 16,777,215, with zero ambiguity. Many users derive from the fractional variances the wrong assumption that all numbers are approximations, but many — including fractional values that fit within the magnitude, e.g. 0.25, 0.75, 0.0042572021484375 — can be precisely stored. The key is that the number be the decimal representation of a fraction whose denominator is a power of 2, lying within the precision band of the given magnitude.
  • Double-precision floating-point numbers can precisely hold whole numbers from -9,007,199,254,740,991 to 9,007,199,254,740,991. You can very easily calculate the precision allowed for a given magnitude (e.g. if the magnitude is between 1 and 2, the precision is within 1/4503599627370496), which for the vast majority of uses is well within any reasonable bounds.
  • Every number in JavaScript is a double-precision floating point. Your counter, “const”, and other seeming integers are DPs. If you use bitwise operations, JavaScript temporarily presents the value as a 32-bit integer as a hack. Some decimal libraries layer atop this to present very inefficient, but sometimes necessary, decimal representations.
  • Decimal types can be mandated in some domains, but represent a dramatic speed compromise (courtesy of the reality that our hardware is extremely optimized for floating-point math). With some analysis of the precision for a given task, and intelligent rounding rules, double-precision is more than adequate for most purposes. There are scenarios where you can pursue a hybrid approach: in an extremely high Internal Rate of Return calculation I use SP to get to an approximate solution, and then decimal math to get an absolutely precise solution (the final, smallest leg).
  • On most modern processors double-precision calculations run at approximately half the speed of single-precision calculations (presuming that you’re using SIMD, where an AVX unit may do 8 DP calculations per cycle versus 16 SP calculations). Half-precision calculations, however, do not offer any speed advantage beyond reducing the memory throughput and footprint necessary; the instructions to pack and unpack binary16 are a relatively new addition.
  • On most GPUs, double-precision calculations are dramatically slower than single-precision calculations. While most processors have floating-point units that perform single-precision calculations on double-precision hardware or greater, most offering SIMD to do many calculations at once, GPUs were built for single-precision calculations, and use entirely different hardware for double-precision calculations — hardware that is often in short supply (most GPUs offer 1/24 to 1/32 the number of DP units). On the flip side, most GPUs use SIMD on single-precision hardware to do multiple half-precision calculations, offering the best performance of all.
  • Some very new compute-focused devices offer spectacular DP performance. The GP100 from Nvidia offers 5 TFLOPS of DP calculations, about 10 TFLOPS of SP calculations, and 20 TFLOPS of half-precision calculations. These are incredible new heights.
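A few of the claims above are easy to check from Python, whose floats are the same binary64 as JavaScript numbers:

```python
# 2^53 is the edge of exact whole-number storage in a binary64.
assert 2.0**53 == 9007199254740992
assert 2.0**53 + 1 == 2.0**53        # the odd neighbour is unrepresentable
assert 2.0**53 + 2 != 2.0**53        # but steps of 2 survive

# Power-of-two fractions are exact; 0.1 is not.
assert 0.25 + 0.75 == 1.0
assert 0.1 + 0.2 != 0.3
```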