Yet More Bits and Bytes

Again I must offer apologies for the lack of real content. It has been an exciting, busy period. Many of the pieces I’ve started over the past few months still sit in a draft state.

As one change, I recently updated my work laptop to a Lenovo Yoga 720. I don’t normally post about random hardware, and certainly have never mentioned the procession of laptops that preceded this one, but I simply love this laptop. I’ve always viewed my laptops as a poor alternative to my desktop for when the situation absolutely demands, but this is the first laptop that I actually want to use (and the first that offers a battery life remotely approximating its claims). Outrageously fast 1TB SSD (1.5GB/s at times, and of course ridiculous random access times), an i7, a gorgeous 4K screen, and work-day-long battery life. Most importantly, given a lot of recent work I’ve been engaged in, it has a GTX 1050, allowing me to pound out some convolutional neural networks on it. 2GB of GPU RAM limits the scope of the network, but still offers extraordinary opportunities to hash out a lot of solutions that I can then move to the less portable Titan V. I don’t even use the convertible or pen functionality, and very seldom use the touch screen, though they came along for the ride.
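As a rough illustration of what that 2GB ceiling means, here is a back-of-envelope sketch of how quickly fp32 weights add up. The layer shapes are hypothetical, chosen only for illustration; in practice activations and optimizer state usually dominate, which is what really constrains batch and network size:

```python
# Rough parameter-memory estimate for a small convolutional network,
# to gauge what fits in 2 GB of GPU RAM (illustrative layer sizes only).

def conv_params(in_ch: int, out_ch: int, k: int) -> int:
    """Parameter count for a k x k convolution: weights plus biases."""
    return in_ch * out_ch * k * k + out_ch

layers = [conv_params(3, 64, 3), conv_params(64, 128, 3), conv_params(128, 256, 3)]
fc = 256 * 7 * 7 * 4096 + 4096  # a hypothetical dense layer dominates
total = sum(layers) + fc

print(f"{total * 4 / 2**20:.1f} MiB of fp32 weights")  # 4 bytes per float32
```

Weights alone fit comfortably; it's the per-batch activation memory that forces the smaller networks on a 2GB card.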

The one real weakness of the 720 is that its Thunderbolt 3 port (via the USB-C connector) only has two PCIe lanes, or about 16 Gbps of bandwidth if my recall is correct. I have contemplated putting the Titan V in an external enclosure, and tasks that are heavily bandwidth bound could be limited by this. For gaming this restriction is unlikely to be relevant, but it could come into play for workloads that need to constantly move working data to and from the GPU. I do plan on benchmarking it at some point, as having the GPU external to my desktop is ideal for both flexibility and heat dissipation.
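A quick back-of-envelope calculation shows where that "about 16 Gbps" comes from and what it costs; the 256 MB working set is an arbitrary example, not a measured workload:

```python
# Compare Thunderbolt 3 (PCIe 3.0 x2) against a desktop PCIe 3.0 x16 slot
# for shuttling a working set to an external GPU.

# PCIe 3.0: 8 GT/s per lane with 128b/130b encoding ~= 7.88 Gbps usable.
PCIE3_GBPS_PER_LANE = 8 * (128 / 130)

def transfer_ms(megabytes: float, lanes: int) -> float:
    """Milliseconds to move `megabytes` over `lanes` PCIe 3.0 lanes."""
    gbps = PCIE3_GBPS_PER_LANE * lanes
    gigabits = megabytes * 8 / 1000  # MB -> gigabits (decimal units)
    return gigabits / gbps * 1000

batch_mb = 256  # hypothetical per-step working set
print(f"x2  (Thunderbolt 3): {transfer_ms(batch_mb, 2):6.1f} ms")
print(f"x16 (desktop slot):  {transfer_ms(batch_mb, 16):6.1f} ms")
```

Two lanes come out to roughly 15.75 Gbps, matching the ~16 Gbps figure; a workload that keeps its data resident on the GPU barely notices, while one that streams data every step eats an 8x transfer penalty.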

On that topic, the Titan V is simply outrageously powerful. Whether double, single, or half precision, it is incredible for classic networks and scientific or financial uses, but add in the Tensor cores and it reaches extraordinary heights. An incredible processing value.

To continue this diversion, it is astonishing how thoroughly AMD has squandered its opportunities in the deep learning and even the scientific community. OpenCL is an afterthought, and everything, it seems, is built only for CUDA. They tried a last-minute Hail Mary with HIP, but clearly gave it too few resources to make it credible. It will be interesting to see how they try to recover (competition would be good for everyone), but as it stands it’s an NVIDIA world.

Other than that, I solemnly promise to get a couple of technical pieces published shortly. Swearsies. And they’ll be great.

Bits and Bytes

Professional obligations have delayed some technical pieces, but for now I just wanted to pound out a few quick thoughts while the coffee brews in the French press (ooh la la).

On Tablets

Five years ago I made my first Apple purchase ever, adding a 3rd generation iPad to the household.

It has been a remarkable value. The pad has served the family well over many thousands of hours of service and over a thousand full charges, and it still lasts for hours. The screen still looks great (something that wouldn’t necessarily be true were it OLED-based; I love OLED, but it does have downsides for long-term heavy usage). From a cost-versus-usage perspective it is quite possibly the most effective purchase I’ve ever made, with the Nexus 7 2013 just slightly behind.

The device is no longer supported in recent iOS updates, and is showing its age on the speed front.

It was finally replaced, or at least augmented, with a new model.

The iPad is without peers, and what once was a competitive space has been winnowed down to junk, extremely low-end Android devices at the bottom; the iPad in the value space (that original 3rd gen was $620 CAD; the 7th gen replacement is $399 CAD); and Samsung devices alongside top-tier iPads and Surfaces at the high end.

If there’s a question about what tablet to buy, the answer is iPad 9 times out of 10. And even with our smartphones getting larger, having a pad to fall back to when convenient not only gives you a more generous, readable screen, it makes battery management so much better. Not just retaining a charge, but avoiding lifetime battery exhaustion of the sort that has been making the news.

A pad remains a great complement to a smartphone and a desktop.

Since that original Apple purchase this household has added a lot of Apple to the mix, in every product line. My workstation is an Ubuntu/Windows box alongside a Mac mini (a product that needs a refresh, though moving to an external USB 3 SSD is elegantly simple). My main smartphone is a GS8, but my fallback / dev device is an iPhone 8. There are great options in all spheres.

The Nexus 7 2013 is still running well (it is extraordinarily unfortunate that Google abandoned the line, jealous of all the lucre Apple was pulling in, and then failed to capture it with the miserable, grossly overpriced Pixel lines). Both devices are aged and out of the OS support window, but the Nexus 7 has me much more hesitant to trust it among my children. That hesitation was validated when I put it away one night and found an R-rated ad layered over the screen, using draw-on-top permissions (despite the app not having that permission; it used an unpatched exploit), kicked off by a Minecraft clone my son had installed. In a rational world this developer would be punted from the platform for eternity, but in the Google world they just need to pay $25 and can be back on the platform minutes later under another fake identity.

If Google Play is to go without curation (a choice that has many merits to go with the downsides), Google should at least offer optional validation of developers/publishers (whether individuals or corporations) and allow users to limit installs to those that are verified. Make the threat of being kicked off the platform something that actually scares those who bundle shady advertising libraries from exploitative, garbage companies.

The Dirty Blockchain

I was forwarded this Hacker News submission by someone seeking my opinion, given my recent comments on blockchain-related technologies (I have become involved in some extremely high-performance/low-latency initiatives).

I try to avoid negative commentary, and negativity in general, but that submission struck me as curious. Enumerating every grievance of an entire industry and then assigning it to a single implementation/opportunity is not a productive approach.

The overarching blockchain concept, which encompasses Merkle trees, shared and cross-signed ledgers, data transparency, and so on, and does not necessitate proof-of-work, spans an enormous number of technologies, uses, and implementations. It isn’t a magic cure-all (or even a cure-many; misused and abused, it may actually inflict harm), and it is over-hyped, but it isn’t productive to discredit everything tangentially related. Buying into the anti-hype can be as misleading as buying the hype.
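The Merkle tree piece of that concept is small enough to sketch directly. This is a minimal illustration using the common pairwise-SHA-256 construction (duplicating the last node on odd-sized levels, as Bitcoin does), not the code of any particular platform:

```python
import hashlib

def merkle_root(leaves: list[bytes]) -> bytes:
    """Compute a Merkle root by repeated pairwise SHA-256 hashing.

    Odd-sized levels duplicate their last node, the convention
    Bitcoin's block Merkle trees use.
    """
    level = [hashlib.sha256(leaf).digest() for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])  # duplicate last node on odd levels
        level = [hashlib.sha256(level[i] + level[i + 1]).digest()
                 for i in range(0, len(level), 2)]
    return level[0]

root = merkle_root([b"tx1", b"tx2", b"tx3"])
print(root.hex())
```

The value of the structure is that any single leaf can be proven to belong to the root with only a logarithmic number of sibling hashes, which is what makes shared, verifiable ledgers practical.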

For years people have linked to some old pieces here for anti-NoSQL firepower. I never wanted to champion that cause (the cause of being anti-something, or of being defined by negativity). As with the blockchain, often people were trying to counter the hype, but in doing so perilously veered to an anti-position that was sometimes just as coarse.

It’s a very nuanced industry.

I Was Wrong on Intel

A few years ago I exclaimed that we shouldn’t count Intel out of mobile. I like to revisit those old statements and own my mistakes, and in that case I was completely wrong. (I also thought HD-DVD would beat Blu-ray; in reality, streaming beat both.)

At the time I viewed Intel as a remarkably capable company that was most afraid of competing with itself, but one that would stay at the edge of relevance in the space. The compete-with-themselves bit remains true, but they are completely irrelevant in mobile now.

In the void, mobile chipmakers have made enormous advances, and are now encroaching on Intel’s cash cow.

The notion of Apple moving all of their devices to variants of their ARM processors is a very real possibility now: there remains a performance differential, but scale the chip up to the power profile of a desktop or a laptop, copy/pasting cores as necessary, and enormous performance is possible. The single-core performance is already there, which is simply extraordinary given the traditional strengths of each variant, and the classic design philosophies guiding each architecture.

Intel’s recent missteps have been surprising; they seem to be stumbling at everything they do. It has been a lost half decade, and they don’t seem to be recovering.

Kotlin and Swift

One of the reasons I picked up Kotlin on the Android (and general Java) side was that as a language it shares a lot of parallels with Swift, which I’d been embracing at the same time.

After several projects and many tens of thousands of lines of code, I just wanted to reaffirm that Kotlin is fantastic. It carries some warts from its JVM-targeting foundation, but if you’re doing Android and you haven’t embraced Kotlin, you’re missing out.

On Editing Old Posts

I periodically edit existing posts to make them more concise and efficient for readers. Often this entails removing adverbs (it is remarkable how seldom the word “very” adds to a statement) and parentheticals. I have a bad habit of trying to cover possible counterpoints when I make a statement, yielding unnecessarily wordy entries.

It’s worth making the time a little more beneficial for readers. One of the reasons I didn’t make use of podcasts early on was that most were improvised, with a lot of umms and casual banter, offering little real content density to listeners beyond entertainment (which is now the case with many YouTube channels). Lately I’ve found a lot of fantastic podcasts that clearly involve considerable preparation and research, providing a high-value, high-content audio experience.

Things That I’ve Enjoyed Lately

Admission Requirements – wonderful book of poetry by Phoebe Wang

PlayerUnknown’s Battlegrounds – current diversionary game to quickly hop into for a short round of high-intensity conflict. Most rounds are 98% collecting things, 2% getting shot at from sources unknown. Lots of bugs, lots of hackers (especially in third-person perspective; stick to FPP), but the exhilarating game experience makes it worth a try

Cohen Live – an album that I return to every couple of years, wondering how I ever put it aside

The Problems of Philosophy – Enjoyable read by Bertrand Russell. The whole field of philosophy has fascinated me lately, and I also got great value out of the Kindle edition of The Philosophy Book (Big Ideas Simply Explained)

Virtually any Nordic / Euro show added to Netflix – Have loved so many of these.

On Specializing AKA Databases Are Easy Now

After hanging my shingle in the independent consulting business (trying to pay the bills as financing for entrepreneurial efforts[1]), one of my original specialties was high-performance database systems.

I pitched my expertise primarily in large-scale, high-demand database systems, providing solutions that would dramatically improve operational performance and relieve pain points. As a facet of my network of contacts, this was usually in the financial industry.

It made for relatively easy work: many groups don’t understand how to properly leverage the available database products, don’t understand their own data needs, or haven’t tailored their implementation to their query profile, so I could often promise at least a 50x performance improvement for some particular pain point, on existing hardware, and achieve that and more with ease. Everyone happy, achievement unlocked, etc.

I started backing away from the database world, however, because many database products have dramatically improved, reducing the ability to shoot one’s own feet, and, more significantly, because hardware has improved exponentially.

From massive memory, caching the entirety of all but the most enormous databases, to flash storage (and now Optane storage) that makes the most inefficient of access patterns still effectively instant. Many have gone from a big, expensive RAID array that could barely manage 1,000 IOPS, on a platform with maybe 32GB of RAM, to monster servers with many millions of IOPS and sometimes terabytes of very high-speed memory. Even the bottom-feeder single-instance AWS services are usually adequately equipped.

It has gotten hard to build a poorly performing database system, at least at the scale that many solutions operate at. Expert implementations can still yield significant speedups, but to many potential customers a 100ms query is effectively identical to a <1ms query, with low enough loading that hardware saturation isn’t a concern (or the hardware so inexpensive that a cluster of ten huge machines is fine, despite being significant overkill were it properly implemented). The most trivial caching and even a modicum of basic knowledge (e.g. rudimentary indexes) are usually enough to achieve workable results.
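The "rudimentary indexes" point is easy to demonstrate. This toy example (hypothetical table and data, SQLite standing in for any engine) shows the kind of speedup a single obvious index buys over a full table scan:

```python
import sqlite3
import time

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE trades (id INTEGER PRIMARY KEY, symbol TEXT, qty INTEGER)")
conn.executemany("INSERT INTO trades (symbol, qty) VALUES (?, ?)",
                 ((f"SYM{i % 500}", i) for i in range(200_000)))

def timed(query: str):
    """Run a query and return (rows, elapsed seconds)."""
    start = time.perf_counter()
    rows = conn.execute(query).fetchall()
    return rows, time.perf_counter() - start

q = "SELECT COUNT(*) FROM trades WHERE symbol = 'SYM42'"
rows_before, t_before = timed(q)  # full table scan over 200k rows
conn.execute("CREATE INDEX idx_symbol ON trades (symbol)")
rows_after, t_after = timed(q)    # B-tree index lookup

print(f"scan: {t_before*1000:.2f} ms, indexed: {t_after*1000:.2f} ms")
```

Same result either way; but on modern hardware even the unindexed scan finishes in tens of milliseconds, which is exactly why "egregious inefficiency is operable" for so many shops.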

Egregious inefficiency is operable. That is a great thing in many cases, and it parallels our industry-wide move to less efficient runtimes and platforms. There are still a few edge situations demanding extreme efficiency, but, as with assembly programming, there just isn’t enough demand to make it a focus for many.

The possible market for database optimization just isn’t as large or as lucrative. And when things are awry on that millions-of-IOPS system with 2TB of RAM and 96 cores, it’s usually so catastrophically, fundamentally broken that it isn’t a fun pursuit, typically necessitating a significant rebuild that no one wants to hear as the answer.

Very large or specialty shops still have high performance database needs, but naturally those orgs have specialized internal staff.

Being an employee in a very narrow niche doesn’t interest me.

For the past half year I’ve moved primarily to cryptocurrencies and their related technologies (e.g. public blockchain implementations and standards, as well as payment systems). I’m not chasing a “whale” (or having anything to do with the enormous bubble of some currencies), nor hoping to ride a bubble to great riches: I’ve traded $0 in fiat currency for cryptocurrency (any that I have, I’ve gotten through other mechanisms), and I see the current market as incredibly dangerous and vulnerable to incredible swings on the slightest push. Rather, I’m becoming completely knowledgeable about every emergent standard, and about the currently dominant platforms and their source code, and offering that knowledge to individuals and firms looking to integrate, to leverage, or to bet. It has been rewarding. My specialization has been in high-scale, extremely high-performance, low-latency solutions.

All the while I continue with the entrepreneurial efforts. My latest is with the iOS ARKit, which has been a fun experience. I should release something on that soon. I also revived a video stabilization project after spending some time on the machine learning side of that field.

Otherwise I continue with fun projects. I still have a contour mapping application on the go. It started as a real estate sales project, but that went so quickly to great success that it became simply an interesting vehicle to explore Kotlin and Undertow, to target iOS and Android, and then to learn about pressure propagation, WebGL mapping, millisecond time synchronization (I use four current smartphones in concert), and so on. I have a mostly complete posting on that, long promised, that I’ll finish soon.
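On the millisecond time synchronization front, the core idea can be sketched with the classic NTP-style offset estimate from a single request/response exchange. This is a generic illustration of the technique, not the actual implementation used across the four phones:

```python
# NTP-style clock offset estimation from one round trip:
#   t0 = client send time, t1 = server receive time (server clock),
#   t2 = server send time (server clock), t3 = client receive time.
# Assumes the network delay is roughly symmetric in each direction.

def clock_offset_ms(t0: float, t1: float, t2: float, t3: float) -> float:
    """Offset of the server clock relative to the client (the NTP formula)."""
    return ((t1 - t0) + (t2 - t3)) / 2

def round_trip_ms(t0: float, t1: float, t2: float, t3: float) -> float:
    """Round-trip delay, excluding server processing time."""
    return (t3 - t0) - (t2 - t1)

# Example: server clock runs 5 ms ahead; 10 ms one-way delay each direction.
t0, t1, t2, t3 = 0.0, 15.0, 16.0, 21.0
print(clock_offset_ms(t0, t1, t2, t3))  # -> 5.0
print(round_trip_ms(t0, t1, t2, t3))    # -> 20.0
```

Repeating the exchange and keeping the sample with the smallest round trip is the usual way to push the offset estimate down toward millisecond accuracy over consumer Wi-Fi.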

[1] – I hit some financial roadblocks, leading to a crisis, when I put aside all revenue production to try to focus entirely on entrepreneurial efforts that seemed, for so long, on the absolute cusp of success. Lessons learned.

Glorious Fall In the Northern Hemisphere

Everything is spectacular right now, and I’m hard at work on some exciting projects. I hope to finish up a barometric land topography piece (an enjoyable side project that I mostly used to get acclimated with Kotlin, and then decided to use as a vehicle to play around with the iPhone 8 and re-acclimate with Xcode and Swift) within the week, and it should make for some interesting content. Beyond that, I hope to be very prolific in my creations over the next while, having the time and attention to make the most of opportunities.

Thank you to all who have queried on the quiet — it will be worth the wait.

Embrace AMP or AMP Wins

I’ve written about AMP (Accelerated Mobile Pages) on here a few times. To recap, it’s a standard of sorts and a coupled set of code.

If you publish a page through AMP, you are limited to a very narrow set of HTML traits and behaviors, and to a limited, curated set of JavaScript providing basic functionality, ad networks, video hosting, and metrics, with the scripts hosted by the Google-owned and -operated AMP CDN. You also implicitly allow intermediaries to cache your content.

If you search Google using a mobile device, links with a little ⚡ icon are AMP links that will be loaded from the Google cache, and by rule (which is verified and enforced) live within the limitations of AMP. You can’t claim AMP conformance and then resort to traditional misbehavior.

The news carousel is populated via AMP links.

Many publishers have gotten on the AMP bandwagon. Even niche blogs have exposed AMP content via a simple plug-in.

AMP is winning.

But it has significant deficiencies, for which it has earned a large number of detractors. There are technical, privacy and web centralization issues that remain critical faults in the initiative.

Anti-AMP advocacy has reached a fever pitch. And that negative advocacy is accomplishing exactly nothing. It is founded in a denial that provides a clear road for AMP to achieve world domination.

Because in the end it is a better user experience. Being on a mobile device and seeing the ⚡ icon is an immediate assurance that not only will the page load almost instantly, it won’t have delay-load modal overlays (Subscribe! Like us on Facebook!), it won’t throw you into redirect hell, and it won’t have device-choking scripts doing spurious things.

Publishers might be embracing a Pyrrhic victory that undoes them in the end, but right now AMP makes the web a more trustworthy, accessible thing for users. It is a better experience, and it helps the web avoid becoming a tragedy of the commons, where short-sighted publishers desperate for a short-term metric create such a miserable experience that users stay within gated communities like Facebook or Apple News.

We could do better, but right now everyone has exactly the wrong approach in confronting AMP.

“We don’t need AMP: We have the powerful open web, and publishers can make their pages as quick loading and user-friendly as AMP…”

This is a losing, boorish argument that recurs in every anti-AMP piece. It is akin to saying that the EPA isn’t necessary because industry just needs to be clean instead. But they won’t. AMP isn’t an assurance for the publisher, it’s an assurance to the user.

AMP isn’t a weak, feel-good certification. To publish via AMP you allow caching because that cache host validates and forcefully guarantees to users that your content lives within the confines of AMP. You can’t bait and switch. You can’t agree to the standard and then do just this one small thing. That is the power of AMP. Simply saying “can’t we all just do it voluntarily” misses the point that there are many bad actors who want to ruin the web for all of us.

But the argument that, as a subset, AMP therefore isn’t needed (missing the point entirely) is self-defeating, because it short-circuits any ability to talk about the need that AMP addresses, and about how to make a more palatable, truly open, and truly beneficial solution.

We need an HTML Lite. Or HTMLite, to be more slogan-y.

The web is remarkably powerful. Too powerful.

We have been hoisted by our own petard, as the saying goes.

With each powerful new innovation in web technologies we enable those bad actors among us who degrade the experience for millions. For classic textual content of the sort that we all consume in volumes, it is destructive to its own long term health. Many parties (including large players like Apple and Facebook) have introduced alternatives that circumvent the web almost entirely.

Imagine a subset of HTML and scripting, and a request header passed by the browser that demands HTMLite content, with the browser and caching agents enforcing those limits on publishers (rejecting the content wholesale if it breaks the rules).
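The enforcement side of that idea is simple to sketch: a validator that rejects a document outright if it uses anything outside the profile. The tag whitelist here is entirely hypothetical (a real HTMLite would be defined by a spec, as AMP's restrictions are), and a real validator would also police attributes and scripts:

```python
from html.parser import HTMLParser

# Hypothetical whitelist for an "HTMLite" profile; illustrative only.
ALLOWED_TAGS = {"html", "head", "body", "title", "h1", "h2", "h3",
                "p", "a", "ul", "ol", "li", "em", "strong", "img",
                "blockquote"}

class HTMLiteValidator(HTMLParser):
    """Collects every start tag that falls outside the allowed subset."""

    def __init__(self):
        super().__init__()
        self.violations = []

    def handle_starttag(self, tag, attrs):
        if tag not in ALLOWED_TAGS:
            self.violations.append(tag)

def is_htmlite(document: str) -> bool:
    """Reject the document wholesale if any disallowed tag appears."""
    validator = HTMLiteValidator()
    validator.feed(document)
    return not validator.violations

print(is_htmlite("<p>Hello <em>world</em></p>"))       # -> True
print(is_htmlite("<p>Hi<script>evil()</script></p>"))  # -> False
```

The all-or-nothing rejection is the point: as with AMP, a publisher can't claim conformance and then sneak in just one misbehaving script.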

We need to embrace the theory of AMP while rejecting the centralized control and monitoring that it entails.

This isn’t simply NoScript or other hacked-together solutions; it needs to be a holistic reconsideration of the basics of what we’re trying to achieve. Our web stack has become enormously powerful, from GL to SVG to audio, video, conferencing, location, and notifications, and that is just gross overkill for what we primarily use it for. We need to fork this vision before it becomes a graffiti-coated ghetto where only the brave tread, the userbase corralled off into glittery alternatives.