Joel the Troll?
Joel Spolsky, the well-known blogger and ISV owner, kicked up quite a storm recently with his piece entitled Language Wars.
The article leads off with some pragmatic wisdom, advising enterprise-y, low-risk type shops to use well-known and well-proven technology stacks — solid advice that’s hard to argue with — yet he then ends the piece with a comment about an in-house, next-generation, super-duper language being used to develop FogCreek’s premier product, FogBugz.
The discord was so great that most readers presumed that the Wasabi thing was a joke, or alternatively that the rest of the article was the joke (which would have been an awesome revelation). Much confusion ensued, to the point that Joel had to put up a post clarifying that he was actually serious about the Wasabi thing.
Like Sharks, only with Ruby LASERs On Their Heads!
Aside from the seeming hypocrisy, what really instantiated some JoelCritic<T> instances (via the BlogCriticFactory) were Joel’s comments about Ruby, where he seemingly indicated that it wasn’t ready for prime time.
…but for Serious Business Stuff you really must recognize that there just isn’t a lot of experience in the world building big mission critical web systems in Ruby on Rails, and I’m really not sure that you won’t hit scaling problems, or problems interfacing with some old legacy thingamabob, or problems finding programmers who can understand the code, or whatnot…
…I for one am scared of Ruby because (1) it displays a stunning antipathy towards Unicode and (2) it’s known to be slow, so if you become The Next MySpace, you’ll be buying 5 times as many boxes as the .NET guy down the hall.
I’m sure Joel anticipated the backlash. Perhaps it was even the motivation behind the posting: The resulting torrent of discussion brought quite a few visitors to his blog, and earned him a lot of inbound links, both of which have definitely helped with his new business ventures. No publicity is bad publicity, they say, especially if it’s timed to coincide with the launch of a new job board.
Ruby is still new enough, and with a small enough community, that many of its users double as evangelists — think of the Amiga computer, the BeOS operating system, or any other contextually-superior alternative embraced by a small enough group that many feel an ego-intersection with the technology, motivated to defend and advocate it when the opportunity arises. Linux once had such an attack-dog core of rabid enthusiasts, though as the user base has grown, and it has become more pedestrian, you really have to target a Linux niche (such as a little-used distro) if you’re aiming to stir up a hornet’s nest.
That entire lead-up was just some context for the actual topic of this entry: So-called premature optimization.
On Premature Optimization
A common response to Joel’s complaint that Ruby is slow or resource inefficient is the frequently incanted declaration that such complaints are nothing but “premature optimization!”
I’ve seen the same deflection shield used to defend abhorrent database designs, convoluted, overly-abstracted class designs or message patterns, and virtually anything else where a realist might proactively ponder “but won’t performance be a problem doing it like this?”, only to yield the response “You know, premature optimization is a classic beginner’s mistake!”
If you don’t want to be lumped in with beginners, the lesson goes, it’s best to pretend that performance simply doesn’t matter. We’ll cross that bridge when we get to it.
Premature optimization is the root of all evil (or at least most of it) in programming.
I remember the early days of my software development career: I once spent about 16 work hours optimizing a date munging function, increasing its performance from something like 2 million iterations per second to 4 million iterations per second. In the grand scheme of things the performance difference was completely negligible, but from the perspective of artificial benchmarks it seemed like tremendous progress was being made.
That was premature optimization.
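A hypothetical sketch of the same trap (the function names and numbers here are illustrative assumptions, not the actual code from that job): even doubling a helper’s raw throughput barely moves end-to-end time when the helper is a sliver of the total work.

```python
import timeit

def parse_date_naive(s):
    # straightforward split-and-convert
    y, m, d = s.split("-")
    return int(y), int(m), int(d)

def parse_date_tuned(s):
    # "optimized" variant: fixed-offset slicing instead of split()
    return int(s[0:4]), int(s[5:7]), int(s[8:10])

N = 100_000
naive = timeit.timeit(lambda: parse_date_naive("2006-09-01"), number=N) / N
tuned = timeit.timeit(lambda: parse_date_tuned("2006-09-01"), number=N) / N

# Assume the helper runs 3 times per request, amid 50 ms of other work.
calls_per_request = 3
other_work_s = 0.050

request_naive = other_work_s + calls_per_request * naive
request_tuned = other_work_s + calls_per_request * tuned

# Whatever the micro-benchmark says, the request-level gain rounds to 1.0x.
print(f"request speedup: {request_naive / request_tuned:.3f}x")
```

The micro-benchmark flatters the effort; the request-level number exposes it.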
Indeed, anyone who’s done time in the software development industry can identify with what Mr. Knuth was saying, probably having been involved with (or responsible for) project plans gone awry when efforts focused on highly-complex caching infrastructures, or ultra-optimizing some seldom used edge function.
Yet what is arguable, and situation-specific, is deciding what qualifies as premature, versus what is simply proactive, predictive, professional performance prognostication.
NOT ALL PERFORMANCE CONSIDERATIONS ARE PREMATURE OPTIMIZATION!
While there is no doubt that there is such a thing as premature optimization — it is an evil distraction that sidetracks many projects — there are critical decisions made early in a project that can cripple the performance potential (both resource efficiency, and resource maximum), making later optimizations enormously expensive, if not impossible without an entire rewrite.
Whether it’s heavily normalizing the database (or its nefarious doppelgänger, the classic database-within-the-database: “This single table can handle anything! Just put a comma-separated array of serialized objects in each of the 256 varbinary(max) columns! Look at the flexibility! Query it! Don’t you bother me with your premature optimizations!”), creating an application design that’s incongruent with caching, or choosing an inefficient platform, these are decisions whose costs compound long after they’re made.
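To make the database-within-the-database jab concrete, here is a minimal sketch (using SQLite via Python, with invented table names) of why the comma-separated-column design corners you: the only way to query it is a full scan with string matching, whereas a properly normalized table can be indexed.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Antipattern: line items crammed into one comma-separated text column.
cur.execute("CREATE TABLE blob_orders (id INTEGER PRIMARY KEY, items TEXT)")
cur.execute("INSERT INTO blob_orders VALUES (1, 'widget:2,gadget:1')")
cur.execute("INSERT INTO blob_orders VALUES (2, 'gadget:5')")

# Finding orders containing 'gadget' forces a full scan with LIKE --
# no index can help, no types are enforced, no joins are possible.
rows = cur.execute(
    "SELECT id FROM blob_orders WHERE items LIKE '%gadget%'").fetchall()

# Normalized: one row per line item; the column is indexable.
cur.execute(
    "CREATE TABLE order_items (order_id INTEGER, sku TEXT, qty INTEGER)")
cur.execute("CREATE INDEX idx_sku ON order_items (sku)")
cur.executemany("INSERT INTO order_items VALUES (?, ?, ?)",
                [(1, "widget", 2), (1, "gadget", 1), (2, "gadget", 5)])
rows2 = cur.execute(
    "SELECT DISTINCT order_id FROM order_items "
    "WHERE sku = 'gadget' ORDER BY order_id").fetchall()

print(rows)   # [(1,), (2,)]
print(rows2)  # [(1,), (2,)]
```

Both queries return the same orders today; only one of them still works once the table has ten million rows, and only one can be fixed without rewriting every caller.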
There are credible performance considerations that need to be addressed at the outset, and revisited as development proceeds. It is absolute insanity, and entirely irresponsible professionally, to simply stick one’s head in the sand and hope that some magical virtual machine improvements or subcolumn indexing decomposition and querying technology will occur before deployment, or before the economics of scaling come into play.
And speaking of scaling, the canard that the horizontal scalability intrinsic to most web apps (unless you really screwed up the design — as many people do — and made horizontal scalability impossible) makes the problem a nonissue is absurd: Perhaps if your project has a high transaction value then you have the luxury of adding more servers to serve a small number of clients, yet for most real-world projects adding resources is a big, big deal. And it isn’t simply the cost of a low-end Dell 1850: Whether you’re colocating or hosting in an expensively rigged corporate server room, the cost of each server is substantial.
You end up in the dilemma that you’re financially (or physically) limited to a set quantity of resources, having to limit or scale back the functionality provided to each user due to the inefficiencies caused by early decisions. “Sorry, we can’t implement those cool AJAX type-ahead lookups because the callbacks would kill our servers – we’re already saturating them with our stack of inefficiency, so there’s no overhead left.”
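A back-of-the-envelope sketch of that type-ahead worry (every number here is an assumption for illustration, not a measurement): per-keystroke callbacks multiply request load, and simple mitigations like debouncing claw much of it back — if there’s headroom to implement them at all.

```python
# Assumed workload figures, purely illustrative.
concurrent_users = 2000
keystrokes_per_sec = 3      # typing rate in the lookup box
typing_fraction = 0.10      # share of users typing at any instant

# Naive: one server callback per keystroke.
naive_rps = concurrent_users * typing_fraction * keystrokes_per_sec

# Debounced: fire only after a pause in typing; assume this collapses
# bursts to roughly 0.5 requests/sec per actively typing user.
debounced_rps = concurrent_users * typing_fraction * 0.5

print(naive_rps, debounced_rps)  # 600.0 100.0
```

The point isn’t the specific numbers; it’s that a feature’s feasibility hinges on how much headroom earlier efficiency decisions left you.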
I think the lackadaisical attitude towards efficiency is a result of experience derived from countless unvisited or seldom-used web apps deployed across millions of PCs, colocated with equally spartanly used peers. When a site sees a dozen visitors in a day, it’s easy to declare that performance is a nonissue nowadays – that it’s only a concern for game programmers and nuclear modelling engineers. Then one day the page gets mentioned on Digg or Reddit or Slashdot or BobOnHardware, and in that potential moment of glory the app falls over and dies, again and again.
None of this really has anything to do with Ruby. Personally I haven’t used it beyond the tutorials, though I do know that it does very, very poorly on the standardized benchmarks. However it is distressing seeing so many people dismiss Joel’s comments (or comments about Python, or Erlang, or XML, or any other technology) as premature optimization.