A Blueprint-Driven Implementation
Sun’s “Pet Store” eCommerce application, released back in the early 2000s, was intended to be a blueprint of a best-practice-conforming J2EE application. Microsoft took that blueprint and created a .NET counterpart, using it to highlight purported performance advantages of .NET.
Much ink was wasted arguing the merits of the comparison, with various players optimizing and re-engineering the entrants. Eventually a more equal comparison was created by a J2EE consulting company (The Middleware Company, which has since been dissolved and folded into other operations), who published an optimized Java implementation while Microsoft did the same with the .NET variant, the two facing off in the cage. Microsoft’s platform still came up the winner, though many argued that the Middleware Company acted as a stooge for Microsoft.
There were still plenty of complaints (no one is ever satisfied with a benchmark unless their favorite wins), but it was a somewhat fairer fight.
We need to revisit that model with the currently available stack of technologies: a Pet Store in Ruby on Rails atop MongoDB, versus .NET over SQL Server, versus PHP over MySQL, and so on, all featuring the same public RESTful interface and APIs, accommodating the data needs however each sees fit.
We could then empirically measure actual efficiency and performance rather than simply taking a lot of hot air and evangelism as a surrogate, which sadly is where we are today.
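As a sketch of what “empirically measuring” could look like, a few lines of Python suffice to collect comparable latency numbers against any contender’s HTTP endpoint. The local stand-in server and the `/products/7` route here are invented for illustration; a real run would point the same loop at each stack’s identical REST API.

```python
import http.server
import statistics
import threading
import time
import urllib.request

# Stand-in for one contender's implementation: a trivial local endpoint.
# In a real comparison each stack would expose the same route.
class Handler(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        body = b'{"id": 7, "name": "Iguana"}'
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep the measurement output clean
        pass

server = http.server.ThreadingHTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()
url = f"http://127.0.0.1:{server.server_address[1]}/products/7"

# Measure wall-clock latency per request; medians and percentiles,
# not averages, are what matter under load.
samples = []
for _ in range(200):
    start = time.perf_counter()
    with urllib.request.urlopen(url) as resp:
        resp.read()
    samples.append(time.perf_counter() - start)

samples.sort()
print(f"median: {statistics.median(samples)*1000:.2f} ms, "
      f"p95: {samples[int(len(samples)*0.95)]*1000:.2f} ms")
server.shutdown()
```

The same harness, pointed at each implementation in turn on identical hardware, is the kind of apples-to-apples number the evangelism conspicuously lacks.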
PHP Is Fast? Since When?
A CEO recently exposed their ignorance to the world in an essay explaining why they don’t hire .NET programmers. To say that it was universally condemned is a fair assessment. Even amongst anti-Microsoft camps there is a lot of appreciation that .NET is actually a pretty good platform, definitely holding its own among the top contenders, and that the author was simply a misled bigot (who, I suspect, thought that such bigotry would play to the crowd. Maybe in 2005, when Microsoft was the boogeyman and everything they touched reeked of evil, but not in 2011, when they’re the underdogs).
I was pleased with the response he got, but some off-the-cuff comments surprised me: several offhandedly cited .NET’s supposedly poor performance relative to PHP, more than a few calling PHP fast.
PHP is fast? Since when? I’ve been dealing with PHP and .NET code (among others) in the stack for years, and the one adjective that I would never use for PHP is fast. With various accelerators and hacks it can be made workable with a big enough scale-out, but it is not an efficient platform. Choosing PHP as a general rule means slower page-generation times and, with growth, an ever-larger scale-out.
It still powers many of the top sites today (and enabled their growth), so clearly it has a lot going for it, but speed is not one of those things.
The problem with PHP is that it suffers from a Shlemiel the Painter inefficiency, with a “start the world” processing model that, in a reasonably sized application, does an astounding amount of work for even trivial requests as your application base grows (see SugarCRM). It’s for this reason that a “Hello World” PHP demonstration is so terribly misleading and has little applicability to real-world use.
PHP offers a platform that is difficult to optimize because of fundamental implementation decisions made early on: any include of an include can, at any point in the execution flow, change the state of the world, making it difficult to devise any strategy that shares pre-executed state among requests.
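The cost of that “start the world” lifecycle can be sketched in a few lines (Python here, since the point is the lifecycle, not PHP syntax). The hypothetical `bootstrap()` stands in for the includes-of-includes an application re-executes on every request, compared against a long-lived process that pays that cost once:

```python
import time

def bootstrap():
    """Stand-in for an application's include/configuration phase:
    parsing config, registering classes, building lookup tables."""
    registry = {}
    for i in range(100_000):  # simulate non-trivial startup work
        registry[f"module_{i}"] = i
    return registry

def handle_request(state):
    """The actual per-request work is trivial by comparison."""
    return state["module_42"]

N = 20

# CGI/PHP-style lifecycle: rebuild the world for every request.
start = time.perf_counter()
for _ in range(N):
    state = bootstrap()
    handle_request(state)
cold = time.perf_counter() - start

# Long-lived app-server lifecycle: bootstrap once, share the state.
start = time.perf_counter()
state = bootstrap()
for _ in range(N):
    handle_request(state)
warm = time.perf_counter() - start

print(f"per-request bootstrap: {cold:.3f}s, shared state: {warm:.3f}s")
```

Opcode caches remove the re-parsing cost but not the re-execution: the second, cheaper lifecycle is exactly the shared-state strategy that PHP’s mutable-world semantics make so hard to adopt.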
PHP is slow.
Of course there is Facebook’s “HipHop” initiative, a code-generation utility that takes a subset of PHP as input and emits C++ (Wasabi!), which is then compiled into native code. To do this Facebook had to follow a number of practices that limited the ability of PHP to sabotage its own performance, making its lifecycle compatible with such a transformation. The end result is not PHP, however, and it does not practically carry over to any other site.
It’s worth noting that several very large sites have used C++-authored ISAPI modules for the bulk of their processing, so it can be and has been done directly many times before.
We Need Some Model Benchmark
When I engaged in the whole NoSQL debate previously, one of the primary complaints I had about the NoSQL movement was the shocking lack of actual empirical metrics. Lots of broad claims were being made, but remarkably little was actually demonstrated: just unsupported claims about performance that didn’t pass basic skepticism.
Of course it isn’t all about per-request performance or runtime efficiency. Development efficiency is critically important, and a product like MongoDB can be a hugely efficient development target. But isn’t it better to work with the real numbers, making decisions on fact instead of emotion?
It would be ideal to agree on a common interface, functionality, and API patterns for a few model applications (an eCommerce Pet Store 5.0, a social-news Reddit counterpart, a Twitter counterpart), and then let the evangelists loose to create test-passing implementations on their platform of choice.
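A hedged sketch of what “test-passing” could mean: a tiny conformance checker that validates the shape of a hypothetical `GET /products/{id}` response, which every contender would have to satisfy regardless of stack. The endpoint and field names here are invented for illustration.

```python
import json

# Hypothetical contract for GET /products/{id}: every implementation,
# whatever the stack, must return a JSON body of this shape.
REQUIRED_FIELDS = {"id": int, "name": str, "price_cents": int, "in_stock": bool}

def conforms(body: str) -> bool:
    """Return True if a JSON response body satisfies the product contract."""
    try:
        doc = json.loads(body)
    except ValueError:
        return False
    return all(
        field in doc and isinstance(doc[field], expected)
        for field, expected in REQUIRED_FIELDS.items()
    )

# A response a conforming Pet Store implementation might return...
good = '{"id": 7, "name": "Iguana", "price_cents": 4999, "in_stock": true}'
# ...and one that fails the contract (price serialized as a string).
bad = '{"id": 7, "name": "Iguana", "price_cents": "49.99", "in_stock": true}'

print(conforms(good), conforms(bad))  # True False
```

A suite of such checks, run over the wire against each entrant, keeps the contest honest: implementations compete on performance, not on quietly diverging behavior.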
The benchmark platforms could then be evaluated for performance, for efficiency (Reddit runs a monthly cloud-server bill of something like $35,000, and still has a terribly unreliable, often unresponsive site; efficiency is *hugely* important even if throwing hardware at the problem seems cheap at the outset), and for operational resiliency. Most importantly, they could be evaluated from a security perspective, a grossly ignored aspect that has reared its head time and time again, in some cases destroying organizations because they eschewed basic security practices.
Demonstrate that your advocated solution serves up pages quickly and efficiently, on a resilient, secure platform, and win the argument.