An oft-referenced problem in the Windows world is DLL Hell (*). It occurs when many applications depend upon the code in a shared .dll (a dynamic link library, which is basically code that is linked at runtime rather than compile time), an often ideal scenario given that you can patch security faults in one single location rather than recompiling and redistributing every application that uses a statically linked library, or searching for disparate private copies scattered across volumes. Problems start to happen, however, if the dll is changed in a way that breaks some of the dependent consumers (for instance, one application installs a new version of the dll that changes its external API), causing inconsistencies or outright failures in other applications.
[* – Sidenote: The Wikipedia article linked from DLL Hell claims that the term was “introduced to the general public by Rick Anderson” in 2000. This is, of course, complete and utter nonsense – it was a very common piece of terminology many years earlier, and an MSDN article hardly introduces it to the “general public”. I come across this sort of historical revisionism on Wikipedia far too often. Is it a Wikiality? I suppose I “introduced SVG to the general public” when I wrote a “paper” for MSDN Magazine, so I should go claim my crown…]
While the problem already existed for classic-code dlls that were stored in a shared location (usually for space-saving reasons), it really became a problem with OCX/COM, where the activation architecture basically demanded that you use the shared copy.
In spirit, similar problems occur even with high-level platforms such as Apache, or even just modules like PHP, where a version change can break a lot of applications that run atop it or depend upon it, causing significant heartache and making deployment issues much more complex (particularly when you have multiple dependent applications, some of them more adaptable than others).
There have been many declarations of an “end to DLL hell!”, with Microsoft pushing various approaches and strategies, with varying success.
With .NET, the solution is generally “share nothing”, to the point that even the various versions of the .NET runtime exist as islands. A .NET 2.0 application has all of its libraries local (often version-linked, so if the same library exists in many applications but the versions differ slightly, it will be loaded and mapped separately for each), even when those components are used by dozens of applications, all running atop the .NET 2.0 framework and runtime island; a .NET 1.1 application exists in its own little world, and the same goes for a .NET 1.0 app. There still exists a classic “shared activation” model via the global assembly cache (GAC); however, it’s a little-used bit of infrastructure.
Storage space is incredibly cheap, and memory space is becoming a non-issue, so this sort of approach has a lot of merit.
Why not take it a level higher? With massively powerful servers, seemingly endless memory, and free virtual server products (from both VMware and Microsoft), we’re entering an era when it is entirely possible, and often ideal, to release your product as a complete virtual server.
Of course, I’m repeating myself now, but this idea really appeals to me.
Some time back, for instance, I was considering making a commercial, corporate timesheet-tracking web application (I’ve made some of these before. One particular one – an AJAXish DHTML solution I made back in the late 90s – I still think beats out most of what I see today). However, a hosted model wouldn’t fly with most customers given the amount of information that could be garnered from their timesheets: many customers would want to host it themselves. Yet then you face the dilemma of releasing a product that can exist within their current architecture and skillset, a particularly onerous task given the many dependencies of a modern web application.
Inevitably you’d be putting yourself out of contention for a lot of customers because you used X instead of Y, and would be endlessly fielding support issues when their platform changed faster (or slower) than your application did.
So why not release your web application (or any type of application) on an “appliance” virtual machine, as it’s now getting named? The same goes for application “consumption”: if you’re a Windows shop, instead of hosting your wiki on Windows – or, far worse, limiting your choices to the small selection of options that exist for your particular ecosystem of dependencies – perhaps you could just deploy a wiki appliance with the perfectly ideal configuration of database server, web server, host operating system, and modules.
Configure your appliance to only allow port 80 traffic in (or, better yet, work on an appliance platform where the accessible ports on each virtual machine can be configured, perhaps by a separate “firewall” virtual machine), and live in an appliance model, with whatever version of MySQL, PostgreSQL, or Apache you want, custom-configured in a way that perfectly matches your requirements.
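As a rough sketch of what that “port 80 only” policy might look like on a Linux-based appliance (assuming iptables as the firewall; the rule set here is hypothetical and would be adapted to the appliance’s actual tooling):

```shell
# Hypothetical firewall policy for a web-appliance VM:
# default-deny inbound, then punch exactly the holes we need.
iptables -P INPUT DROP                                            # drop all inbound by default
iptables -A INPUT -i lo -j ACCEPT                                 # allow loopback traffic
iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT  # allow replies to outbound connections
iptables -A INPUT -p tcp --dport 80 -j ACCEPT                     # allow HTTP in on port 80
```

The same effect could instead be achieved at the virtualization layer, with the host (or a dedicated “firewall” VM) deciding which ports are reachable on each guest.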
Virtual machines have so many advantages, not the least of which is the ability to move them between hardware with minimal hassle. Indeed, I had exactly this scenario recently, where the Team Foundation Server application tier was running on a box that was getting a little overloaded… well, it was just a virtual machine, so it was nothing more than pausing the state, moving it to another virtual server hosting box, and starting it up. This balanced the load better, and was completely transparent to the users.
There are downsides that would have to be taken into account – some shops might want a better backup solution than pausing the virtual machine and archiving the entire virtual hard drive (which is, I should mention, a wonderful capability – the entire “machine” in one single, relatively small file, atomically copyable and restorable. In development I’ve used this endlessly to save various platform configurations, restoring to exactly the one that is pertinent for a particular need). However, there are endless possible, application-specific solutions to this sort of problem.
There’s also the issue that Microsoft doesn’t take kindly to releasing virtual machines based on their software, so perhaps this is a model that works best when the software you’re depending upon is freely distributable (within the confines of the license).