Software Development and Virtualization

[Image: IBM Electronic Data Processing Machine - GPN-2000-001881]

After a recent development workstation failure, I decided to go all in with virtualization of my organization’s software development platform.

I was already doing this to some degree, using my workstation as the server: Instead of setting up a separate development environment on my laptop and other machines, I would simply work via a remote desktop connection to my workstation.

Connectivity is ubiquitous, and it was extremely rare that this arrangement caused any issue at all: a remote desktop session is fully usable and responsive over large geographical distances, and even over iffy, sporadic connections.

This is hardly novel, of course. Such a remote system model is the starting point for many who primarily work in Linux. When I want to do something new on that platform I often simply spin up an EC2 instance, build the solution, and then save an AMI or back up the essentials for deployment elsewhere.
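As a rough sketch of that build-then-save-an-AMI workflow, something like the following boto3 script covers it. The AMI ID, instance type, key name, and image name are placeholders, not values from my actual setup.

```python
# Minimal sketch: launch a scratch build box, configure it, save it as an AMI.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Launch a throwaway instance from a stock base image (placeholder values).
run = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # placeholder base image
    InstanceType="t3.large",
    KeyName="dev-key",                 # placeholder key pair
    MinCount=1,
    MaxCount=1,
)
instance_id = run["Instances"][0]["InstanceId"]
ec2.get_waiter("instance_running").wait(InstanceIds=[instance_id])

# ... connect, build and configure the environment, then snapshot it as an AMI:
image = ec2.create_image(
    InstanceId=instance_id,
    Name="dev-platform-image",         # placeholder image name
    Description="Pre-configured development environment",
)
print("Saved AMI:", image["ImageId"])
```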

With the failure of my workstation, however, I’m doubling down. I am virtualizing my team’s entire development platform in the same way that we virtualize the server platform. Never again will a team member install Visual Studio or Eclipse on a desktop or laptop.

There are a number of significant advantages to doing this:

  • Laptops, remote devices, and random client PCs hold nothing of consequence. Data security is one of the most pressing industry issues right now, so this is a really big win: anything can become a competent, trustworthy development machine in minutes, presuming that you implement multi-factor authentication and can minimize the impact of keylogging.
  • There is no coupling between physical machines and developers. Use whatever, wherever.
  • You can go all in on performance and reliability: quadruple-redundant power supplies with a massive UPS and automatic virtual machine management, huge quantities of very high speed ECC memory, a RAID-6 storage platform with auto-tiering onto an array of PCI-E flash-storage devices, and extremely high speed cores aplenty. The whole platform is made redundant by a second server with hot and proactive migration.
  • Every workstation backed up regularly and incrementally. Add the ability to create and revert to snapshots on a moment’s notice (see the sketch after this list).
  • Shared GPU resources.
  • Extreme network performance between virtualized workstations and servers. Equipping the core platform with aggregated and redundant 10Gbps connections is an easy, economical proposition, versus trying to do the same for every workstation.
  • Spinning up new, pre-configured images in moments. While you can do this on stand-alone workstations as well, the process is much easier to manage and keep up to date in a virtualized platform.
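As promised above, here is a minimal sketch of the snapshot-and-revert workflow, assuming a KVM/libvirt host; the domain and snapshot names are placeholders, and other hypervisors (Hyper-V, vSphere) expose equivalent operations.

```python
# Minimal sketch: snapshot a virtualized workstation before a risky change,
# then roll back if it goes wrong. Names below are placeholders.
import libvirt

conn = libvirt.open("qemu:///system")
dom = conn.lookupByName("dev-workstation-01")   # placeholder VM name

# Take a named snapshot of the workstation.
snapshot_xml = """
<domainsnapshot>
  <name>before-toolchain-upgrade</name>
  <description>Checkpoint before upgrading the toolchain</description>
</domainsnapshot>
"""
dom.snapshotCreateXML(snapshot_xml, 0)

# ... experiment freely; if the change goes sideways, roll straight back:
snap = dom.snapshotLookupByName("before-toolchain-upgrade", 0)
dom.revertToSnapshot(snap, 0)

conn.close()
```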


My laptop becomes, essentially, a dumb terminal. Yet I’m rolling with insane performance that I couldn’t come close to achieving with the most expensive, bulky, battery-exhausting super laptop. Indeed, because my laptop itself does little but act as a terminal, its battery seemingly lasts forever, the system completely untaxed.

I can just as easily use any RDP client: an iPad, an Android device, a Chromebook. Either way I get an obnoxiously speedy development environment.

The one time that I faced a situation where connectivity wasn’t assured, I exported the VM and hosted the guest on my laptop. Modern processors make virtualization a close to free layer, so the result is little different from running the platform natively on the workstation.
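The export itself is mundane. As a rough sketch, again assuming a KVM/libvirt host with a single qcow2 disk (names and paths are placeholders), it amounts to capturing the machine definition and copying the disk image:

```python
# Minimal sketch: pull a guest down for offline use on a laptop.
import shutil
import libvirt

conn = libvirt.open("qemu:///system")
dom = conn.lookupByName("dev-workstation-01")      # placeholder VM name

# Capture the machine definition so it can be re-created on the laptop.
with open("dev-workstation-01.xml", "w") as f:
    f.write(dom.XMLDesc(0))

# Copy the (shut-down) guest's disk image alongside the definition.
shutil.copy("/var/lib/libvirt/images/dev-workstation-01.qcow2",
            "dev-workstation-01.qcow2")

conn.close()

# On the laptop: define the guest from the XML (adjusting the disk path)
# and boot it locally.
```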

Everything old, as they say, is new again. The primary argument against such a model is the classic “back to the old mainframe model” retort. Yet that is no retort at all; it tries to supplant rational discussion with era-related bigotry (e.g. we did something like that before, therefore we can’t do it again).

It isn’t a model that works for everyone, but for us it has worked spectacularly well. It is the future, or rather the present, of platforms.