Edit and Continue – Valuable Tool, or Sloppy Vice?


The Expense of Errors

A widely demanded feature delivered with Visual Studio 2005 is "Edit and Continue": the ability to alter running debug code to incorporate code changes on the fly. You're debugging and realize that you initialized a variable to the wrong value, or your loop control has an off-by-one error. You pause the run, quickly hash out the changes, the IDE builds them into the running image, and you continue debugging from where you were.
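To make the scenario concrete, here's a minimal C++ sketch (the array and names are my own invention, not from any particular project) of the kind of off-by-one bug you might patch mid-session rather than rebuilding:

    // The original condition here was "i <= 5", which reads one element
    // past the end of the array. With Edit and Continue you could pause
    // at the fault, change the condition to "i < 5", and resume the
    // session without a full stop-and-rebuild cycle.
    #include <cstdio>

    int main() {
        int values[5] = {2, 4, 6, 8, 10};
        int sum = 0;

        for (int i = 0; i < 5; ++i) {  // was: i <= 5 (off-by-one)
            sum += values[i];
        }

        std::printf("sum = %d\n", sum);  // prints "sum = 30"
        return 0;
    }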

It's a great feature, and it can be a tremendous time saver, avoiding the need to stop the session, make the changes, rebuild everything, and then begin the developer test session anew.

Is it, coupled with similar tool advances over the years, making programmers generally sloppier, though?

Sloppy Programming Habits


Observing the habits of many peer developers in the field, I would say that it and similar advances absolutely have made us more careless in general: The less expensive errors become, the fewer checks, and the less mental effort, we'll expend ensuring that they don't get into the code in the first place. We're continually pushing the onus of catching errors one level higher.

To contrast with a slightly earlier time: way back when (circa 1990), I was plugging away with DJGPP (GCC for MS-DOS), editing the source files in a simple DOS text editor, exiting out, building (very time consuming, with few niceties like precompilation), and then running. It was such an onerous, expensive process that I put a significant amount of care and concern into every single line of code. I would follow up careful coding by going back and auditing every single function and interaction to ensure that it was syntactically accurate, but more importantly that it was logically accurate.


After such a personal code audit, I was very confident in the code, and it was very rare that an error made it any further: The cost of an error making it to the next level was high enough that I was very motivated to avoid errors in the first place. The original level of quality was high enough that few additional checks were actually needed – it simply worked correctly for all scenarios.

Of course I had it incredibly easy in contrast to those who programmed before me, already enjoying a significantly easier development process. I'm sure the folks who programmed punch cards redoubled and tripled the effort, achieving amazingly high at-origin quality levels in their code: You can't just spit something out when you're printing and sorting punch cards, and then feeding the deck into the mainframe during your tight allotted time window. Nor did programming assembly in the 8-bit days leave much room for errors.

Contrast this with the habits of many developers today (myself included at times): Spit out a bunch of code, occasionally hitting compile/syntax check to automatically detect gross syntax errors. Build and run, and if it blows up, follow the exception back to the error and correct it. Drop into breakpoints and watch what the values are to ensure they're what you wanted (a modern variation of printf debugging), and if they aren't, use Edit and Continue to quickly hash in some changes. Keep debugging. Run the TDD suites to ensure that a superficial, incomplete collection of tests "guarantees" that the code is "perfect".
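To make "superficial" concrete, here is a purely hypothetical sketch (clamp() is an invented example function, not from any real project) of a test run that comes back green while guaranteeing very little:

    #include <cassert>

    // Intended to clamp x into the range [lo, hi].
    int clamp(int x, int lo, int hi) {
        if (x < lo) return lo;
        if (x > hi) return hi;
        return x;
    }

    int main() {
        // The entire "test suite": one comfortable mid-range value.
        // It passes, but the boundaries (x == lo, x == hi), negative
        // inputs, and the degenerate lo > hi case are never exercised,
        // so "all tests green" says far less than it appears to.
        assert(clamp(5, 0, 10) == 5);
        return 0;
    }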

Toss the result over the wall to the QA department. They're likely running a macro script that tests a small sub-section of the code, so there are few guarantees there either. In the corporate space, they'll throw it over the wall to the UA testers, who again will likely only catch the most obvious of errors.

Then deal with the inevitable failures when they occur in the field, pointing out how unavoidable such faults are, even given the numerous layers of quality control you have in place.

Of course some developers will strongly object to even the possibility of such a scenario: Their code is flawless at inception, crafted with the utmost care and concern, and they need never evaluate their habits or tool usage because they couldn't possibly get any closer to perfection. That level of ridiculous denial is destructive on any team or project, and I can offer no advice on how to solve or manage it (though it's the foolishness of the inexperienced, so developers generally grow out of such bravado with time). Instead I choose to deal in the real world, with real developers on real projects in real organizations.

Additional Checks Are No Guarantee

For all of the process (including layers of QA, UA, regression testing, and so on), many errors aren't caught at many shops until the code reaches the field, which is why it's critical that they don't enter into the code in the first place.


Indeed, sometimes the addition of extra layers can paradoxically increase the probability that errors will be introduced in the first place: At one very large organization where I observed development firsthand, developers would hand their obviously flawed code (it was clear that there wasn't even a superficial quality check) over the wall, knowing both that there was a QA department that should catch these things (and if they didn't, then it was their fault if the fault made it further, exonerating the developers even more), and that if that department did find a fault, it came back as a largely ignored problem report that held few ramifications or negative implications.

Change precisely what was documented as defective, rebuild andresubmit.

Eventually the QA department would pass on the code to the UA department, which was a set of user testers that simply relied upon the comforting idea that the developers and QA surely would have found any possible faults. UA could be relied upon largely to restate long-known system limitations instead of verifying the changes.

All of these layers relieved developers, and each of the other layers, of any real responsibility for defective code. Advanced tools facilitated sloppy coding in the first place, and layer upon layer of ineffective checks ensured that there was little actual accountability for faults that made it to the field. In the corporate space, where developers generally don't have a passion for the software they're creating, the result was often of questionable quality.

False Efficiency

It would be an interesting experiment to have two concurrent mid-sized projects, each completing the same task, with one development team having a modern complement of development tools, and the other with no ability to automatically syntax check, run automated tests, or debug in any way outside of a small number of scheduled debug builds and test sessions. It would be interesting to evaluate both the overall timeline (did the tools save much time?) and the quality levels of the resulting products.

I believe that the results would be very surprising to many software developers. In real-world projects (i.e., not pre-project estimates, but actual post-mortem results), approximately one half of development is dedicated to finding and fixing software faults. Making each fault cheaper to find and fix may reduce the per-fault cost, but it also might increase the frequency of faults to the point of being a net loss.
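As a rough back-of-the-envelope illustration (the figures below are invented purely to show the arithmetic, not measured from any project), halving the cost per fault doesn't help if the fault count more than doubles:

    #include <cstdio>

    int main() {
        // Careful, tool-poor team: fewer faults, each expensive to chase down.
        double careful_faults = 50, careful_hours_per_fault = 2.0;
        // Tool-rich but sloppier team: each fault is cheaper to fix,
        // but more of them slip into the code in the first place.
        double sloppy_faults = 120, sloppy_hours_per_fault = 1.0;

        std::printf("careful team: %.0f hours on faults\n",
                    careful_faults * careful_hours_per_fault);   // 100 hours
        std::printf("sloppy team:  %.0f hours on faults\n",
                    sloppy_faults * sloppy_hours_per_fault);     // 120 hours
        return 0;
    }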

A comment that I frequently hear relates to the efficiency of development – that modern tools make us so much more efficient. Under the right conditions, and with proper usage, this is certainly the case. Edit and Continue, for example, could be a very useful feature once in a blue moon. Yet from the outpouring of demand for that feature, one would think that developers were crippled by the inability to alter running code: the responsibility to craft quality code before hitting build was just too overwhelming. That is a sign that quality code craftsmanship is on the decline.
