Edit and Continue – Valuable Tool, or Sloppy Vice?


The Expense of Errors

A widely demanded feature delivered with Visual Studio 2005
is “Edit and Continue” (http://winfx.msdn.microsoft.com/library/default.asp?url=/library/en-us/dv_vsdebug/html/2aed9caa-2384-4e49-8595-82d8b06cf271.asp):
the ability to alter running debug code to incorporate code changes on the
fly. You’re debugging and realize that you initialized a
variable to the wrong value, or your loop control has an off-by-one
error. You pause the run, quickly hash out the changes, the debugger builds
them into the running image, and you continue debugging from
where you were.
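
For illustration, here is a minimal, hypothetical sketch (in C++, with invented names and values) of the kind of bug the feature targets: a loop with an off-by-one error that you would spot mid-session.

    #include <cstdio>

    // Hypothetical example: sum the first `count` readings.
    // The loop condition has a classic off-by-one error (<= instead of <),
    // so it reads one element past the end of the array.
    int SumReadings(const int* readings, int count)
    {
        int total = 0;
        for (int i = 0; i <= count; ++i)   // bug: should be `i < count`
            total += readings[i];
        return total;
    }

    int main()
    {
        int readings[3] = { 10, 20, 30 };
        std::printf("%d\n", SumReadings(readings, 3)); // expected 60
        return 0;
    }

With Edit and Continue you would pause at a breakpoint inside the loop, change the `<=` to `<`, and resume the same session; without it, you would stop, edit, rebuild, and rerun just to get back to that point.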

Great feature that can be a tremendous time saver,
avoiding having to stop the session, make the changes, rebuild
everything, and then begin the developer test session anew.

Is it, coupled with similar tool advances over the
years, making programmers generally sloppier, though?

Sloppy Programming Habits

title="Photo Sharing"> "http://static.flickr.com/9/77778773_3112ef1d59_m.jpg" width="160"
height="240" alt="IMG_2983" align="right" vspace="8" hspace=
"8" />

Observing the habits of many peer developers in the
field, I would say that it and similar
advances absolutely have made us more careless in
general: The less expensive errors become, the
less checking and mental effort we’ll expend ensuring
that they don’t get into the code in the first place. We’re
continually pushing the onus of catching errors one level
higher.

To contrast with a slightly earlier time: way back
when (circa 1990), I was plugging away with DJGPP
(GCC for MSDOS; http://en.wikipedia.org/wiki/DJGPP),
editing the source files in a simple DOS text editor, exiting out,
building (a very time-consuming process, without conveniences like
precompilation), and then running. It was such an onerous,
expensive process that I put a significant amount of
care and concern into every single line of code. I would
follow up careful coding by going back and auditing every single
function and interaction to ensure that it was syntactically
accurate, but more importantly that it was logically
accurate.

After such a personal code audit, I was very confident
in the code, and it was very rare that an error made it any
further: The cost of an error making it to the next level
was high enough that I was very motivated to avoid errors in the
first place. The original level of quality was high enough
that few additional checks were actually needed – it simply worked
correctly for all scenarios.

Of course I had it incredibly easy in
contrast to those who programmed before me (I already had the benefit
of a significantly easier development process). I’m sure the folks
who programmed with punch cards redoubled and tripled the effort again,
achieving amazingly high at-origin quality levels in their code:
You can’t just spit something out when you’re printing and sorting
punch cards and then feeding them into the mainframe during
your tight allotted time window. Nor did programming assembly in
the 8-bit days leave much room for errors.

Contrast this with the habits of many developers today (myself
included at times): Spit out a bunch of code, occasionally hitting
compile/syntax check to automatically detect gross syntax errors.
Build and run, and if it blows up, follow the exception back to
the error and correct it. Drop into breakpoints and watch what the
values are to ensure they’re what you wanted (a modern variation of
printf debugging), and if they aren’t, use Edit and Continue to
quickly hash in some changes. Keep debugging. Run the TDD
(http://en.wikipedia.org/wiki/Test_driven_development) suites
to ensure that the superficial, incomplete collection of tests
“guarantees” that the code is “perfect”.
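
To make that concrete, here is a minimal, hypothetical sketch of such a superficial test set (the function and values are invented for illustration): the happy-path cases pass, the suite goes green, and a fault the tests never exercise ships anyway.

    #include <cassert>

    // Hypothetical function under test: average of `count` values.
    // It divides by zero when count == 0, a fault the superficial
    // tests below never exercise.
    int Average(const int* values, int count)
    {
        int total = 0;
        for (int i = 0; i < count; ++i)
            total += values[i];
        return total / count;   // undefined behaviour when count == 0
    }

    // The "TDD set": two happy-path cases, no boundary or empty-input
    // cases, yet the green bar reads as a guarantee of correctness.
    int main()
    {
        int a[] = { 2, 4, 6 };
        assert(Average(a, 3) == 4);

        int b[] = { 10 };
        assert(Average(b, 1) == 10);
        return 0;
    }

The point isn’t that such tests are worthless; it’s that an incomplete suite can substitute a feeling of safety for the careful up-front auditing described above.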

Toss the result over the wall to the QA department. They’re
likely running a macro script that tests a small sub-section of the
code, so there are few guarantees there either. In the corporate
space, they’ll throw it over the wall to the UA testers who again
will likely only catch the most obvious of errors.

Deal with the inevitable problems when failures occur in the
field, pointing out that such failures were bound to happen even with
the numerous layers of quality control you have in place.

Of course some developers will strongly object to even the
possibility of such a scenario: Their code is flawless at
inception, crafted with the utmost of care and concern, and they
need never evaluate their habits or tool usage because they
couldn’t possibly come closer to perfection. That level of
ridiculous denial is destructive on any team or project, and I can
offer no advice on how to solve or manage it (though it’s the
foolishness of the inexperienced, so generally developers grow out
of such bravado with time). Instead I choose to deal in the
real world, with real developers on real projects in real
organizations.

Additional Checks Are No Guarantee

For all of the process (including layers of QA, UA, regression
testing, and so on), at many shops a great number of errors aren’t
caught until the code reaches the field, which is why it’s critical
that they don’t enter the code in the first place.

Indeed, sometimes the addition of extra layers can
paradoxically increase the probability that errors will be
introduced in the first place: At one very large organization
where I observed development firsthand, developers would hand their
obviously flawed code (it was clear that there hadn’t been even
a superficial quality check) over the wall, knowing both
that there was a QA department that should catch these things (and
if it didn’t, then the fault for anything that slipped further was QA’s,
exonerating the developers even more), and that if that department did
find a fault, it came back as a largely ignored problem report that carried
few ramifications or negative implications.

The developers would then change precisely what was documented as
defective, rebuild, and resubmit.

Eventually the QA department would pass the code on to the UA
department, a group of user testers who simply relied upon
the comforting idea that the developers and QA surely would have
found any possible faults. UA could largely be relied upon to
restate long-known system limitations instead of verifying the
changes.

All of these layers relieved developers, and each of the other
layers, of real responsibility for defective code. Advanced
tools facilitated sloppy coding in the first place, and layer upon
layer of ineffective checks ensured that there was little actual
responsibility for faults that made it to the field. In the
corporate space, where developers generally don’t have a passion for
the software they’re creating, the result was often of questionable
quality.

False Efficiency

It would be an interesting experiment to have two concurrent
mid-sized projects, each completing the same task,
with one development team having a modern complement of
development tools, and the other with no ability to automatically
syntax check, run automated tests, or debug in any way outside of a
small number of scheduled debug builds and test sessions. It would
be interesting to evaluate both the overall timeline (did the tools
save much time?), and the quality levels of the resulting
product.

I believe that the results would be very surprising to
many software developers. In real-world projects (i.e. not
pre-project timelines, but actual post-mortem results),
approximately one half of development is dedicated to finding, and
fixing, software faults. Making individual faults cheaper to fix
may reduce the per-fault cost, but it might also increase the
frequency of faults to the point of being a net loss.

A comment that I frequently hear relates to the efficiency
of development: that modern tools make us so much more
efficient. Under the right conditions, and with proper
usage, this is certainly the case. Edit and Continue,
for example, could be a very useful feature once in a blue moon.
Yet from the outpouring of demand for that feature, one would
think that developers were crippled by the inability to alter
running code: the responsibility to craft quality code before
hitting build was just too overwhelming. This is a sign
that quality code craftsmanship is on the decline.

Tagged: [Software Development], [Programming], [Software-Development]