The Most Destructive Metric in Software Development

Software development is a difficult task to meter.

It’s not for lack of trying.

For decades consultants have been evangelizing methods that, they claim, allow an unskilled, casual observer to easily measure and compare productivity in a context-agnostic way.

Their ultimate goal: To allow a drop-in manager, with only a superficial knowledge of the activities, skills, and complexities of a task or project, to easily compute metrics by which to dole out the frequency and intensity of whippings and rewards.

[Aside: Before anyone incorrectly presumes any of this is critical of software development managers, as a group or individually, realize that it is nothing of the sort: I start with a brief analysis of the goal of such simplistic measures — most organizations would like positions, including management, to be lower-skill and easier (cheaper) to fill, and such a simplification of the role is definitely in their interest, just as many dream of the panacea of no-skill, factory-type software development — and then question the fact that developers themselves are often guilty of quoting these metrics. Nine times out of ten, developers have only themselves to blame for many of the profession’s problems. This is not yet another boring us-versus-them, war-cry-pandering piece, like those that frequently top the meme charts.]

February ConsultaMark(SM) ProductoMatrix(TM) Results

Cog       Output   Proposed Action
Tom       117.6    2% Raise At Year End
Amy       111.2    1% Raise At Year End
Jacob     92.7     Forced Overtime
Serene    85.5     Replace LCD with the 14″ VGA monitor from the server room
Nellis    68.0     Creative Dismissal

The same methods — if they worked as promised — could be used to chart project progress (“We’re 7868.2 ConsultaMarks towards the 11273.9 estimated for the entire project!“).

Instead of relying upon the from-the-trenches observations of Randal, the development group manager — a grizzled vet of software development who manages with a hands-on style, becoming intimately aware of the domain challenges and unique contributions of each team member — Lynn, the parachuted-in middle manager, wants some simple numbers that can be sorted like her mutual fund returns, giving her ready sacrificial lambs when the next diversion-from-massive-executive-fumbles headcount reduction comes due.

Many proposed solutions have come and gone, with the most persistent being the infamous SLOC (Source Lines of Code)/LOC measure.

Source Lines Of Code


SLOC, if you haven’t been afflicted with it, is an easily computed count of the number of lines of code in a given project/component/object (although first you have to agree on the definition of a “line of code”, and this is a point of debate among SLOC champions). It’s often used to count the number of lines of tested, complete code added by a particular contributor (easily accomplished with many source code repositories), allowing for the easy creation of nice little charts like the one above.
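To show just how mechanical the metric is, here is a minimal sketch of a SLOC counter in Python. The rule it applies (count non-blank lines that are not pure comments) is only one of the contested definitions mentioned above; other counters include comments, or count logical statements rather than physical lines.

```python
import sys

def count_sloc(path):
    """Count 'source lines of code' under one possible definition:
    non-blank lines that are not pure comments. Even this simple
    rule is debated among SLOC proponents."""
    sloc = 0
    with open(path, encoding="utf-8") as f:
        for line in f:
            stripped = line.strip()
            # Skip blank lines and lines that are only a comment.
            if stripped and not stripped.startswith("#"):
                sloc += 1
    return sloc

if __name__ == "__main__":
    for path in sys.argv[1:]:
        print(path, count_sloc(path))
```

Note that two counters disagreeing by even this much (comments in or out) is enough to make cross-team comparisons meaningless.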

SLOC does have some quasi-legitimate uses: Given a common programming language and domain complexity, SLOC magnitude differences have a moderate correlation with general project size, and at the method level it is a rough indicator of gross complexity (see the article FxCop & Cyclomatic Complexity for a discussion of a loosely related metric, which is the number of intermediate language instructions generated from a method).

Applied at the individual or group level, usually as a cheap substitute for good management and project awareness, SLOC measurements are likely to encourage very destructive behaviors: Copy/paste coding, limited reuse of existing code found elsewhere in the organization and the industry, little motivation to prune code where necessary, overly convoluted coding, motivation for employees to only take on trivial coding tasks, and so on.

The LemonMetric

Envision a system that ranked cooks by the number of lemons they use to provide a restaurant’s service each night: You’re going to end up with a lot of dishes featuring copious stacks of lemons, even if ultimately it compromises the quality and organizational health of the establishment. While in some situations you could conceivably roughly compare overall restaurant success by the number of lemons they go through in a period, the comparison only holds true if all else remains equal (e.g. if otherwise the restaurants are very comparable, such as two restaurants serving Thai food): A deli restaurant might use very few lemons despite a healthy customer turnover, where an equally successful Greek restaurant might go through hundreds.

Far more logical would be to measure the number of dishes served — while still imperfect, it would be much more useful than the LemonMetric. Software development has no comparable measure, at a similar level of granularity, to “dishes served” (don’t even think of offering the highly ambiguous “function point” metric as a stand-in).

Preaching To The Absentee Choir

“Geez… we all know that there are significant problems with the SLOC metric!” many will inevitably retort. “This is old news. You’re preaching to the choir!

“…but having said that, I saw a recent article claiming that the average developer produces {X} lines of vetted code a year. Are they really that slow? My team and I must generate at least 20{X} a month! I hear that some superstars are responsible for 200,000 SLOC a year. They must be awesome!”

Comments just like that are probably being typed into a TEXTAREA at this very moment.


Why do so many comments about productivity — even in the comfort of secret No-PHB hideouts — inevitably elicit gloating commentary about personal SLOC accomplishments? Why do we hear gushing superlatives about the “superstars who push out 100s of thousands of SLOC a year”?

Why do so many in this industry perpetuate this destructive myth?


Let me flip this metric on its head, and state that, if anything, for a certain domain of project, and a certain class of developer, a high rate of SLOC can actually indicate poor programming practices.

In the nascent days of software development, many teams had a compiler or an interpreter, and that was pretty much it. They were responsible for building the majority of functionality from scratch. The pace of SLOC creation was tremendous, especially given that much of that implementation was trivial, allowing developers to code as fast as they could type. Little time needed to be spent problem solving or planning: it doesn’t require a superstar to code yet another string copy function.

As time went on, organizations compiled volumes of reusable internal code for all of their domain specific problems.

From an individual developer perspective, no longer was it acceptable to simply “run and start coding”. Now you had to spend some of your time learning, assessing, and implementing shared internal code in your projects.

And it wasn’t just in house: The frameworks and libraries provided with our tools have been growing by leaps and bounds, immediately solving a huge range of traditional problems and tasks with well tested, robust, feature rich solutions.

In the industry as a whole, code sharing has become widespread, with excellent code being available for virtually all common (and even uncommon) tasks.

So many solutions are available in the industry and supplied within our libraries/frameworks, that even organizational code reuse can be indicative of a problem.

Yet somewhere out there someone is hand-writing an FTP client implementation. Somewhere developers are wasting a tremendous number of man-hours by poorly, and unintentionally, duplicating code that exists in the frameworks and libraries that they’re already using, or which can be easily found in license compatible open source projects.
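As an illustration (the host, credentials, and file names below are placeholders, not details from any real project), Python's standard-library ftplib already covers that hand-written FTP client in a handful of lines:

```python
from ftplib import FTP

def fetch_report(host, user, password, remote_name, local_path):
    """Download a file over FTP using only the standard library.
    No hand-rolled protocol code, socket handling, or RFC 959
    command parsing required."""
    with FTP(host) as ftp:
        ftp.login(user=user, passwd=password)
        with open(local_path, "wb") as f:
            # retrbinary streams the remote file to our callback
            # in binary mode, handling the data channel for us.
            ftp.retrbinary(f"RETR {remote_name}", f.write)
```

The hand-rolled equivalent would produce vastly more SLOC — and, by the metric under discussion, a vastly "more productive" developer.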

Not Invented Here

A part of the reason for this is laziness — it’s a real bother having to look through the documentation and among search engine results, and that’s hardly as much fun as just coding. Another part of the reason is a classic perception flaw that virtually all developers suffer from: Endless optimism about the capabilities and quality of the code we produce — which we always think we’ll finish much quicker than we really will — coupled with an unreasonable pessimism about the applicability or worth of code we could source from another group in the organization, or from an external source. How could it ever compete with our imaginary idealized solution?

I’m often guilty of these failures of perception, as are the overwhelming majority of developers.


Rarely does a developer actually tread across new ground (and I’m certainly not just talking about business back-end “CRUD” developers — even in signal processing, embedded development, game development, and other less common branches of software development, most of the “solution” is the integration of existing work in novel ways, adding an envelope and façade of customization).

For the rest of us, the job is partly to write the generally small amount of niche-specific code — usually aiming for the most concise (i.e., minimal) implementation necessary — with the bulk of our time spent analyzing and integrating the extraordinary volume of available solutions.

Where niche, custom code is necessary, it will generally be for a non-trivial task, and the SLOC pace will be unavoidably glacial.

For the overwhelming majority of developers in the industry, the only value of SLOC measures is as a warning sign, not an indication of progress.