Build One To Throw Away

Fred Brooks Jr. famously said that you should plan to throw one implementation away, because you will, anyhow. I remember hearing this from a senior developer at my first real professional programming job. It sounded defeatist and, well, wrong.

I was a green developer (this was, incredibly, two decades ago) and had the notion that you just had to think long and hard and plan right, and the first design, and the implementation that flowed from it, would be perfect. Forever you would build a growing empire of great solutions, never looking back.

I was wrong. He was right. Or at least right in the sense that you need to constantly reassess what you’ve done and how you’ve done it, throwing away and rebuilding where necessary. You aren’t building the entire solution and then discarding it, of course, but you’ll probably change the interfaces, the classes, the implementation approach, the formats, and so on while you build it, likely multiple times. This is the natural state of most successful projects, versus the imagined one where you build a structure ever outwards, expanding on those superlative foundations.

Indeed, this sort of endless refactoring is a core tenet of agile development: you need to regularly take time to go back and correct wrong assumptions and choices you’ve since learned were mistakes, and re-balance your program (which might mean moving code to different execution units, changing interfaces, changing remoting, APIs, protocols, and so on). You also need to understand and accept that code really does, in a sense, rust: changes in the industry, in the problem space, and in our own experience and knowledge can make that carefully constructed internal solution irrelevant, if not self-destructive, as time passes without cleaning.

Occasionally I come across projects where teams did everything possible to avoid this “be ready to throw one away” reality of software development. The result is always to the detriment of the project: arbitrary and unnecessary layering that no longer makes any sense but has been firmly cemented in; overly vague interfaces (when you are dedicated to doing things only once, there is often a desire to make everything “versatile” to the point of absurdity, such as an interface with a single method that takes a dictionary, used completely differently in every implementation); or, worse, the ridiculous “database within a database” design that every green developer eventually ends up implementing at least once (“the one table to rule them all means that our implementation is done. Never will we have to change the schema again!”).
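To make the “database within a database” trap concrete, here is a minimal, hypothetical sketch of that one-table design: every record is an (entity, attribute, value) triple, with all values stored as strings. The names (`rows`, `get_attr`) are illustrative, not from any real project.

```python
# Hypothetical "one table to rule them all": every fact is an
# (entity_id, attribute, value) row, and every value is a string.
rows = [
    (1, "type", "invoice"),
    (1, "total", "19.99"),
    (1, "due", "2024-01-15"),
    (2, "type", "customer"),
    (2, "name", "Acme"),
]

def get_attr(entity_id, attribute):
    """Every read is a linear scan; every value comes back as a string."""
    for eid, attr, value in rows:
        if eid == entity_id and attr == attribute:
            return value
    return None

# The schema never changes, but the *real* schema now lives in application
# code: callers must somehow know that "total" should be parsed as a number,
# that invoices have a "due" date, and so on. Nothing is enforced.
total = float(get_attr(1, "total"))
```

The physical schema is indeed frozen, but only because all the actual structure, typing, and integrity rules have been pushed out of the database and into every caller, which is exactly the kind of decision that eventually needs to be thrown away.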

All of this came to mind as I work on a mobile application (one that will, I think, make pretty big waves quite shortly). The first implementation is for Android, given that it isn’t served as well as iOS in this particular realm. I think it’ll be kind of a big deal. Enough so that I’m already starting to worry about the prospect of other developers working on the code, which is currently a bit of a hodgepodge as I discover the features that don’t work quite as cleanly as the documentation might have you believe (deal with OES textures, media services, and the camera services, and discover the fun and delights that each different device presents). You end up with code that is less than ideal as you adapt and patch and build out a robust solution. Add to that the fact that during implementation the feature scope, the interface, and the functionality are all being driven by performance discoveries that I couldn’t possibly have known in advance.

It ends up, temporarily, being pretty ugly code. But it’ll get me to v1.0, at which point I’ll worry about making it pretty enough to endure the scrutiny of others. That’s the reality of software development.