The Full Stack Developer / Computer (Un)science

I have nothing technically interesting to discuss right now, but various half-finished, lazily conceived thought essays have sat around for a few weeks, so here they are in somewhat rough form. These are not viable for general consumption — being wordy and contentious and not terribly interesting — and are not intended for social news (if you found your way here from social news, click back and maintain your innocence), but are contemplation pieces for existing readers looking for a bit of time filler.


As with most narrative-style content on here, this is very subjective. I expect that many will disagree with some of the suppositions, and I of course welcome disagreement. You can even send me an email if so inclined.

The Full Stack Developer

Small shops and rapid growth startups need full-stack developers.

They need people who can ply the craft, with motivation and self-direction, all the way from the physical hardware (including VM provisioning and automation as appropriate), through the networking, OS, application server, database, API and presentation levels, selecting technologies and securing them appropriately. They need practitioners who might jump between building some Go server-side business logic — clustering the backing databases and applying appropriate indexes and optimizations to maximize TPS — to working on the Angular presentation JavaScript, to building some Puppet automated deployment scripts and researching new AWS products to see how they impact the plans, then rebuilding nginx from source to include custom updates to a module, installing SSL certs configured for an A score and PFS, and adding some SIMD to some calculation logic with dedicated source variations for NEON, AVX and SSE. Maybe they’ll jump into Xcode to spin some Swift for the new iOS app, and then develop some ETL tasks to pull in some import data. Then they relax and collaborate on some slides for an investor proposal until they’re pulled aside to integrate DTLS into an Android app that was using unsecured UDP.
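To make just one item in that list concrete (the NEON/AVX/SSE source variations), the usual pattern is to detect CPU features once at startup and dispatch to the best implementation available. Here is a minimal sketch in Go using the golang.org/x/sys/cpu package; the sumAVX2 and sumNEON bodies are stand-ins, since real variants would live in architecture-specific source or assembly files:

```go
// Sketch of runtime CPU-feature dispatch. Requires: go get golang.org/x/sys
package main

import (
	"fmt"

	"golang.org/x/sys/cpu"
)

// sum is the function the rest of the program calls; the best variant is
// selected once, at startup.
var sum = sumGeneric

func init() {
	switch {
	case cpu.X86.HasAVX2:
		sum = sumAVX2
	case cpu.ARM64.HasASIMD:
		sum = sumNEON
	}
}

// sumGeneric is the portable fallback.
func sumGeneric(xs []float32) float32 {
	var s float32
	for _, x := range xs {
		s += x
	}
	return s
}

// Stand-ins: in a real project these would be SIMD implementations kept in
// architecture-specific files.
func sumAVX2(xs []float32) float32 { return sumGeneric(xs) }
func sumNEON(xs []float32) float32 { return sumGeneric(xs) }

func main() {
	fmt.Println(sum([]float32{1, 2, 3, 4})) // 10
}
```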

Some definitions of “full stack” add or remove layers — at some shops, simply being able to make a stored procedure in addition to the web app qualifies — but the basic premise is that there isn’t the traditional separation of concerns where each individual manages one small part of the project. Instead, everyone does essentially everything.


Having full stack developers is absolutely critical when the headcount is small (or if you’re doing your own venture) and you just need to Get Things Done. You need people who aren’t constantly spinning while waiting for various other people to do things that they depend upon but can’t do for themselves, throwing their arms up under the comforting excuse of Someone Else’s Problem.

But doing full stack development in practice sucks. It is sub-optimal on the longer timeline. It’s an emergency measure that should not continue for longer than absolutely necessary. It can be a crutch for staffing issues.

Each context switch imposes a massive productivity penalty. It’s bad enough when you’re working on some presentation code and get distracted by a meeting, losing your train of thought and mental context. Now consider the penalty of jumping from presentation code to an entirely different language, platform, and paradigm to put out a small fire, then returning to where you were.

When voluntary and controlled, these sorts of project gear shifts can be hugely rewarding and beneficial (many developers hate working constantly on the same thing, growing malaise and carelessness from boredom), but these redirections are productivity sapping and stressful when they regularly occur as crises or due to poor planning, or simply because the headcount is so small that you have no other choice. It may be critical to the firm, but it’s detrimental to focused productivity.

Such is the life of the full stack developer.

I’ve done the full stack thing for much of my career. Not always because the headcount was small (though I do prefer to work on small teams), but sometimes just due to knowledge gaps elsewhere in organizations: When software developers flame out they often get moved to network administration/engineering, or database administration. With the right motivation and aptitude that can be a great fit, but often it just trades some temporary unpleasantness for making everything more of a hassle for everyone else, as someone acts as gatekeeper for a role they’re ill-suited to and unmotivated to become competent in.

When the “DBA” only knows how to back up and restore a database, but imposes themselves as overhead on every activity purely out of turf defensiveness, and the network admin has a seemingly strategic incompetence that forces you to do everything yourself (being the lead architect of an organization, with the top salary in the firm…fixing Exchange issues), it’s just bad for everyone. Everything gets a little worse and a lot slower.

Being a full stack developer imposes an enormous amount of overhead in trying to stay on top of everything across the entire domain, and it’s simply impossible to know it all comprehensively. So every area, by the simple limits of time, will be compromised to some degree. Many “full stack” implementations have betrayed this reality, with security rife with amateur mistakes, or a gross misuse of the database (which is incredibly common: if your database is a recurring problem point and you’re searching for silver bullet fixes, it’s more likely the developers who are the weakness).

There is no way I can coerce and optimize AWS as much as someone focused on it. Nor can I monitor and tune and finesse the database to the same completeness as someone dedicated to doing that, on a project where it might comprise 5% of my time. Nor can I spend the time to analyze every facet of security that someone focused purely on intrusion evaluations can. There is only so much attention and mental focus to go around.

While a developer tasked with the API might build out a well-planned, focused, coherent API, the full stack developer is busy adding things on an as-needed basis, tightly coupled to their specific immediate needs.

And forget about documentation. Or comprehensive tests. Did you notice I just automated the cluster deployments and their monitoring, configured the IPSec intra-machine security, and built the shared-memory module for nginx? And then I dealt with the consequences of the messaging server failing, after migrating the presentation code from Angular to Angular 2. And you’re bugging me about documentation? Let me just finish up that disaster recovery solution first.

I do this because I have to, and because I get paid to do it (often on projects at a state where a very rapid but focused build-out is necessary), and it’s a necessity of circumstances unique to the roles I fill. But on a long enough timeline (where the survival rate for everyone drops to zero), you should have dedicated people focused on getting the most out of domains as granular as possible, the focus intensifying as the team grows. If you have two dozen generalists all doing everything, you chose the wrong path somewhere along the way.

Though let me add the caveat that as people specialize, they must become actual experts (in knowledge and practice and not just in theory) who provide service levels and responsiveness. Not just flamed out dregs filling a seat, acting as conceptual gatekeepers while imposing lengthy useless delays.

Oh how I dream of having DBAs who would actually alert me to database hot points, or suggest optimal indexes, partitioning schemes and usage patterns that would improve performance. Or of having security experts actually kick the tires and look at the protocols in depth and give a better sense of comfort, instead of the clichéd “some guy who earned the benefits of the Dilbert principle and now imposes multi-week delays on your project because he’s the security ‘gatekeeper’, but whose analysis will be so superficial that it adds no utility or value” (true story! That was at a large banking group, as an aside, and it was the glorious cargo cult illusion of security: the more onerous and inconvenient the process, the more the illusion of security was realized).

Get actual experts, specialized but working together on common solutions, with common motivations. Not generalists focused on making their presence known in minor turf wars.

Having said all of that, some shops ask for full stack developers when they actually want you to specialize. Meaning that they want developers who understand the workings of modern hardware, how the operating system functions, and how the network, database, proxy server, application server, and all of the other parts of the platform work. Then they want you to focus on your specific domain and solution with that knowledge in the back of your mind: considering cache levels and their impact on performance, the overhead of I/O and network communications, and how UDP and TCP and sliding windows impact your work; how to write vectorizable code, and how what you’re doing impacts other projects, etc.; the basics of the major facets of encryption (symmetric versus asymmetric, elliptic curve versus RSA, the modes of each, etc.). Facebook is the poster child for demanding this, and I have absolutely no criticisms or complaints about that. Their full stack developers aren’t really full stack developers.
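As a small, hedged illustration of the “cache levels” point, consider the classic traversal-order effect. Both loops in this Go sketch (sizes arbitrary) do the same arithmetic, but the row-major version walks memory sequentially while the column-major version strides across cache lines, and typically runs several times slower on large matrices:

```go
// Demonstrates cache-friendly vs cache-hostile traversal of a 2D array.
package main

import (
	"fmt"
	"time"
)

const n = 4096

func main() {
	grid := make([][]int32, n)
	for i := range grid {
		grid[i] = make([]int32, n)
	}

	start := time.Now()
	var a int64
	for i := 0; i < n; i++ { // row-major: sequential memory access
		for j := 0; j < n; j++ {
			a += int64(grid[i][j])
		}
	}
	fmt.Println("row-major:   ", time.Since(start))

	start = time.Now()
	var b int64
	for j := 0; j < n; j++ { // column-major: a large stride per element
		for i := 0; i < n; i++ {
			b += int64(grid[i][j])
		}
	}
	fmt.Println("column-major:", time.Since(start))
}
```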

An expectation that developers understand the consequences of their design choices, based upon a good knowledge of the platforms and systems they’re developing on, should be universal.

Computer Unscience

The nutrition and software development fields have a lot in common.

In both, flawed or incomplete dogma makes the rounds and the headlines. “Studies”1 — often agenda driven — showing some correlation or Hawthorne effect are held up as critical proofs that change everything.


We want quick fixes, loosening our skepticism in their pursuit. Something we can adopt to quickly become a competent manager or a 10x programmer or team, eradicate all errors and security concerns, clear blemishes, lose weight, have more energy, and eliminate those persistent headaches.

The easiest way to find yourself in someone’s favor is often to parrot their current quick-fix beliefs. “Couldn’t have said it better myself!” they’ll exclaim, declaring you the smartest person they know — barely concealed self-congratulation — because you support their current notions about NoSQL, Rust2, gluten or fat. Whether it’s fervent advocacy of TDD, or of the evils of carbohydrates, the same ego-driven, “the more I believe and the more I advocate, the more it’s true!” flawed motive comes into play.

People in the sales industry know how to exploit this mimicry effect well, and it’s the one aspect of the consulting world (where sales are a fundamental element of the role) that I find most unpalatable: Many people seek outside assistance primarily to confirm their beliefs, often while empire building or positioning allies in internal turf wars.

The hiring process in many firms has sadly been diminished to a group of people with their current set of pseudo-science beliefs and cargo cult behaviors searching for someone who aligns with their biases. Who hasn’t sat in an interview where a coworker repeatedly asks specific trivia about some technology, philosophy or dogma that they very recently adopted, looking for validation of some new Belief structure?

You can usually determine the current trends by following the tech social news sites for a week or so. Then wait for the “boy, that really wasn’t a silver bullet!” follow-up cycle of blog posts a year later as trends come in and out of favor, the early adopters’ euphoria turning into “a period of wallowing in the depths of cynicism” (taken from James Bezdek’s glorious editorial in IEEE Transactions on Fuzzy Systems). It is just as superfoods and macronutrients and dietary evils fall in and out of favor in the world of nutrition, waves of converts to a trend later seeking to vilify it to explain their personal failures.

What ground up ancient berry should you be mainlining today? What methodology or language or tooling is going to turn your team into superstars?

A 20 Year Comparison

My 11-year-old son — who has been gaining competence in C# and JavaScript via the Unity platform for a couple of years now, motivated by the urge to create fun things for and with his friends — recently asked me what has changed in software development over my career: What innovations and progress have shot the field forward, making plying the craft today different from back then.

The silver bullets[PDF], so to speak.

So I sat in a darkened room, Beethoven’s Piano Concerto No. 5 quietly playing in the background, contemplating a 20 year contrast (which was pretty much when I entered the industry as a professional developer).

2016 versus 1996, from the perspective of widespread software development practices during those two periods (i.e. the fact that something was used in a university somewhere, or existed as a nascent technology, methodology or approach, still makes it a non-factor in the 1996 consideration). I am not considering niche development fields, such as how software is developed for NASA, the military, nuclear power plants or unique snowflake projects. Nor am I considering “sweatshop”-style development, where some low-complexity project is farmed out across hordes of low-cost, often low-skill, factory-style development groups. These have unique needs and patterns, and are not the subject of this conversation.

I should also explain that none of this is motivated by resistance to change or “all one has is a hammer” motives: Over the years I’ve utilized many of the innovations hyped at the time, but at a later point realized (and continue to realize) that everything old is new again, and that this is an industry of perpetual hype cycles. Object-oriented, aspect-oriented, CASE, UML, every ERD variant, Slack, DI, TDD, IRC, XP, pair programming, standing desks, office work, remote work, open plan, private offices, functional programming, COM/DCOM/CORBA, SOAP, document oriented, almost every variation of RDBMS and NoSQL solution, and on and on and on. I’ve plied them all.

So what are the factors that, in my personal opinion, really changed the field? If I were to compare work practices in 1996 versus today, what would be the things that stand out the most? The things that I would miss the most if I forced myself into a re-live-1996 programming exercise?

I’m excluding platforms, as that’s a wholly separate discussion, irrelevant in this context; of course I would miss Android and iOS and Windows 10 and modern Linux and LVM and Hyper-V and all of the related technologies that greatly enhance our pursuit of excellence. Here I’m talking purely about the process of crafting software, and the tools and techniques that we use.

The Things I Would Miss Most

  • Source control
  • The internet
  • The scope and quality of libraries available
  • Free tooling
  • Concurrency and thread safety

Source control has existed in some form for many decades (the best known early iteration being SCCS), but didn’t become widespread until close to the turn of the century. Prior to that, many teams and individuals used a shared folder of the current source (and sadly some still do!), occasionally creating a point-in-time archive.

Source control is the enabler of collaboration, and the liberator of change. Even when working as an individual developer, source control frees us from the paranoia that we’re always on the precipice of destroying our project, allowing us at any moment to investigate what happened when, understanding the creeping change in our creations. I check in frequently; it is the foundation that enables very rapid progress, and it provides the metadata to recall the motives and intentions behind my prior activities.

The widespread adoption of source control hugely enhanced productivity, accountability and quality across the industry. We’ve gone through several dominant tools during that period (SCCS, RCS, CVS, SourceSafe, Subversion, TFS, Hg, git), and while incremental improvements bring massive advances to certain types of work (e.g. projects at Linux kernel scale), the general value was there from early on.

The internet brought obvious benefits because it allowed for close to real-time collaboration with peers across the industry, whether via Usenet newsgroups or, more recently, on sites like StackOverflow. A world of documentation and libraries and code examples became available at our fingertips (which I contrast with the giant stack of Visual Studio manuals I started with, memorization of every API being a requirement for any sort of velocity).

Of course the internet existed in 1996, but the ability to quickly find people who’ve faced the same unique problem set, and to learn from and adopt their discoveries, is an enormous productivity boost. Projects could get hung up on minor issues for days to weeks — some unloved Usenet newsgroup post lying ignored for weeks — where now a fix is often seconds away.

Libraries allow us to develop on the backs of giants. I specifically say libraries rather than frameworks because the former is pure gain, while the latter is much more nuanced, its gains often hard to quantify. Many frameworks exist primarily as a structural approach rather than beneficial code (libraries are steroids; frameworks are a training regime), and are often the manifestation of developer ego.

Libraries allow us to create a program in minutes, on virtually any platform in virtually any language, that can receive files via HTTP/2, decompress them, decompose and analyze them (e.g. computer vision, OCR, etc.), reprocess them, and push them via XML to a far-off system. The scope, scale and quality of the library universe is so enormous that almost anything is made easy.
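As a rough sketch of that claim, here is a small Go program built entirely on the standard library: it fetches a compressed file (net/http negotiates HTTP/2 transparently over TLS where the server supports it), decompresses it, performs a trivial stand-in analysis, and re-emits the result as XML. The URL and the Report record are placeholders:

```go
// Fetch, decompress, "analyze", and re-emit as XML, stdlib only.
package main

import (
	"bufio"
	"compress/gzip"
	"encoding/xml"
	"fmt"
	"log"
	"net/http"
)

// Report is a hypothetical record pushed to the downstream system.
type Report struct {
	XMLName xml.Name `xml:"report"`
	Source  string   `xml:"source"`
	Lines   int      `xml:"lines"`
}

func main() {
	url := "https://example.com/data.txt.gz" // placeholder

	resp, err := http.Get(url)
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()

	// Decompress the response stream.
	gz, err := gzip.NewReader(resp.Body)
	if err != nil {
		log.Fatal(err)
	}
	defer gz.Close()

	// "Analyze" it: a stand-in for the OCR/computer-vision step,
	// here just counting lines.
	lines := 0
	scanner := bufio.NewScanner(gz)
	for scanner.Scan() {
		lines++
	}
	if err := scanner.Err(); err != nil {
		log.Fatal(err)
	}

	// Re-emit as XML for the far-off system (stdout here).
	out, err := xml.MarshalIndent(Report{Source: url, Lines: lines}, "", "  ")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(string(out))
}
```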

Free tooling raised the status quo across all developers. There are a number of fantastically good IDEs and compilers and libraries that in 1996 were a significant expense. Even if you worked in a money-rich corporate space, the process of procuring tools was often so laborious and ridiculous that many teams simply hung back with sub-par tooling and outdated IDEs/compilers. Now everyone is a download away from the best tools and platforms in the world.

Concurrency and thread safety were barely a real concern for mainstream development in 1996, but modern languages and tooling offer an enormous number of solutions for them, including in modern variants of C++. It would be crippling to develop 1996-style, without these benefits, while targeting modern hardware.
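For a sense of what that tooling gives us for free today, here is a minimal Go sketch: goroutines fan the work out, a mutex guards the shared state, and the toolchain’s race detector (go run -race) will flag the data race if the lock is removed. The names are illustrative:

```go
// Minimal structured-concurrency sketch: fan-out, shared state, join.
package main

import (
	"fmt"
	"sync"
)

func main() {
	var (
		mu    sync.Mutex
		total int
		wg    sync.WaitGroup
	)

	// Fan the work out across goroutines; the mutex guards the counter.
	for i := 1; i <= 8; i++ {
		wg.Add(1)
		go func(n int) {
			defer wg.Done()
			mu.Lock()
			total += n
			mu.Unlock()
		}(i)
	}

	wg.Wait()
	fmt.Println("total:", total) // 36, deterministically, despite concurrency
}
```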

But What About…

Early in my career, one of the hottest this-changes-everything developments was CASE (Computer-Aided Software Engineering) tools. These very high-priced tools, advertisements for which dominated every developer magazine, promised to change the field: the program manager would drag and drop some requirements and generate high-quality, complete applications.

UML later came and promised the same. An architect would contrive some UML diagrams, and the rest would be easy.

Both are close to irrelevant now. Both brought very little benefit, but everyone was chasing the silver bullet.

And of course I said nothing about C# (Java of course existed in 1996, though with much more rudimentary tooling), garbage collection in general, Go, C++14 or any of the other iterations, Python, and countless other languages. There are a lot of things that I love and enjoy about modern languages, but the truth is that their benefit is significantly oversold. A huge bulk of the solutions we enjoy today, and many of the critical libraries and technologies that we rely upon, continue to be developed in a 1990s, if not 1980s, variation of C. Of course some newer features are used, but if for some reason all C/C++ compilers mysteriously reverted to circa-1996 variations (from a language perspective; obviously losing newer optimizations and target support would be detrimental), it would be relatively simple to adapt.

None of that is to say “give me the old timey ways…this newfangled stuff stinks”; it’s simply that when developing real solutions it’s surprising how little of a difference it really makes. Whether I’m using C# or 1996-level Object Pascal, Go or C++ circa 1996…it just isn’t that big of a difference. With each iteration of the C++ spec I initially have an “ooh, wow, that’s great” enthusiasm, but in retrospect a lot of it feels like moving deck chairs around.

It just doesn’t make a significant difference. But we hype each iteration up as a This Changes Everything…Again! revolution.


1 Virtually every study in the programming field is some variation of “get four university juniors together and have them create a trivial project using technique 1 and technique 2. Compare and contrast and then project these results across the industry.”

In the same way, many tools and techniques are heralded for the cost savings in the first hours and days of a project, which is completely irrelevant over the lifespan of real-world projects — e.g. “schemaless” database projects where you amortize that initial savings, at a very high rate of interest, over the entire project, or the development solution chosen not because it offers the best results or long-term success, but because the developer had a good understanding of it within the first ten minutes. None of it holds much predictive power regarding real projects in real scenarios.

2 Checked HN while typing this and one of the top posts was advocating rewriting some standard library bit in Rust. Rust, like Haskell, is one of those solutions that is proposed as a sure-win easy solution to almost everything, resolving all impediments to development, curing all security ills, inflating productivity and awesomeness…followed by crickets. The number of actual solutions built with them puts them on endangered lists, but in inflated rhetoric they’re a cure all for everything. And once again, someone makes some cheap commentary about fixing everything, promises some future resolution, and if it follows the pattern of all that came before, positively nothing will come of it.

Rust is hardly alone in being a silver bullet solution. Go, which I enjoy and have posted about on here multiple times, was the topic of a key-value store implementation a few days ago. The performance was truly terrible, the implementation questionable, but because it was in Go it got that “ooooh” hype and push to the top.

And just to be clear, Rust is a very exciting, elegant language that seeks to blend the best aspects of C (performance, predictable memory allocation and de-allocation, with scope lifetimes and reference counting — I have always been a critic of garbage collection as it’s essentially, in my mind, a hack worst-case solution to the problem of memory management; garbage collection should be something that exists in debug mode, with an orphaned object indicating some sort of failure of the platform) with the best aspects of higher-level functional and object-oriented languages. I have done the tutorials and have a middling understanding of the language, and it looks great, so please don’t take this as a criticism of it. However it is caught in that void where many of the people picking it up seem to primarily want some platform to advocate. Many of them seem to simply advocate it as an alternative to Go, and then go back to their day job using Java or whatever. It is one of those exciting languages that needs to make it over the hump of practicality.