Ten Reasons You Should Still Use Nginx

A mostly accurate retelling of a discussion I had with a peer yesterday:

Them – “Any new projects going on?”

Me – “Well aside from {redacted} and {also redacted…good try!}, still that big project I talked about before.”

Them – “Awesome! What technology stack for this one?”

Me – “{database} as the database, nginx in front of {platform} for the app layer, the presentation built on {whatever is cool}.”

Them – “Nice…but why nginx? Doesn’t {framework} give you a web server? Aren’t they redundant?”

I’ve had discussions just like this many times before, for a variety of projects. Nginx in front of node.js. Nginx in front of a hydra of Windows-hosted IIS and Linux-hosted PHP, all magically meshed into one coherent front-end.

The assumption is usually that nginx is rendered unnecessary if you already have another technology serving up some fresh HTTP.

Of course that isn’t true, and Nginx will remain the doorman for my projects for the foreseeable future, even where it sits in front of other HTTP servers that may themselves sit in front of other HTTP servers.

Theoretically you could sub in Apache and get many of the same benefits, though nginx’s event-driven architecture, which handles large numbers of concurrent connections with modest memory, makes it my choice.


The Top 10 Reasons You Should Still Use Nginx

(even in front of other HTTP servers)
  1. SSL – Restrict access to your SSL private key to the greatest extent possible. If you don’t have a hardware SSL-offload appliance, let the gatekeeper nginx instance terminate TLS securely and efficiently, so the key never touches your application processes (see the first sketch after this list).
  2. HTTP/2 – It’s unlikely that your in-app HTTP library supports HTTP/2 or will anytime soon, much less that it will track changes to the spec. For the HTTP/2 detractors out there: it primarily adds value for high-latency users, and it is of particular value to small web services that can’t economically deploy GeoDNS-targeted servers around the world. Enabling it at the gatekeeper is trivial (also covered in the first sketch below).
  3. Static Content Serving – Don’t clutter your app layer with code or artifacts for static content; limit that world to dynamic logic. In the long term you’ll probably move to a CDN service, but projects seldom start there. Make use of precompression to instantly serve compressed resources to clients that support it (which is effectively all clients) with minimal overhead (second sketch below).
  4. Making Dynamic Content Static – nginx can be configured to cache dynamic proxied content according to its expiration headers, allowing you to cache efficiently at the gatekeeper without adding error-prone caching code to your application layer (third sketch below). Again, keep your application layer as simple as absolutely possible, leaving long-solved problems to well-proven solutions.
  5. Abstract Your Implementation – A single exposed host can sit in front of any number of underlying technology platforms (on one machine, or on many), the published structure being nothing more than a few simple configuration points: /service/users may point to a PHP implementation, while /service/feed utilizes a Go host and /service/api/ calls out to node.js (fourth sketch below). With nginx in front, everything becomes flexible and abstract, your implementation amorphous and unconstrained. It also lubricates updates: spin up new back-end servers on different ports, update and reload the config, take down the old servers, and the upgrade is entirely transparent to end users, all driven by a simple deployment script.
  6. Load Balancing – nginx has built-in support for load balancing. Beyond distributing load (at any layer), this also lets you remove failed back-end services from any node in the structure (fifth sketch below). The flexibility and power are enormous.
  7. Rate Limiting – This is, again, one of those oft-ignored functions that catches web apps out when an avalanche of requests arrives and there’s no way to stop it short of writing a lot of emergency code. The culprit is more often a malfunctioning client than a malicious actor, and it’s liberating to have a bouncer that can rate-limit detrimental callers with ease, changeable at a moment’s notice (sixth sketch below).
  8. Geo Restrictions – You’ll likely have high-privilege management calls that you know will only legitimately be made from specific geographic regions. While it provides negligible actual security, adding geo-IP restrictions at the gatekeeper eliminates the enormous number of brute-force attempts you’ll inevitably see from Ukraine, China, and so on (seventh sketch below). By filtering out that noise, you not only eliminate a lot of attack-processing overhead, you’re left with logs clean enough that you can actually detect and extract targeted attacks.
  9. Authentication Restrictions – Covering the same ground as the prior points, having this gatekeeper allows you to implement authentication on any resource at a moment’s notice, whether in response to a newly discovered exploit in a particular part of your code or for an internal beta period (eighth sketch below). It’s simply flexibility that you may not want to build into your application.
  10. Battle Link Rot – To give a personal experience on this: some time back I maintained a fairly popular blog on a completely different domain, with a different directory structure and technology platform (a custom blog engine built on .NET/Windows, where this site is WordPress on Linux). It became less important to me as I focused on hidden-from-the-world proprietary things, so I let it sit unloved, yet it somehow retained thousands of RSS subscribers (most of whom had probably forgotten I was in their subscription list) and frequent search-engine referrals. Later the technology felt like baggage, and I wanted the domain for something else, so I cast it off and moved to something entirely different, demolishing what existed in the process.

    I became a major contributor to link rot. Links throughout the web simply stopped working, countless users landing on 404s. I did that deconstruction over a year ago, yet the logs still showed an endless procession of users trying to reach no-longer-available content. Something had to be done. Think of the poor users!

    Okay, add that I regained a professional interest in having the internet’s ear, so to speak: a venue with enough exposure that a good idea, executed well, might get the initial kick that allows it to succeed. I wanted to take advantage of all that link love I had earned, and all it took was a couple of basic nginx rewrite rules, turning the derelict URLs into permanent redirects to their new homes: upper-case to lower-case, a new directory structure, hyphens instead of underscores, and so on (final sketch below).

    And the traffic returned almost immediately: Google transferred the old link rank to the new domain, and I’m seeing search queries come through again. All because nginx provided the flexibility.

    The same happens with real apps as teams evolve their API structure. Again, and this is a recurring theme, you don’t want to bake this into your application logic: remove everything extraneous that a system like nginx can provide.
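
To make these concrete, here are some minimal configuration sketches. Every domain, path, port, and address in them is a placeholder rather than a prescription, and they assume a reasonably stock nginx build unless noted. First, TLS termination with HTTP/2 at the gatekeeper (items 1 and 2): the private key stays with nginx, and the back-end conversation remains plain HTTP on the loopback interface.

    # Inside the http {} context; certificate paths and ports are placeholders.
    server {
        listen 443 ssl http2;        # terminate TLS, speak HTTP/2 to clients
        server_name example.com;

        ssl_certificate     /etc/nginx/ssl/example.com.crt;
        ssl_certificate_key /etc/nginx/ssl/example.com.key;   # readable by nginx alone
        ssl_protocols       TLSv1.2;

        location / {
            proxy_pass http://127.0.0.1:8080;    # the app process never sees the key
            proxy_set_header Host              $host;
            proxy_set_header X-Forwarded-For   $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto https;
        }
    }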
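
Second, static content (item 3). nginx serves files straight from disk and, if built with the gzip_static module (a compile-time option, not on by default), hands out a pre-compressed .gz sibling of a file rather than compressing per request. The directory layout is hypothetical.

    # Serve /static/ straight from disk, bypassing the app entirely.
    location /static/ {
        root /srv/myapp;     # placeholder: files live under /srv/myapp/static/
        gzip_static on;      # serve style.css.gz when present and the client accepts gzip
        expires 30d;         # static assets can be cached aggressively
        add_header Cache-Control "public";
    }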
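
Third, caching dynamic content (item 4). nginx honors the upstream’s Cache-Control and Expires headers by default; proxy_cache_valid below is only a fallback for responses that carry no expiry. The zone name and sizes are arbitrary.

    # In the http {} context: a shared-memory key zone plus an on-disk cache.
    proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=appcache:10m max_size=1g;

    server {
        listen 80;
        location /service/ {
            proxy_cache       appcache;
            proxy_cache_valid 200 5m;    # fallback TTL when upstream sends no expiry
            proxy_pass        http://127.0.0.1:8080;
        }
    }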
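
Fourth, routing one public structure to several hypothetical back ends (item 5). Each location can point at a different platform, and repointing is a config reload away.

    server {
        listen 80;
        server_name example.com;

        location /service/users/ { proxy_pass http://127.0.0.1:9000; }  # e.g. PHP behind a local server
        location /service/feed/  { proxy_pass http://127.0.0.1:9001; }  # e.g. a Go binary
        location /service/api/   { proxy_pass http://127.0.0.1:9002; }  # e.g. node.js
    }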
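
Fifth, load balancing with failure ejection (item 6): a back end that fails three times within thirty seconds is pulled from rotation for thirty seconds. The pool addresses are placeholders.

    # In the http {} context.
    upstream app_pool {
        server 10.0.0.11:8080 max_fails=3 fail_timeout=30s;
        server 10.0.0.12:8080 max_fails=3 fail_timeout=30s;
    }

    server {
        listen 80;
        location / {
            proxy_pass http://app_pool;
        }
    }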
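
Sixth, rate limiting (item 7): each client IP gets ten requests per second against /service/, with a small burst allowance before nginx starts rejecting. The numbers are illustrative, and tunable at a moment’s notice.

    # In the http {} context.
    limit_req_zone $binary_remote_addr zone=perip:10m rate=10r/s;

    server {
        listen 80;
        location /service/ {
            limit_req  zone=perip burst=20 nodelay;   # excess requests are rejected (503 by default)
            proxy_pass http://127.0.0.1:8080;
        }
    }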
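
Seventh, geo-restricting a management path (item 8). This sketch assumes nginx was built with the ngx_http_geoip_module and that a MaxMind country database is available; the allowed-country list is hypothetical.

    # In the http {} context; the database path is a placeholder.
    geoip_country /etc/nginx/GeoIP.dat;

    server {
        listen 80;
        location /admin/ {
            if ($geoip_country_code !~ ^(US|CA)$) {   # placeholder allow-list
                return 403;                           # everyone else gets Forbidden
            }
            proxy_pass http://127.0.0.1:8080;
        }
    }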
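
Eighth, dropping authentication in front of a resource at a moment’s notice (item 9). The password file is a placeholder, created with any htpasswd-compatible tool.

    location /beta/ {
        auth_basic           "Restricted";            # prompt shown to the user
        auth_basic_user_file /etc/nginx/.htpasswd;    # placeholder path
        proxy_pass           http://127.0.0.1:8080;
    }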
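
Finally, battling link rot (item 10) with permanent redirects. The old and new structures here are hypothetical, and note that true case-folding of URLs needs something like the embedded Perl module, which I’ve left out of this sketch.

    # In the server {} context of the new site.
    # One-to-one redirects for high-value legacy URLs...
    rewrite ^/Archive/Some_Old_Post\.aspx$ /blog/some-old-post/ permanent;

    # ...and a pattern rule relocating a whole retired directory tree.
    rewrite ^/Archive/(.*)$ /blog/$1 permanent;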

Such is why I have layered, and will continue to layer, nginx in front of other solutions. It adds enormous flexibility and deployment opportunity, and it would be a serious mistake to eschew it, or to litter your code with reinventions of the wheel when such a good wheel already exists.