It’s obvious from various posts over the years that I’m a big fan of nginx.
I’m a big fan of hosting apps directly with it. I’m a fan of it sitting in front of instances of IIS/WCF apps, Go apps, node.js apps, and every other permutation of technologies.
I’m a fan of it in front of hodge-podge mixes of all of the above, running in a scaled-out cluster for load balancing and redundancy.
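As a rough sketch of that sort of arrangement (the upstream name, addresses, and ports here are hypothetical, purely for illustration), nginx can pool heterogeneous backends behind a single front door:

```nginx
# Hypothetical mixed-technology backends pooled for load balancing
# and redundancy; nginx does not care what serves each port.
upstream app_cluster {
    server 10.0.0.10:8080;   # e.g. an IIS/WCF instance
    server 10.0.0.11:3000;   # e.g. a node.js app
    server 10.0.0.12:9000;   # e.g. a Go service
}

server {
    listen 80;
    server_name example.com;

    location / {
        proxy_pass http://app_cluster;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```

If one backend goes down, nginx routes around it; swapping a technology out later is invisible to clients.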
Adding such a vertical configuration point in your application stack offers enormous flexibility that you will come to appreciate, especially when it comes essentially for free: nginx has negligible overhead, has proven itself to be very secure, and in many deployments can significantly improve the overall performance of your app.
For instance, using a reverse proxy with proxy-level caching is a brilliant way of adding caching to your app without having to build explicit caching into your app. It simply works, efficiently and effectively. You can use regular-expression (PCRE) URL rewriting, geo-IP based access controls, pre-compressed content, and so on with utter ease. It also makes it simple to offload content compression and TLS encryption from your app servers.
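A minimal sketch of that proxy-level caching, assuming a hypothetical `app_cluster` upstream and placeholder cache sizes you would tune for a real deployment:

```nginx
# Cache zone on disk; names and sizes here are illustrative placeholders.
proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=app_cache:10m
                 max_size=1g inactive=60m;

server {
    listen 80;
    server_name example.com;

    location / {
        proxy_pass http://app_cluster;         # assumed upstream
        proxy_cache app_cache;
        proxy_cache_valid 200 302 10m;          # cache good responses briefly
        proxy_cache_valid 404 1m;
        # Expose HIT/MISS/EXPIRED so you can observe cache behavior.
        add_header X-Cache-Status $upstream_cache_status;
    }
}
```

The app itself needs no changes; nginx serves repeat requests straight from the cache and only bothers the backend on misses or expiry.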
And of course, nginx generally supports the evolution of protocols long before alternatives, much less embedded libraries, do. For instance, mainline 1.5.10 just added support for SPDY/3.1, eliciting this post.
I’m a big fan of SPDY (and of course its standardized derivative, HTTP 2.0), primarily where some of the consumers are hitting the site over medium- to high-latency connections (such as from the Far East, many mobile scenarios, or even just from California to New York). The protocol is now fully supported in the major browsers without toggling flags or resorting to beta variants.
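Enabling it in nginx is a one-word change on the `listen` directive, assuming a build that includes the SPDY module (`--with-http_spdy_module`); the certificate paths below are placeholders, and note that SPDY requires TLS:

```nginx
# Requires an nginx build with --with-http_spdy_module (1.5.10+ for SPDY/3.1).
server {
    listen 443 ssl spdy;
    server_name example.com;

    # Placeholder certificate paths.
    ssl_certificate     /etc/nginx/ssl/example.com.crt;
    ssl_certificate_key /etc/nginx/ssl/example.com.key;

    spdy_headers_comp 1;   # light compression of response headers

    location / {
        proxy_pass http://app_cluster;   # assumed upstream
    }
}
```

Browsers that negotiate SPDY via TLS get multiplexed requests over a single connection; everything else falls back to plain HTTPS on the same port.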
“But I saw a critique where a blogger did a test of the top n web sites and it showed negligible benefit” one might say, at least if they’ve been misinformed.
That analysis, while well-meaning, had some unfortunate flaws.
Firstly, SPDY as it exists today demands TLS. The most common negative comparison pits SPDY against unencrypted HTTP, but SPDY targets where we should be, not where too many sites unfortunately are, and quite rightly demands the extra cost of a secure connection. If you are deploying anything that could be called an app, and you ever handle user-provided data or authentication, you simply must use TLS: it is no longer optional, and you cannot be professional and do otherwise. It follows that SPDY is only rationally comparable to HTTPS.
The second, even more egregious flaw in the most common critique is that the test was run through an intermediary non-caching proxy against remote internet sites, and only for the primary domain: the prevailing performance trick of sharding content across multiple domains to circumvent the browser's per-host connection limit remained in place, entirely outside of SPDY. This is akin to testing the speed of a sports car by requiring it to drive behind a transport truck, and then demanding that it also pace itself with a dozen Yugos.
That is not a rational test of SPDY, and while the author stated that they would do a follow-up based upon this obvious hypothesis and criticism, no such follow-up has ever appeared in the year and a half since. Yet that piece strangely still gets referenced and used as an anti-advocacy point, and sits right at the top of SPDY benchmark searches.
There are plenty of very positive reports about SPDY out there (including the very first paper on it), but they don’t get the same play.
My own deployment experience has been that SPDY is a significant net win on both metrics and user experience, but of course that platform is proprietary so I can’t give specific details. Perhaps I’ll set up a real-world scenario that demonstrates the value of SPDY.