3D XPoint is Pretty Cool

Five years ago I wrote a post arguing that SSDs/flash storage were a necessary ingredient for most modern build-outs, and that holdouts were wasting time and effort by not adopting them in their stack. While it is profoundly obvious now, at the time there was a surprising amount of resistance from many in the industry who were accustomed to their racks of spinning rust, RAID levels, and so on. Many had banked their careers on deep knowledge of optimizing against extremely slow storage systems (a considerable factor in the enthusiasm for NoSQL), so FUD ruled the day.

Racks of spinning rust still have a crucial role in our infrastructure, though often treated as almost nearline storage: stuff you seldom touch, and when you do, the performance falls far outside normal expectations. But many of our online systems are worlds improved, with latency in microseconds instead of milliseconds courtesy of flash. It changed the entire industry.

In a related piece I noted that “Optimizing against slow seek times is an activity that is quickly going to be a negative return activity.” This turned out to be starkly true, and many efforts that were undertaken to engineer around glacially slow magnetic and EBS IOPS ended up being worse than useless.

We’re coming upon a similar change again, and it’s something that every developer / architect should be considering because it’s about to be real in a very big way.

3D XPoint, co-developed by Micron and Intel (the Intel site has some great infographics and explanatory videos), is a close-to-RAM-speed, flash-density, non-volatile storage/memory technology (with significantly higher write endurance than flash, though marketing claims vary from 3x to as high as 1000x), and it’s just about to start hitting the market. Initially it’s going to be seen in very high performance, non-volatile caches atop slower storage: for example, a 2TB TLC NVMe drive with 32GB of 3D XPoint non-volatile cache (better devices currently use SLC flash for the same purpose), offering extraordinary performance, both in throughput and in IOPS / latency, while still offering large capacities.

Over a slightly longer period it will be seen in DRAM-style, byte-addressable form (circumventing the overhead of even NVMe). Not literally as main memory, which still outclasses it in pure performance, but as an engineered storage option that our databases and solutions directly and knowingly leverage in the technology stack.

2017 will be interesting.

Micro-benchmarks as the Canary in the Coal Mine

I frequent a number of programming social news style sites as a morning ritual: You don’t have to chase every trend, but being aware of happenings in the industry, learning from other people’s discoveries and adventures, is a useful exercise.

A recurring source of content is micro-benchmarks of some easily understood sliver of our problem space, the canonical example being trivial web implementations on one’s platform of choice.

A Hello World for HTTP.

package main

import (
   "fmt"
   "net/http"
)

func handler(w http.ResponseWriter, r *http.Request) {
   fmt.Fprintf(w, "Hello world!")
}

func main() {
   http.HandleFunc("/", handler)
   http.ListenAndServe(":8080", nil)
}

Incontestable proof of the universal superiority of whatever language is being pushed. Massive numbers of meaningless requests served by a single virtual server.

As an aside that should probably be a footnote, I still strongly recommend that static and cached content be served from a dedicated platform like nginx (using lightweight unix sockets to the back end if it’s on the same machine), itself very likely fronted by a CDN. This sort of trivial serving should never live in your own code, nor should it be a primary focus of optimization.
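To make that aside concrete, here is a minimal sketch of a Go back end listening on a unix socket rather than a TCP port, which nginx would then proxy to. The socket path and permissions are assumptions purely for illustration; the corresponding nginx location would proxy_pass to unix:/tmp/app.sock.

package main

import (
    "fmt"
    "log"
    "net"
    "net/http"
    "os"
)

func handler(w http.ResponseWriter, r *http.Request) {
    fmt.Fprintf(w, "Hello from the back end!")
}

func main() {
    // Hypothetical socket path; nginx would proxy_pass to unix:/tmp/app.sock.
    const socketPath = "/tmp/app.sock"

    // Remove any stale socket from a previous run, then listen on the unix socket.
    os.Remove(socketPath)
    listener, err := net.Listen("unix", socketPath)
    if err != nil {
        log.Fatal(err)
    }
    defer listener.Close()

    // Loosen permissions so the nginx worker user can connect to the socket.
    if err := os.Chmod(socketPath, 0666); err != nil {
        log.Fatal(err)
    }

    http.HandleFunc("/", handler)
    log.Fatal(http.Serve(listener, nil))
}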

Occasionally the discussion moves to a slightly higher level and there are impassioned debates about HTTP routers (differentiating URLs, pulling out parameters, and so on, then calling the relevant service logic), everyone optimizing the edges. There are thousands of HTTP routers across virtually every platform, most distinguished by tiny performance differences.

People once cut their teeth by writing their own compiler or OS, but now everyone seems to start by writing an HTTP router. Focus then moves elsewhere.
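As a sketch of what that rite of passage looks like (all type and function names here are hypothetical, not from any particular library), the essential job is just matching the path against registered patterns, pulling out the :parameter segments, and dispatching to the relevant handler:

package main

import (
    "fmt"
    "net/http"
    "strings"
)

// route pairs a pattern like "/users/:id" with its handler.
type route struct {
    pattern string
    handler func(http.ResponseWriter, *http.Request, map[string]string)
}

type router struct {
    routes []route
}

func (rt *router) Handle(pattern string, h func(http.ResponseWriter, *http.Request, map[string]string)) {
    rt.routes = append(rt.routes, route{pattern, h})
}

// ServeHTTP walks the registered patterns, extracting ":name" segments as parameters.
func (rt *router) ServeHTTP(w http.ResponseWriter, r *http.Request) {
    reqParts := strings.Split(strings.Trim(r.URL.Path, "/"), "/")
    for _, rte := range rt.routes {
        patParts := strings.Split(strings.Trim(rte.pattern, "/"), "/")
        if len(patParts) != len(reqParts) {
            continue
        }
        params := map[string]string{}
        matched := true
        for i, p := range patParts {
            if strings.HasPrefix(p, ":") {
                params[p[1:]] = reqParts[i]
            } else if p != reqParts[i] {
                matched = false
                break
            }
        }
        if matched {
            rte.handler(w, r, params)
            return
        }
    }
    http.NotFound(w, r)
}

func main() {
    rt := &router{}
    rt.Handle("/users/:id", func(w http.ResponseWriter, r *http.Request, p map[string]string) {
        fmt.Fprintf(w, "user %s", p["id"])
    })
    http.ListenAndServe(":8080", rt)
}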

In a recent discussion of one such micro-benchmark (used to promote a pre-alpha platform), a user said, in regards to Go (one of the lesser alternatives in the comparison)-

“it’s just that the std lib is coded with total disregard for performance concerns, the http server is slow, regex implementation is a joke”

Total disregard. A joke. Slow.

On a decently capable server, that critiqued Go implementation, if you’re testing it in isolation and don’t care about doing anything actually useful, could serve more requests than seen by the vast majority of sites on these fair tubes of ours. With a magnitude or two to spare.

100s of thousands of requests per second is simply enormous. It wasn’t that long ago that we were amazed by 100 requests per second for completely static content cached in memory. Just a few short years ago most frameworks tapped out at barely double-digit requests per second (’twas the era of synchronous IO and blocking a thread for every request).

As a fun fact, a recent implementation I spearheaded attained four million fully robust web service financial transactions per second. This was on a seriously high-end server, and used a wide range of optimizations such as a zero-copy network interface and secure memory sharing between service layers, and ultimately was just grossly overbuilt unless conquering new worlds, but it helped a sales pitch.

Things improve. Standards and expectations improve. That really was a poor state of affairs, and not only were users given a slow, poor experience, it often required farms of servers for even modest traffic needs.

Choosing a high performance foundation is good. The common notion that you can just fix the poor performance parts after the fact seldom holds true.

Nonetheless, the whole venture made me curious what sort of correlation trivial micro-benchmarks hold to actual real-world needs. Clearly printing a string to a TCP connection is an absolutely minuscule part of any real-world solution, and once you’ve layered in authentication and authorization and models and abstractions and back-end microservices and ORMs and databases, it becomes a rounding error.

But does it indicate choices behind the scenes, or a fanatical pursuit of performance, that pays off elsewhere?

It’s tough to gauge because there is no universal web platform benchmark. There is no TPC for web applications.

The best we have, really, are the TechEmpower benchmarks. These are a set of relatively simple benchmarks that vary from absurdly trivial to mostly trivial-

  • Return a simple string (plaintext)
  • Serialize an object (containing a single string) into a JSON string and return it (json)
  • Query a value from a database, and serialize it (an id and a string) into a JSON string and return it (single query)
  • Query multiple values from a database and serialize them (multiple queries)
  • Query values from a database, add an additional value, and serialize them (fortunes)
  • Load rows into objects, update the objects, save the changes back to the database, serialize to json (data updates)

It is hardly a real-world implementation of the stacks of dependencies and efficiency barriers in an application, but some of the tests are worlds better than the trivial micro-benchmarks that dot the land. It also gives developers a visible performance reward, just as SunSpider led to enormous JavaScript performance improvements.

So here’s the performance profile of a variety of frameworks/platforms against the postgres db on their physical test platform, each clustered in a sequence of plaintext (blue), JSON (red), Fortune (yellow), Single Query (green), and Multiple Query (brown) results. The vertical axis has been capped at 1,000,000 requests per second to preserve detail, and only frameworks having results for all of the categories are included.

When I originally decided to author this piece, my intention was to show that you shouldn’t trust micro-benchmarks because they seldom correlate with the more significant tasks you’ll face in real life. While I’ve long argued that such optimizations often indicate a team that cares about performance holistically, in the web world products that shine at very specific things have often proven very weak in more realistic use.

But in this case my core assumption was only partly right. The correlation between trivial micro-benchmark speed (simply returning a string) and the more significant tasks, where I was sure any framework advantage would be drowned out by the underlying processing (when you’re doing queries at a rate of 1000 per second, an overhead of 0.000001s is hardly relevant), is much higher than I expected.

  • 0.75 – Correlation between JSON and plaintext performance
  • 0.58 – Correlation between Fortune and plaintext performance
  • 0.646 – Correlation between Single query and plaintext performance
  • 0.21371 – Correlation between Multiple query and plaintext performance

As more happens in the background, outside of the control of the framework, invariably the raw performance advantage is lost, but my core assumption was that there would be a much smaller correlation.
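The post doesn’t say which correlation measure was used; assuming the conventional Pearson coefficient, here is a minimal sketch of the calculation (the sample data is made up purely for illustration):

package main

import (
    "fmt"
    "math"
)

// pearson returns the Pearson correlation coefficient of two equal-length series.
func pearson(x, y []float64) float64 {
    n := float64(len(x))
    var sx, sy, sxx, syy, sxy float64
    for i := range x {
        sx += x[i]
        sy += y[i]
        sxx += x[i] * x[i]
        syy += y[i] * y[i]
        sxy += x[i] * y[i]
    }
    den := math.Sqrt((n*sxx - sx*sx) * (n*syy - sy*sy))
    if den == 0 {
        return 0
    }
    return (n*sxy - sx*sy) / den
}

func main() {
    // Illustrative only: plaintext and JSON requests/second for a few frameworks.
    plaintext := []float64{950000, 400000, 120000, 60000}
    json := []float64{320000, 180000, 90000, 30000}
    fmt.Printf("correlation: %.2f\n", pearson(plaintext, json))
}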

So in the end this is simply a “well, that’s interesting” post. It certainly isn’t a recommendation for one framework or another (developer aptitude and suitability to the task reign supreme), but I found it interesting.

 

Everything You Read About Databases Is Obsolete

Six and a half years ago I wrote (in a piece about NoSQL) –

Optimizing against slow seek times is an activity that is quickly going to be a negative return activity.

This time has long since passed, yet much of the dogma of the industry remains the same as it was back when our storage tier was composed of 100 IO/second magnetic drives. Many of our solutions still have query engines absolutely crippled by this assumption (including pgsql; mssql has improved, but for years using it on fast storage was an exercise in waste as it endlessly made the wrong assumptions about conserving IOs).

There are now TB+ consumer storage solutions with 300,000 IO/second (3000x the classic magnetic drive, while also offering sequential rates above 3.5GB/s…yes, big G) for under $600. There are enterprise solutions serving 10,000,000 IOPS.

That’s if your solution even needs to touch the storage tier. Memory is so inexpensive now, even on shared hosting like AWS, that all but the largest databases sit resident in memory much of the time. My smartphone has 3GB, and could competently host the hot area of 99.5%+ of operational databases in memory.

For self-hosted hardware, TBs of memory is now economical for a small business, while tens of GBs is operationally inexpensive on shared hosting.

I totally made up that 99.5% stat, but it’s amazing how relatively tiny the overwhelming bulk of databases I encounter in my professional life are, yet how much so many fret about them.

Obviously writes still have to write through, yet when you drop the self-defeating pre-optimizations aimed at minimizing read IO (denormalization, combined storage such as document-oriented designs, materializations and trigger-rendered precomputations, excessive indexes, and so on), in most cases writes narrow to a tiny trickle1.

When writes are reduced, not only does practical write performance increase (you’re writing much less per transaction, so the beneficial yield increases), but the effectiveness of memory and tier caches increases as the hot area shrinks, and very high performance storage options become attainable (it’s a lot more economical to buy a 60GB high performance, reliable and redundant storage tier than a 6TB one; as data volumes scale up, performance is often sacrificed for raw capacity).

Backups shrink and systems become much more manageable. It’s easy to stream a replica across a low grade connection when the churn of change is minimized. It’s easy to keep backups validated and up to date when they’re GBs instead of TBs.

Normalize. Don’t be afraid of seeks. Avoid the common pre-optimizations that are, on the whole, destructive to virtually every dimension of a solution on modern hardware (destroying write performance, long-term read performance, economics, maintainability, reliability). Validate assumptions.

Because almost everything written about databases, and therefore much of what you read, is perilously outdated. This post was inspired by seeing another database best-practices guideline make the rounds, most of its suggestions circling the very dated notion that every effort should be made to reduce IOs, the net result being an obsolete, overwrought solution out of the gate.

1 – One of the most trying aspects of writing technical blog entries is people who counter with edge cases to justify their positions: Yes, Walmart has a lot of writes. So do Amazon, CERN and Google. The NY Taxi Commission logs loads of data, being in a metro area of tens of millions.

There are many extremely large databases with very specialized needs. They don’t legitimize the choices you make, and they shouldn’t drive your technical needs.

Quick Update. Upcoming Post.

I haven’t posted much for a bit due to some heavy commitments with a fantastic client, however I’ve pulled an old post out of the drafts — essentially the long-delayed part III of the high performance SQL series from eight years ago — and will finish it up imminently.

It is tentatively titled “The Absolute Minimum Every Developer Should Know About Indexes”, and includes a demonstration database that you can run in PostgreSQL.

On other matters, occasionally I look at the sorts of traffic that bring people here, and I’m always amazed at some of the Google search results: not only that Google sends so many people here, but how incredibly obsolete technologies hang on.

A frequent search hit, for instance, is regarding the internet provider for Visual SourceSafe. I’d authored a piece on it almost a decade ago, and though it was mangled in the moves between various blog engines, somehow it still yields some search juice.

To which I would say: People still use SourceSafe? With all of the incredibly powerful options freely and easily available, they’re still working with that derelict, obsolete, broken source control provider?

Though of course this isn’t entirely surprising. Many teams and shops have an inability to decouple from past decisions, and those things hang around in perpetuity. What usually happens is that a team or company or product gets loaded with more and more baggage, with zero effort or expense paid to continuously refining away accumulating technical debt, until eventually the solution is so overwrought and broken that the entire thing is abandoned wholesale. Not as some voluntary choice, but because the detritus slows it down to the point that it hits a breaking point and involuntary actions are taken: the internal team and product replaced with a greenfield product from an outside consultancy, for instance. It is the ballistic arc of so many projects, where necessary choices and actions aren’t taken through the lifecycle, until the problem is so great that abandonment is the only recourse.

Always be re-evaluating the stack that you use. This isn’t a call to blindly follow what is new, but honestly, if you’re still using products like Visual SourceSafe for anything, there is probably an issue.

Database Performance, LVM Snapshots, golang and sqlite3

Snapshots for Beginners

Snapshots are a very useful mechanism found in database products, disk management layers, and virtualization tools. Instead of making a complete copy of the source, snapshot functionality allows you to create the copy virtually instantly.

Snapshots can be used to perform backups without locks or transaction contention, or to preserve volume contents at a particular point in time, for instance because you’re about to make some risky changes that you want the ability to quickly revert, or because you need to access the data exactly as it was at that moment.

The speed of snapshots is courtesy of trickery: Snapshot technology generally doesn’t copy anything at the outset, but instead imposes overhead atop all I/O operations going forward.

VirtualBox snapshot volumes, for instance, contain the changed disk contents from the point of the snapshot forward (contrary to what one would intuitively think), leaving the base image unchanged. When you delete a snapshot it merges all of those changes down, whereas if you roll back the snapshot it simply deletes the change data and reverts to the root. It adds a layer of indirection to disk access, as all activities must first determine whether they apply to the changeset or the root set.

LVM (Logical Volume Manager, a Linux volume layer that adds considerable flexibility to storage on the platform) snapshots, the topic of this entry, use a very different but more common approach. The snapshot is effectively a sparse copy of the original data, acting simply as an indirection to the root volume. As you change data in the root volume, LVM performs a copy-on-write, copying the original contents of the changed disk sectors to the snapshot volume before overwriting them.

If you had a 40GB root volume and then created a snapshot, at the outset your root volume will of course take 40GB of space, while the snapshot takes 0GB. If you then changed all 40GB of data, the root volume would still take 40GB, while the snapshot would also take 40GB, every block of the original data having been copied over before completing each write operation.

Snapshot Performance Concerns

There is an obvious performance concern when using snapshots. Aside from the additional indirection on reads, every write operation against the root now requires an analysis of which snapshots it impacts, and then a copy of the original data to each affected snapshot.

The overhead can be onerous, and there are many dire warnings out there, driven by artificial benchmarks. While to-the-metal, reductionist benchmarks are important, it’s often useful to assess how this impacts the whole of operations at a higher level: the impact of seemingly enormous performance changes is often very different from expectations.

Benchmarking Snapshots

So with a couple of Amazon EC2 instance types (one, a c3.2xlarge running Ubuntu 14.04 using HVM, with dual local SSDs; the other an m1.xlarge running Ubuntu 14.04 with PV, with local magnetic drives), and a simple benchmarking tool I built with Go 1.3 and sqlite3 (using the excellent sqlite3 driver, itself a great demonstration of Go’s most powerful feature being its least sexy), I set about determining the impact of snapshots.

800 LVM Snapshots? Why Would Anyone Ever…

Why would this matter? As a bit of background, much of my professional life is spent in the financial industry, building very high performance, broad functionality solutions for leading financial firms.

One common need in the industry is as-it-was-at-a-particular-point-in-time data browsing: To be able to re-generate reports with varying parameters and investigate the source data that yielded those outputs. In effect, a version control system for data.

That data is constantly changing as new data comes in, and building an entire data history in the database can yield enormous volumes of data, and creates a database schema that is incredibly difficult to query or utilize with any level of performance.

When it’s at the system level, such as with Oracle’s Flashback Query, it can yield a much more tenable solution, but then much of the implementation is a black box, and you don’t have the ability to flag exactly when such an archive should be built — all data churn becomes a part of the history. As if you were committing your source after every keystroke.

I needed the ability to maintain historical versions in a manner that was immediately usable, in a high performance fashion, but didn’t redundantly copy 100% of the data for each archive, which would quickly become unmanageable.

To explain, in one solution I engineered, the generated reporting set was exported to sqlite3, for a wide variety of interesting and compelling reasons. That data store is periodically incrementally refreshed with new data as the various participants certify it and gold plate it for dissemination.

While I seriously considered LVM as one component of a solution (a part of a more complex historical platform), I ended up adding block level versioning directly into sqlite3 (it is a fantastic foundation for derived solutions), but nonetheless found LVM to be a fascinating solution, and wanted to dig deeper into the performance considerations of the technology.

To address the natural question about the utility of this benchmark: if you were really starting a small database from scratch, the easiest solution would be to simply make a 100% copy of the database (presuming you can transactionally pause and fsync), quickly and easily, for every historical copy. Imagine, however, that you’ve already amassed a very large volume of data, and are dealing with the daily change of that data: where the daily delta is very small against a large volume of data.

The Benchmark

The benchmark creates a table with the following definition and indexes-


create table valuation (id integer not null primary key, 
    investment_id integer not null, 
    period_id integer not null, 
    value real not null);
create index valuation_investment_idx ON 
    valuation(investment_id, period_id);
create index valuation_period_idx ON 
    valuation(period_id, investment_id);

It then iterates 95 times, adding a new “investment” and then individually adding 10,000 period records for that investment (26 years of daily valuations), the value varied randomly between records. On the second and subsequent iterations it also updates the entire period history of the prior investment, as if updated details had come in. All of these changes occur within a single transaction per investment iteration (meaning 95 transactions in all).

Running this benchmark without the indexes, for those curious, became impossibly slow almost immediately due to the update stage needing to endlessly table scan.
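The original benchmarking tool isn’t reproduced here, but a minimal sketch of the loop described above, using the mattn/go-sqlite3 driver (presumably the one referenced earlier) against the schema shown above, might look like the following. The database file name, id assignment and random values are illustrative assumptions.

package main

import (
    "database/sql"
    "log"
    "math/rand"

    _ "github.com/mattn/go-sqlite3" // registers the "sqlite3" database/sql driver
)

const (
    investments = 95
    periods     = 10000 // roughly 26 years of daily valuations
)

func main() {
    // Assumes the valuation table and indexes from the schema above already exist in bench.db.
    db, err := sql.Open("sqlite3", "bench.db")
    if err != nil {
        log.Fatal(err)
    }
    defer db.Close()

    for inv := 1; inv <= investments; inv++ {
        tx, err := db.Begin()
        if err != nil {
            log.Fatal(err)
        }

        // Add the full valuation history for the new investment.
        ins, err := tx.Prepare("insert into valuation (id, investment_id, period_id, value) values (?, ?, ?, ?)")
        if err != nil {
            log.Fatal(err)
        }
        for p := 1; p <= periods; p++ {
            id := (inv-1)*periods + p
            if _, err := ins.Exec(id, inv, p, rand.Float64()*100); err != nil {
                log.Fatal(err)
            }
        }
        ins.Close()

        // From the second iteration on, restate the prior investment's entire history.
        if inv > 1 {
            upd, err := tx.Prepare("update valuation set value = ? where investment_id = ? and period_id = ?")
            if err != nil {
                log.Fatal(err)
            }
            for p := 1; p <= periods; p++ {
                if _, err := upd.Exec(rand.Float64()*100, inv-1, p); err != nil {
                    log.Fatal(err)
                }
            }
            upd.Close()
        }

        // One transaction per investment iteration, 95 in all.
        if err := tx.Commit(); err != nil {
            log.Fatal(err)
        }
    }
}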

For the snapshot-enabled tests it creates a snapshot of the root LVM volume between investment iterations, meaning that at the end of the run there are 95 snapshots, each containing the data as it was at the moment of creation (there is a snapshot where only the first investment exists; then a snapshot with the first and second, the details of the first changed; then the third, second and first; and so on), each of those databases immediately usable, providing a glimpse of historic data.
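That per-iteration snapshot step can be done by shelling out to LVM between transactions; a sketch follows, run with root privileges. The volume group, logical volume name and the 1GB changeset reservation are assumptions, not the actual test configuration.

package main

import (
    "fmt"
    "log"
    "os/exec"
)

// createSnapshot shells out to lvcreate to snapshot the volume hosting the
// database. The reserved size is the space available for copied-on-write
// blocks; if it fills, the snapshot is invalidated (see the caveats later on).
func createSnapshot(iteration int) error {
    name := fmt.Sprintf("valuation_snap_%03d", iteration)
    cmd := exec.Command("lvcreate",
        "--snapshot",
        "--size", "1G",
        "--name", name,
        "/dev/dbvg/dbvol") // assumed volume group / logical volume
    if out, err := cmd.CombinedOutput(); err != nil {
        return fmt.Errorf("lvcreate failed: %v: %s", err, out)
    }
    return nil
}

func main() {
    if err := createSnapshot(1); err != nil {
        log.Fatal(err)
    }
}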

The first test was run on an Amazon EC2 c3.2xlarge, an 8 vCPU / 15GB RAM instance chosen because it features two local 80GB SSDs. One SSD was mounted as an LVM volume, hosting the database and snapshots, while the other hosted the sqlite3 journal (which required changes to the sqlite3 source code, but yielded much better performance and dramatically reduced what would have been churn across snapshots; churn is the enemy on all systems).

[Chart SSD_SS: per-iteration insert performance on the SSD-backed c3.2xlarge, comparing no LVM (gray), LVM without snapshots (red), and LVM with per-iteration snapshots (blue)]

A few things are evident from this.

  • The more data you insert, the slower the process is. This isn’t terribly surprising, and is primarily a facet of index maintenance.
  • LVM itself imposes a small overhead. The gray represents straight to the SSD, with no snapshots, against a non LVM volume (LVM2 not even installed). Red is with LVM installed, still with no snapshots. Blue represents snapshots on every investment iteration.
  • Snapshots impose far less of an overhead than I expected, at least against SSDs. At iteration 95, each insert had the overhead of maintaining 94 snapshots as well, yet it only halved the transaction speed.

Flash memory really is the solution here, as the primary overhead of maintaining snapshots is the distinct IOs, moving to the various locations on the disk. I captured iostats at every iteration, and it really is fascinating watching the IOPS ramp up.

As one aside, at first consideration of the methodology you might logically think that the high degree of performance is explained by the limited overlap of the snapshots (e.g. on iteration 50, only investments 50 and 49 are altered, and thus shouldn’t impact prior snapshots that don’t even include this accumulated data). Unfortunately this isn’t true: when new and novel data is added to the database, those earlier snapshots still receive a copy of the “original” data, which happened to be empty space on the drive.

After 95 iterations, for instance, the first snapshot had 52GB of changes written to it, even though the original data captured on that snapshot was in the sub-MB range.

Before looking at how snapshots impact performance on magnetic drives, here’s a quick comparison of the benchmark run on the SSD-equipped machine, compared to running it on a magnetic-drive hosting m1.xlarge, again hosting the benchmark on one of the local drives, the journal on another.

[Chart SSD_v_Magnetic: the same benchmark on the SSD-backed c3.2xlarge versus the magnetic-drive m1.xlarge, no snapshots]

On pure IOPS and reductionist benchmarks, the flash drives hold an advantage of several orders of magnitude, but in this practical implementation they managed around a 5x speed advantage in most cases.

So how do snapshots impact the magnetic drive machine?

[Chart snapshot_magnetic: per-iteration performance on the magnetic-drive m1.xlarge, with and without per-iteration snapshots]

The situation is dramatically worse. Instead of a 2x slowdown, there is more in the range of a 5x slowdown, yielding a whole that is some 8.5x slower than the SSD equipped machine in the snapshot scenario.

Snapshot Performance – Not Too Bad

Under a somewhat realistic test scenario, on a modern machine (which now simply must mean flash-equipped), the performance overhead of LVM snapshots was actually significantly lower than I expected. It would be better still if LVM took file system usage into account and didn’t replicate unused file blocks to snapshots that simply don’t care about them.

Further I’ve tested up to 850 LVM2 snapshots, hitting a configuration ring-buffer limit that I could circumvent if I wanted to really push limits.

Having said that, LVM snapshots have a deadly, critical fault: when you create a snapshot, you have to allocate non-shared space to hold its changeset. As mentioned above, one test saw 52GB of changes pushed to a single snapshot. Not only is this allocation fixed, but if you under-estimate the capacity and it fills, LVM2 degrades in a very unpleasant manner.

This kills the machine.

Quite seriously, though, several such overfill conditions required hard resets and a rebuild of the LVM2 volume set.

In any case, under controlled conditions snapshots can be a tool in the belt. How much they impact your specific workload depends greatly upon the sorts of IOPS you generate and the overhead your workload incurs. But it’s worth looking into.