CMOS and Rolling Shutter Artifacts / Double Precision Compute

While I finish up a more interesting CS-style post (SIMD in financial calculations), here are a couple of news items worth sharing in the meantime.

In a prior entry on optical versus electronic image stabilization I noted rolling shutter artifacts (a distortion in which moving subjects, or the entire frame during camera motion, end up skewed) and their negative impact on electronic stabilization.

Rolling shutter is a significant source of distortion during video capture, especially under less-than-ideal conditions, and it often goes unnoticed until you stabilize the frames.

Sony announced a CMOS sensor with a data-buffering layer that gives it something approximating a global shutter (Canon previously announced something similar). While their press release focuses on moving subjects in otherwise stable shots, the same benefit dramatically reduces rolling-shutter skew when the camera itself moves during video capture. It also offers some enticing high-speed capture options.

Sony sensors are used in almost every mobile device now, so this will likely see rapid adoption across many vendors.

Another bit of compelling news for the week is the upcoming release of the GP100, a Pascal-based workstation GPU/compute device from Nvidia (they previously released the P100 server-based devices).

It offers double-precision performance upwards of 5 TFLOPS (for comparison, a 72-core AVX-512 Knights Landing Xeon Phi 7290 offers about 3.4 TFLOPS of DP throughput, while a traditional desktop processor sits one to two orders of magnitude lower even when fully leveraging SIMD such as AVX2). Workstation cards have traditionally crippled double-precision performance, so this update makes them far more useful for a wide array of workloads (notably finance and science, where the limited significant digits of single precision make it unworkable).
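To put those numbers in rough context, peak double-precision throughput is just cores × clock × SIMD lanes × FMA units × 2 (an FMA counts as a multiply and an add). A minimal sketch in Python; the desktop core count and clock below are assumptions for illustration, not measured figures:

```python
# Back-of-the-envelope peak double-precision FLOPS.
# peak = cores * GHz * DP SIMD lanes * FMA units per core * 2 (mul + add)
def peak_dp_gflops(cores, ghz, dp_lanes, fma_units=2):
    return cores * ghz * dp_lanes * fma_units * 2  # result in GFLOPS

# Hypothetical 4-core 3.5 GHz AVX2 desktop part (4 DP lanes per FMA unit)
print(peak_dp_gflops(4, 3.5, 4))    # ~224 GFLOPS

# Xeon Phi 7290: 72 cores, ~1.5 GHz, AVX-512 (8 DP lanes), 2 VPUs per core
print(peak_dp_gflops(72, 1.5, 8))   # ~3456 GFLOPS, i.e. ~3.4 TFLOPS

# The GP100's quoted ~5 TFLOPS DP is therefore roughly 20x the desktop CPU.
```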

Update

I’m okay.

Months of incredible levels of stress, coupled with weeks of little sleep and the recent sudden passing of my brother Darrell, yielded some deeply irrational, illogical thinking that I of course regret, but can’t erase from my timeline. For any sort of intellectual exercise, like software development, no sleep+stress = a recursive loop of ineffectiveness that generates more stress and even less sleep.

I apologize to those I caused distress, from family to remote individuals who I’ve never met but who cared. This is my first opportunity to post anything about it, and the lack of communication wasn’t an intentional act of dramatics.

In any case, a quick set of thanks-

  • thanks to the many people who cared
  • thanks to the Halton Regional Police (the many fantastic officers, and one profoundly talented K9), who literally saved my life
  • thanks to the remarkable staff of Joseph Brant Hospital
  • thanks to the patients/community of 1 West @ JBH, who opened my eyes to the battles that so many are fighting, and offered friendship and support during a rough period

Not to make light of this ridiculous and very serious situation, but this event yielded a few life happenings that I never expected to have in my biography-

  • Chased by a number of officers through a frigid river
  • Taken down/bitten by a police K9 (who was a model of K9 professionalism, and is a beautiful, extraordinarily well-trained canine officer)
  • Committed involuntarily under the Mental Health Act (i.e. escorted by a guard, locked ward, bed checks, restricted to a hospital gown for several days)

It was an interesting experience that I don’t plan on repeating. The week+ in the hospital gave me the quietest, most reflective period that I’ve had since…forever.

It was the first time I’ve ever truly, successfully meditated. The real, lotus-pose, mind-at-ease meditation that went on for tens of minutes.

I learned an enormous amount about myself as a result (and got various health tests that were long overdue, being conveniently located and all), and came out of it a much better person. Finally dealt with some pretty severe social anxiety that has always been a problem for me.

And for those concerned, various other things got resolved to a good outcome at the same time. The stress+overwhelming tiredness clouded my eyes to options that were available, and everything else is in a much better place.

Dennis

Things You Probably Don’t Know About Your Digital Camera

Another peripheral post about imaging, as a recent project brought it to mind. It’s always nice to have broadly interesting, non-confidential material to talk about here, and while this is well known to many, I’ve found on several projects that plenty of people in this field aren’t aware of the magic happening in their devices.

To get right to it.

Your Camera’s Imaging Resolution is Lower Than Advertised

The term “pixel” (picture element) is generally holistic: a discretely controlled illumination of any visible shade of color, whatever the mechanism for achieving that result. Your 1080p television or computer monitor has 1920 by 1080 pixels, or about 2 million of them. If you look closely, however, your monitor has discrete subpixels for the emissive colors (generally red-green-blue, though at least one display technology adds yellow for an RGBY subpixel arrangement). These might be oriented horizontally or vertically, staggered, or laid out in a triangular pattern; the arrangement varies.

At an appropriate combination of resolution density and viewing distance, your eyes naturally demosaic the display, blending the individual subpixel colors into a full-color image.

The vast majority of digital cameras, however, use the term pixel in a very different way.

To explain visually, here’s a portion of RAW sensor data taken directly from a Nexus 6P. The only processing applied was per-channel color gain and scaling from the original 100 by 63 up to a more visible 600 by 378.

[Image: car_bayer_closer – zoomed-in Bayer-mosaic RAW data]

In the digital camera world, each of these discrete colors is a whole pixel.

If you inspect the pixels you’ll notice that they aren’t full color (though that is obvious just by eyeballing the image). Each location is exclusively red, green, or blue, at varying intensity. There are no mixes of the colors.

The imaging sensor has photosites (wells) that can measure photons, but they have no awareness of wavelength. To enable color measurement, a physical color filter is overlaid on each photosite, alternating between the three colors; this is generally a Bayer color filter array.

There is another type of sensor that layers wavelength-sensitive silicon (much like the layers of classic film), capturing full color at each site; however, it is very rarely used and has its own problems.

Green is most prevalent, comprising 50% of the photosites, given that it’s the color band where the human eye is most sensitive to intensity changes and detail. Red alternates with green on one row, while blue alternates with green on the next.

The functional resolution of detailed color information, particularly in the red and blue channels, is much lower than many believe. Because of this, many devices add physical and processing steps (e.g. anti-aliasing filters) that further reduce usable resolution, blurring away the defects.

The Nexus 6P ostensibly has a 4032 x 3024 imaging resolution, but courtesy of the Bayer filter it really has a 2016 x 3024 green resolution, a 2016 x 1512 blue resolution, and a 2016 x 1512 red resolution. For fine hue detail the effective resolution can be a quarter of expectations, which is why fully zoomed-in pictures are often somewhat disappointing (also courtesy of the processing and filtering that try to mask the color-channel information deficit).
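A quick way to see where the advertised pixel count goes is to build the Bayer mask and count the samples each channel actually gets. A minimal numpy sketch; the RGGB ordering is an assumption for illustration, and actual sensors vary:

```python
import numpy as np

# RGGB Bayer mask at the Nexus 6P's sensor dimensions.
H, W = 3024, 4032
pattern = np.empty((H, W), dtype='U1')
pattern[0::2, 0::2] = 'R'   # red on even rows, even columns
pattern[0::2, 1::2] = 'G'   # green fills the rest of the even rows...
pattern[1::2, 0::2] = 'G'   # ...and half of the odd rows (a checkerboard)
pattern[1::2, 1::2] = 'B'   # blue on odd rows, odd columns

for c in 'RGB':
    n = np.count_nonzero(pattern == c)
    print(c, n, f"({100 * n / pattern.size:.0f}% of sites)")
# R ~3.0M (25%), G ~6.1M (50%), B ~3.0M (25%) of a "12 MP" sensor
```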

Your Camera’s Imaging Sensor Has Many Defects

Due to defects in the silicon, imperfections in the application of the physical Bayer filter, and electrical gain noise, many of the photosites on your digital sensor are defective.

Some read nothing, while many more see ghosts, reporting false values of varying magnitude. Readings of a constant-brightness target will vary, sometimes significantly, across pixels (yielding a grainy, noisy output image).

[Image: falsereadings – hot and noisy photosites from a dark-frame capture]

This is a random 150-pixel-wide crop from the 6P while taking a 1/10s picture of pure darkness. These defective readings cover the entire capture in varying densities, adding up to hundreds of false data points. Most are permanent, and new ones appear as the device ages; some defects temporarily worsen when the sensor is warm. Most SLRs have a special mode that takes a full-darkness picture and then catalogs and removes all hot pixels from subsequent output. Android also has the notion of remembering hot pixels.
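A minimal sketch of how such a dark-frame catalog might be built; the 6-sigma threshold and the `load_raw` helper are assumptions for illustration, not anyone's actual firmware:

```python
import numpy as np

def find_hot_pixels(dark_frame: np.ndarray, sigma: float = 6.0) -> np.ndarray:
    """Flag photosites that report far more signal than the dark noise floor."""
    mean, std = dark_frame.mean(), dark_frame.std()
    # Anything this far above the dark level is treated as a defective site.
    return np.argwhere(dark_frame > mean + sigma * std)

# dark = load_raw("dark_frame.dng")    # hypothetical RAW loader
# hot = find_hot_pixels(dark)
# np.save("hot_pixel_map.npy", hot)    # cataloged once, reused for every shot
```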

Such defects exist in every digital sensor, from your smartphone to your high-end SLR. I remember being somewhat horrified the first time I looked at a wholly unprocessed RAW image from my SLR and saw hundreds of fully lit pixels scattered across the frame.

Algorithms Save The Day

The solution to all of these problems is processing, but it does have consequences.

Hot pixels are removed both through prior knowledge (a hot-pixel database for a given sensor) and by simply discarding pixels that shine a little too brightly relative to their neighbors. They get replaced with an interpolated average of those neighbors.
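A sketch of what that replacement might look like against the raw Bayer mosaic; same-color neighbors sit two sites apart, hence the stride of 2. Real pipelines are considerably more careful (edge awareness, more neighbors), so treat this as illustrative only:

```python
import numpy as np

def repair_hot_pixels(raw: np.ndarray, hot: np.ndarray) -> np.ndarray:
    """Replace each cataloged defect with the mean of its same-color neighbors."""
    fixed = raw.astype(np.float32).copy()
    h, w = raw.shape
    for y, x in hot:
        neighbors = [raw[ny, nx]
                     for ny, nx in ((y - 2, x), (y + 2, x), (y, x - 2), (y, x + 2))
                     if 0 <= ny < h and 0 <= nx < w]
        fixed[y, x] = np.mean(neighbors)
    return fixed
```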

The Bayer-pattern source is turned into a full-color image via a demosaicing algorithm, and there is considerable academic research into finding the optimal approach. In that case I linked to an army research paper; the military has a significant interest in this field given the broad use of Bayer imaging sensors and the need for the resulting images/data to be of the highest possible fidelity (especially since machine-vision systems then analyze that heavily processed output, and with the wrong choices they can end up triggering on algorithmic detritus and side effects).

The choice of demosaicing algorithm can have a significant impact on the quality of the resulting image. Do you know what algo your device is using?
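For a sense of what the simplest end of that spectrum looks like, here's a naive bilinear demosaic of an RGGB mosaic: each missing color at a site is just the average of that color's nearest recorded neighbors. This is a sketch for illustration (real device pipelines use far more sophisticated, edge-aware methods), and it happily produces the classic color fringing along edges:

```python
import numpy as np
from scipy.ndimage import convolve

def demosaic_bilinear(raw: np.ndarray) -> np.ndarray:
    """Naive bilinear demosaic of an RGGB Bayer mosaic (float mosaic in, HxWx3 out)."""
    h, w = raw.shape
    r_mask = np.zeros((h, w)); r_mask[0::2, 0::2] = 1
    b_mask = np.zeros((h, w)); b_mask[1::2, 1::2] = 1
    g_mask = 1 - r_mask - b_mask

    # Standard bilinear interpolation kernels for the sparse color planes.
    k_rb = np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]], dtype=float) / 4
    k_g  = np.array([[0, 1, 0], [1, 4, 1], [0, 1, 0]], dtype=float) / 4

    out = np.empty((h, w, 3))
    out[..., 0] = convolve(raw * r_mask, k_rb)   # red plane
    out[..., 1] = convolve(raw * g_mask, k_g)    # green plane
    out[..., 2] = convolve(raw * b_mask, k_rb)   # blue plane
    return out
```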

After demosaicing, color corrections are applied (both to move between color spaces and to set the white point), and then the image is denoised: those fine grainy variations are smoothed away (which can yield odd results if the subject itself has a grainy texture, since the algorithm can’t discern whether the variation comes from the scene or from the sensor).

The resulting image is generally close to perceptually perfect, but an enormous amount of human knowledge and guesswork went into turning some very imperfect source data into a good result. The quality of an image from a digital device is impacted as much by the software as by the hardware (many devices show terrible color fringing courtesy of poor demosaicing). This is why many choose to shoot RAW, saving those single-band source pixels as-is before corrections are destructively applied; it allows algorithms to be improved or swapped when the magic mix didn’t work quite right for a given photo.

If you look closely at the results, you start to see the minor compromises necessary to yield a workable output.

Provably Fair / Gaming

A bit of a diversion today, but I wanted to belatedly post some commentary on the recent game/virtual-item gambling controversy.

EDIT: 2016-07-14 – Shortly after I posted this, Valve announced that they would start shutting off third-party API access if it’s used for gambling (no, I’m not claiming they did this as a result of my post; I’m just noting why I didn’t mention this rather big development below). This morning Twitch essentially banned CS:GO item gambling as well (though they’re trying to avoid any admission of guilt or complicity by simply deferring to Valve’s user agreements).


Like many of you, I find that work and family demands leave little time for gaming. One of the few games I enjoy, one that allows for short drop-in sessions and has been a worthwhile mental diversion when I’m wrestling with difficult coding problems, is Counter-Strike: Global Offensive (CS:GO).

The game is a classic twitch shooter. It has a small, curated set of weapons, and most rounds are played on a handful of proven maps.

I’m a decent player (though it was the first game where I really had that “I’m too old for this” sense, with my eleven-year-old son absolutely dominating me). It’s a fun, cathartic diversion.

Nine games out of ten I end up muting every other player, as the player base is largely adolescent and many really want to be heard droning on. The worst players seem to be the most opinionated: in every match the guy sitting at the bottom in points and frags has the most to say about the failures of everyone else’s gameplay (an observation that holds across many industries, including software development, which is full of people who’ve created nothing and achieved little explaining why everyone else is doing it wrong).

The CS:GO community also has an enormous gambling problem, as you may have heard. This came to a head when a pair of popular YouTubers were outed as owners of a CS:GO skin-gambling site. The two had posted a number of dubious “get rich….quick!” style videos demonstrating highly improbable success, enticing their legions of child fans to follow in their possibly rigged footsteps.

Skins, to explain, are nothing more than textures you apply to weapons. The game often puts you in situations where other players spectate your play, and having unique or less common skins is desirable as a status thing. So much so that there is a multi-billion-dollar market in these textures, with people paying hundreds of dollars for individual skins (Steam operates a complex, very active marketplace to ensure liquidity).

The whole thing is just dirty and gross, with Valve sitting at the center of an enormous gambling empire that mostly exploits children spending their birthday gift cards. It casts a shadow over the entire game, and those awaiting Half-Life 3 will probably wait forever, as Valve seems distracted into only working on IP that features crates and keys.

The machination of crates and keys, winning rewards for which Valve provides a marketplace denominated in real currencies, is gambling: you pay real money for small odds of getting something worth more money (again, Valve provides the marketplace and helpfully assigns the real-world value). It’s only a matter of time before the hammer falls hard on these activities; Valve is operating in a very gray area, and it deserves serious regulatory scrutiny.

Anyways, while I was being entertained by that whole sordid ordeal, the topic of “fair” online gambling came up. From this comes the term “provably fair”, a way for many gambling operations to add legitimacy to what might otherwise be a hard gamble to stomach.

It’s one thing to gamble at a physical roulette wheel, where at least you know the odds (assuming the physics of the wheel haven’t been rigged…). It’s quite another to gamble at an online roulette wheel where your odds of winning may actually be 0%.

“You bet black 28….so my ‘random’ generator now picks red 12…”

So the premise of provably fair came along. With it you can generally have some assurance that the game is fair. For instance, for the roulette wheel the site might tell you in advance that the upcoming spin (game 1207) has the SHA1 hash 4e0fe833734a75d6526b30bc3b3620d12799fbab. After the game it reveals that the hashed string was “roaJrPVDRx – GAME 1207 – 2016-07-13 11:00AM – BLACK 26”, and you can confirm both that the string hashes to the committed value and that the spin didn’t change outcome based upon your bet.

That’s provably fair. It still doesn’t mean that the site will ever actually pay out, or that they can’t simply claim you bet on something different, but the premise is that some sort of transparency is available. With a weak hash (e.g. don’t use SHA1; that was just for demonstration) or a committed string with limited entropy, it might even allow players to hack the game: to know the future before the future.
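The commit/reveal mechanic itself is only a few lines. A toy sketch in Python; SHA-256 is used here instead of the SHA1 from the example above, and the nonce size and message format are my own illustrative assumptions, not any site's actual scheme:

```python
import hashlib
import secrets

def commit(outcome: str) -> tuple[str, str]:
    """Site side: fix the outcome, publish only its hash before bets are taken."""
    nonce = secrets.token_hex(16)                  # high-entropy salt
    message = f"{nonce} - {outcome}"
    digest = hashlib.sha256(message.encode()).hexdigest()
    return digest, message                         # publish digest now, reveal message later

def verify(published_digest: str, revealed_message: str) -> bool:
    """Player side: after the game, check the reveal matches the commitment."""
    return hashlib.sha256(revealed_message.encode()).hexdigest() == published_digest

digest, secret = commit("GAME 1207 - BLACK 26")
# ... bets are placed, the game runs, the site reveals `secret` ...
print(verify(digest, secret))   # True: the outcome was fixed before any bet
```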

You can find “provably fair” defined on Wikipedia, where the definition is suspect, seemingly written by someone who was misusing the term and got called on it (“it is susceptible to unscrupulous players or competitors who can claim that the service operator cheats”. What?).

Anyways, the world of CS:GO gambling is an interesting place to evaluate how well the term provably fair is understood.

csgolotto, the site at the center of all the hoopla, does little to even pretend to be provably fair. Each of their games randomly generates a percentage value, and a hash of that value plus a nonce is published, but that does nothing to assure fairness: for the duels, the player chooses a side, and if the predetermined roll (which an insider would obviously know) was below 50, someone with inside knowledge could simply choose the below-50 side, and vice versa. Small betting differences slightly shift the balance, but there are no apparent guards against insider abuse, and it’s incredible that anyone trusted these sites.

The pool/jackpot game relies upon a percentage being computed for each game, say 66.666666%, and then as players enter they buy stacked “tickets”, the count depending upon the value of their entries. So player 1 might hold tickets 1-100, player 2 tickets 101-150, and player 3 tickets 151-220. The round expires, the 66.6666% ticket of the 220 sold is #146, and player 2 wins the pot.

A variety of other CS:GO gambling sites1 use the same premise. There is nothing provably fair about it. If an insider knows that a given jackpot’s winning percentage is 86%, it is a trivial exercise to compute exactly how many tickets to “buy” to take the pot, at the right moment, with the technical ability to ensure theirs is the final entry (a sketch of the arithmetic follows below). It is equally obvious when to bow out of a given pool.
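To make the exploit concrete, here is a sketch of the insider's arithmetic against the example pool above; the ticket numbering and floor rounding are assumptions for illustration:

```python
def tickets_to_steal_pot(win_fraction: float, tickets_sold: int) -> int:
    """How many tickets must the final entrant buy so the winning ticket
    (floor of win_fraction * total) lands inside their range?"""
    n = 1
    while int(win_fraction * (tickets_sold + n)) <= tickets_sold:
        n += 1
    return n

# Example pool from above: 150 tickets already sold, winning roll 66.6666%.
# Anyone who knows the roll in advance just buys this many tickets last.
print(tickets_to_steal_pot(0.666666, 150))   # -> 77
```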

Some sites have tried to mix this up further, but to a one, each was easily exploitable by anyone with insider knowledge.

There is nothing provably fair about it.

1 – I had a couple of illustrative examples of extra dubious claims of “provably fair”, including a site that added hand-rigged cryptography that actually made it even less fair for players. Under the scrutiny and bright lights, a lot of these sites seem to have scurried into the dark corners, shutting down and removing themselves entirely from Google Search.