Earlier this year (2015) I put out an app for Android called Gallus. As regulars know, it was a stabilizing, hyperlapse-capable video recorder for Android (unlike Instagram’s solution on iOS, it wasn’t just for stabilizing hyperlapses, and it offered a variety of other features including interval frames, locked focus/exposure, advanced filters, etc.). If you aren’t a regular you’ve likely never heard of it, since it was incredibly obscure, but it nonetheless captured the attention of a couple of industry heavyweights, making the whole effort worthwhile.
I’m very proud of what I built. The Nexus 6p — a recent addition to my device portfolio — is the first device to really showcase what the app is capable of, and I consider the project an enormous personal success.
Gallus was very technically challenging, and did something widely claimed to be impossible (and still not replicated by anyone else). It doesn’t work great on every device, though, because not every device in Android land is great. And when someone has a device with a terrible gyroscope or a system image with invalid field-of-view settings, they’re going to blame Gallus and not their device. Such is the fun of the Android world if you’re doing anything more than a couple of basic forms[1].
But it had a terrible interface: the technology was my focus, not the UI. The settings page was awash in obscure settings (owing to the extreme variance of Android devices, an issue that makes such a solution orders of magnitude harder than doing the same on iOS), and the main interface…well, personally I thought it was perfectly fine, but a lot of users seemed to have incredible trouble figuring it out.
While I understood every complaint about the mess of a settings page, the difficulty with the main interface surprised me: the iconography wasn’t completely literal, but it was built with the intention that users would simply try the buttons and quickly discover what each does. That was a wrong assumption, as a recurring feedback trend was users toggling basic settings off and then complaining about the result. Stabilization, for instance, is a toggleable button with a “shaking camera” icon, only available during playback/preview, yet a number of users would disable it, apparently inadvertently, and then complain that stabilization no longer worked. Another button toggles smartzoom (marked with “crop” iconography); with it disabled, the frame bounds jump around in the render even though the actual scene stays perfectly stabilized (personally I think it’s a fantastic solution, and I prefer it over forced zoom). Again, people would disable it and then complain about weird black bars appearing around their video.
Many people have more time and motivation to complain than they do to spend seconds trying buttons to figure out what they do. This isn’t a “resent the user” statement, but a learned observation, and I suppose it could even be considered beneficial: for every user who complained about a trivial interface, how many more just uninstalled and moved on?
I always intended to add some sort of inline help. The “wizard” sort of thing that does bouncing arrows and walks through the interface as a series of steps: this is how you enable/disable stabilization; this is how you render to a video file that you can share; etc. It was never financially worth the time or effort (for a free app with no ads or monetization beyond being a vehicle to pitch some technology), but if I were going to do something, that was how I imagined I would. In a way it’s sort of like the obnoxious “tutorial” stage of many games, where you try clicking past each forced interaction as quickly as possible.
Which was all a big egotistical way of getting to the real subject of this post: Google recently started filling all of their Android apps with “Okay, Got It” staged tutorials. Open the newly updated camera app and it presents a multi-step tutorial. The same for Gmail, Google Maps, and so on. Generally these appear on first run, though they seem to keep retriggering on minor updates even when the information hasn’t changed, and they force you to step through the various pages before you can use the app.
It’s amazing how quickly “Okay, got it” fatigue sets in — it’s a concept that works in isolation, but diminishes in value at scale. While I imagine it benefits green users, to most established users it quickly becomes a nuisance. As I’ve talked about before, the worst time to pester users is when they start the app. When I’m sitting on the side of the road in Niagara Falls trying to find my way out of some suburban enclave, having my navigation request bring me to a Google Maps tutorial wizard is not wanted, needed, or beneficial. When I pull up the camera app to take a picture of a quickly passing moment, having the camera use that moment to teach me how to use it is…well, it’s the worst possible moment for it.
There has to be a better solution: some sort of platform-level master help mechanism, triggered and observed by any app that opts in. The “Okay, Got It” approach does not scale, and already I’d say I’ve actually absorbed less than 5% of the content of these screens.
[1] – Recently there was a bit of a hoopla around various camera apps not working correctly with the Nexus 5x. To explain the issue: the image sensor can be mounted in either of two orientations relative to the natural portrait position of the device. It just turns out that for the rear camera, every single prior device did it one way, to the point that it was assumed to be a given, and many apps didn’t even have a code path for the alternative. The obvious conclusion is to standardize this (it should never have been a variation to begin with, and at this point the de facto convention should simply be entrenched), and where a sensor is inverted, flip the image at the system level before presenting it to the application, minimizing the hassle every single app otherwise has to go through. I’ve complained about this before: instead of doing the work at the system level, it becomes a problem for every app developer to deal with, often poorly.
Not for Google, though, and you see this endlessly throughout Android: simple things that should be standardized, either as a demand on the hardware or as a shim offered by the system, are not, so every app has to handle countless permutations. To make it worse, you then have to just hope that your permutation actually works, or obtain every possible variation of device. I have sensor rotation code in Gallus, but I have no idea if it actually works correctly on the Nexus 5x (I mean, I know it should work, but many times there has been a schism between how I think things will work and how they actually do, so I say that with no confidence).
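To make the footnote’s permutation problem concrete: the Camera2 API reports the sensor’s mounting angle via CameraCharacteristics.SENSOR_ORIENTATION, and every app has to combine that with the current device rotation itself. Below is a minimal sketch of the standard correction math, written as plain standalone Java (the class and method names are mine, and the orientations are passed as plain ints rather than read from the Android API, but the formula follows the pattern in Google’s Camera2 documentation):

```java
public class SensorRotation {
    /**
     * Computes the clockwise rotation (in degrees) to apply to a captured
     * frame so it appears upright.
     *
     * sensorOrientation - sensor mounting angle as reported by
     *                     CameraCharacteristics.SENSOR_ORIENTATION (0/90/180/270)
     * deviceOrientation - current device rotation in degrees (0..359)
     * frontFacing       - whether the camera is front-facing (mirrored)
     */
    public static int computeRotation(int sensorOrientation,
                                      int deviceOrientation,
                                      boolean frontFacing) {
        // Snap the freely rotating device angle to the nearest 90 degrees.
        deviceOrientation = ((deviceOrientation + 45) / 90 * 90) % 360;
        // Front cameras are mirrored, so the device rotation is applied
        // in the opposite direction.
        if (frontFacing) {
            deviceOrientation = -deviceOrientation;
        }
        return (sensorOrientation + deviceOrientation + 360) % 360;
    }

    public static void main(String[] args) {
        // Typical rear camera (sensor mounted at 90), device held in portrait:
        System.out.println(computeRotation(90, 0, false));   // 90
        // Inverted rear sensor, as on the Nexus 5x (270), device in portrait:
        System.out.println(computeRotation(270, 0, false));  // 270
    }
}
```

An app with no code path for the 270-degree case, because every device before the 5x reported 90 for the rear camera, renders those frames upside down, which is exactly the hoopla described above.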