Life has kept me busy for the past few months, but good content is incoming. Promise.
This is a random observational post while I take a break from real work. I’m revisiting a topic that I touched upon before, and ultimately this is really just a lazy rewriting of that piece.
A few days ago I saw a new commercial for Toronto’s SickKids hospital.
The commercial is powerful.
“This is new and fresh and important, so I’ll share it with the people I know on Facebook”, I thought.
It isn’t original content, obviously, but I thought it was something they’d find interesting.
So I shared it. Seconds later I deleted the post.
I don’t post on Facebook (or Google+, or Twitter) outside of the rare photo of the kids limited to family. By deleting I was returning to my norm.
Most of the people among my contacts have trended toward the same behavior, with only a small handful of prolific social feeders remaining. Most now use Facebook for discussion groups and as a feed aggregator: if a site (e.g. Anandtech) shares on Facebook, I rely upon it appearing in my feed rather than visiting the site directly. It's also a great feed for game-day news.
Individual sharing is trending way down on Facebook, and many other sites show the same trend. LinkedIn feels like a graveyard of abandoned profiles and "celebrities" whose assistants occasionally post self-promotional pieces. (I recently deactivated my LinkedIn profile after realizing that I have gotten zero value from it over my career, yet plenty of negative consequences, including unnecessary exposure of information that random people have no need to know.)
We have like, share, and retweet fatigue. The counts sit there as a little judgy footer on every post, each reaction carefully meted out and considered: a social obligation on our own posts and on the posts of our friends and family alike.
So if I post something and it sits un-liked, should I be offended? Should I fish for likes, building a social crew? If my niece posts something interesting, should I like it or is that weird that I’m her uncle liking her post? If a former female coworker posts an interesting recipe, should I like it or is that going to be perceived as an advance?
If I get a pity like from a relative, should I reciprocate?
Some will dismiss this as overthinking, but what I'm describing above is exactly the response this service, and every one like it, is designed to demand. It is the gamification of users, used masterfully: the premise is that if you make the social counts front and center, users feel obligated to build those numbers up. Some shared blog platforms now ply this tactic to entice users to become essentially door-to-door pitchmen drawing people to the platform (sharecropping on someone else's land, repeating a foolish mistake we learned not to make well over a decade ago), lest their blog posts get deranked. People aren't pitching Avon or Amway now; they're trying to get you to help them build a foundation for their Medium blog or Pinterest board or Facebook business group or LinkedIn profile or…
Sometimes it works for a while as a sort of social pyramid scheme. Eventually the base starts to stagnate, and the "incentives" lose their luster, if not rusting into a disincentive for newer or more casual users. If it isn't carefully managed, the new users will cast the old guard as obsolete and irrelevant.
I made a SoundCloud account purely to access a personal audio recording across multiple devices, so why do I keep getting notifications of spammy followers I don't want, all of whom sit front and center on my profile? I don't want followers or hearts or likes or shares.
Let me qualify that statement a bit: I love when readers think these blog posts are interesting enough to share in various venues, growing the circle of exposure. That happens organically when readers find the content worthwhile, and it's very cool. But that is something the reader owns, and it doesn't sit as a social signal of relevance on this page: there are no social toolbars or tags on this post trying to act as social proof that it's worth reading, beyond the fact that most of you have read these missives for a while and, I assume, find some value in them.
Users should absolutely have these curating inputs (training the system on the things that they like and dislike), and the feed should of course adapt to the things the user actually enjoys seeing: If zero users find anything interesting that I post, zero people should see it. But by making it a public statement it becomes much more than that, losing its purpose and carrying a significant social obligation and stigma that is unwanted.
Virtually every social site follows the same curve: we all dig the social well, and when it runs dry we simply chase the next experience. Facebook has done well by pivoting the use of the service, but other services (Flickr, Twitter, and others) that attempted the same strategy peaked and then hit a period of stark decline. If someone with fewer than 100 Twitter followers is perceived as "angry and disenfranchised", new users find more benefit in simply waiting out this generation, or moving to something new (a sort of ground zero where everyone goes back to the beginning again), than in trying to gain some namespace among established users.
Back in the early days of the Internet, MUDs (definition varies) saw the same curve. Each instance would start as a fresh greenfield full of opportunity and excitement. As the initial euphoria settled, soon it was a small set of regular users, maxed out in every regard. Once the pyramid scheme of fresh meat was exhausted — new users knew there was little fun or benefit to be had, and went to newer, fresher sites, leaving the existing users with their blessed armor and no skulls to smash — malaise set in. Eventually the universe was wiped and begun anew.
There's no real aha or "what to do" out of this, and I don't know what to make of it. Clearly the tactic is fantastically successful on the ascent part of the curve, and has been leveraged masterfully by a number of sites, but if you don't pivot at the right time it turns a site into Slashdot or Kuro5hin: the decayed remnants of a yesteryear internet.
Software-development focused online communities skew male, and generally younger (e.g. 20s to mid 30s). Most live in dense urban areas (the Bay, Seattle, NYC, London, etc), often in smallish apartments and condos. Few have families.
As a side effect of the human condition, there is a tendency to cast one's own lot either as the inevitable situation for most (e.g. people who have it different from me are living on borrowed time), or as the more righteous, principled choice (better for the planet, development, the future, etc.). This is an observation I learned firsthand, having caught myself making exactly these rationalizations over time.
Stories declaring or predicting the end of the suburbs always do well on these forums. I’ve seen poorly researched stories predicting this topping sites like Hacker News again and again: It tells the audience what they want to hear, so the burden of proof disappears.
But this assumption that suburbs are doomed has always struck me as counter-intuitive. While there has been a significant migration from rural areas to urban areas in virtually every country in the world (largely as rural workers are effectively displaced by factory farming or obsolete skillsets, making the move a basic survival pattern), suburbs seem to be as bulging as they've ever been. In the United States, for instance, the fastest-growing areas of the country have been medium density (e.g. suburbs and small cities). Very high density areas have actually been shrinking much faster than even rural areas have.
The suburbs aren’t actually dying.
But maybe they will soon? The argument that they will is completely predicated on a single chain of events: we run short on oil, transportation costs explode, and suddenly the onerous cost of living in far-flung places causes a mass migration to the city centers, everyone defaulting on their giant suburban McMansion mortgages, the rings turned into a desolate wasteland.
Increasingly it seems we're more likely to keep oil in the ground than to run out of it. Alternatives to this easy energy source are picking up pace at an accelerating rate. As electric vehicles hit the mainstream, they're becoming significantly more economically viable options for high-mileage drivers: fuel for electric cars costs somewhere in the range of one fifth per mile of what gasoline does, even in high-cost areas like California. Miles are much cheaper coming from some solar panels or a windmill than from a gallon of gasoline, even at the current depressed oil prices. And that's excluding the significant mechanical costs of internal combustion engines, which would soon be dramatically undermined by mass-produced electric vehicles.
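The per-mile comparison is simple division; this back-of-envelope sketch makes it concrete. All prices and efficiencies here are my own illustrative assumptions (not figures from any study): $3.00/gallon gasoline at 25 mpg versus $0.10/kWh electricity at 4 miles/kWh.

```java
// Back-of-envelope fuel cost per mile: assumed numbers, for illustration only.
public class FuelCostSketch {
    // Cost per mile for a gasoline car: price per gallon divided by miles per gallon.
    static double costPerMileGas(double dollarsPerGallon, double milesPerGallon) {
        return dollarsPerGallon / milesPerGallon;
    }

    // Cost per mile for an electric car: price per kWh divided by miles per kWh.
    static double costPerMileEv(double dollarsPerKwh, double milesPerKwh) {
        return dollarsPerKwh / milesPerKwh;
    }

    public static void main(String[] args) {
        double gas = costPerMileGas(3.00, 25.0); // $0.12 per mile
        double ev  = costPerMileEv(0.10, 4.0);   // $0.025 per mile
        System.out.printf("gas: $%.3f/mi, ev: $%.3f/mi, ratio: %.2f%n",
                gas, ev, ev / gas);              // ratio ≈ 0.21, roughly 1/5
    }
}
```

With cheaper off-peak or home-solar electricity the ratio drops further; with pricier California rates it creeps toward 1/2, which is why "somewhere in the range of one fifth" is a hedge rather than a constant.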
You can go much further for less than ever before, with the specter of oil’s decline being less and less relevant. If anything transportation is going to get a lot cheaper.
Of course the commute itself has always been a tax on life, and personally I can say I've quit jobs over the grueling big-city commute. But we're on the cusp of our cars doing the driving for us. Very soon the drive will be quality time to catch an episode of a good show, or maybe a quick nap. The capacity of even existing roadways will dramatically increase once you remove human failure and foible.
Connectivity…well everyone everywhere is connected, incredibly inexpensively. When I was a kid we had to pay $0.30/minute+ to talk to people 20km away. Now continent wide calling is free. Internet connectivity is fast and copious almost anywhere. Many of us work remote jobs and it really doesn’t matter where we are.
I'm not making any predictions or judgments, but the inputs to the classic assumptions have changed enormously. I recently entertained the idea of living even more remotely (right now I work at home in a rural exurb of Toronto — this doesn't qualify even as suburbs — but there are of course far more remote areas of this country), and it's incredible how few of the factors are really compromised anymore: I'd still have hundreds of channels, high-speed internet, 24/7 essentially free communications with every family member (text, audio, video, soon enough 360 vision with depth), overnight delivery of Amazon Prime packages, etc.
Being in a less dense area just isn’t the compromise it once was. And that’s talking about fully rural areas.
The suburbs — where you still have big grocery stores and bowling alleys and neighborhood bars and all of the normal accouterments of living — just aren’t much of a compromise at all. When someone talks up the death of the suburbs, I marvel at the 1980s evaluations of the 2010s world. I would argue the contrary: a few communicable disease outbreaks (e.g. SARS v2) and humans will scurry from density.
While in negotiations I have removed a few older posts temporarily. The “Adding Secure, Authenticated HTTPS Interop to a C(++) Project” series, for instance.
I can't make the time to focus on it at the moment and don't want it to sit like a bad promise while the conclusion awaits (for technical pieces I really try to ensure 100% accuracy, which is time consuming); I will republish when I can finish it. I note this given a few comments where helpful readers thought some sort of data corruption or transactional rollback was afoot. All is good.
Occasionally I write things on here that lead some to inaccurately extrapolate more about my position. In a recent post, for instance, I noted that Rust (the system language) seems to be used more for advocacy — particularly of the “my big brother is tougher than your big brother” anti-Go sort — than in creating actual solutions.
I don’t think Go demolishes Rust. Rust is actually a very exciting, well considered, modern language. It’s a bit young at the moment, but has gotten over the rapid changes that occurred earlier in its lifecycle.
Language tourism is a great pursuit for all developers. Not only do we learn new tools that might be useful in our pursuits, at a minimum we’ll look at the languages we do use and leverage daily in a different way, often learning and understanding their design compromises and benefits through comparison.
I would absolutely recommend that everyone give Rust a spin. The tutorials are very simple, the feedback fast and rewarding.
When selling oneself, particularly in an entrepreneurial effort where you’re the foundation of the exercise and your abilities are key, you can’t leverage social lies like contrived self-deprecation or restraint. It’s pretty much a given that you have to be assertive and confident in your abilities, because that’s ultimately what you’re selling to people.
This doesn't mean claiming infallibility. It means that you have a good understanding of what you are capable of doing, based upon empirical evidence, and that you are willing, even hoping, to be challenged on it.
A few days ago I had to literally search whether Java passes array members by reference or value (it was a long day of jumping between a half dozen languages and platforms). I’m certainly fallible. Yet I am fully confident that I can quickly architect and/or build an excellent implementation of a solution to almost any problem. Because that’s what my past has demonstrated.
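For the record, the answer I had to look up: Java is strictly pass-by-value, but for arrays (and all objects) the value being copied is the reference. So a callee can mutate the caller's array elements, yet cannot rebind the caller's variable. A minimal sketch of my own (not from any earlier post):

```java
// Java parameter passing with arrays: the reference is copied by value.
public class PassByValueDemo {
    // Mutation through the copied reference is visible to the caller:
    // both names point at the same array object.
    static void mutate(int[] arr) {
        arr[0] = 99;
    }

    // Reassigning the parameter only changes the local copy of the
    // reference; the caller's variable is untouched.
    static void reassign(int[] arr) {
        arr = new int[] {7};
    }

    public static void main(String[] args) {
        int[] a = {1, 2, 3};
        mutate(a);
        System.out.println(a[0]);     // prints 99
        reassign(a);
        System.out.println(a.length); // prints 3 — still the original array
    }
}
```

The same rule explains why swapping two objects inside a method never works in Java, and why `StringBuilder` contents can be modified by a callee while a `String` (immutable) cannot.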
Generally that goes well. Every now and then, however, I encounter someone so offended by pitch confidence that, without bothering to know a thing about me or my accomplishments, or taking me up on my open offer to demonstrate them, they respond negatively or dismissively. This seems to be particularly true among Canadians (I am, of course, Canadian myself; this country has a widely subscribed crab mentality, with a "who do you think you are?" sort of reflexive reaction among many). Not all Canadians by any measure, but enough that it becomes notable when you regularly deal with people from other countries and notice the stark difference.
A year and a half ago I wrote an entry on here regarding Intel in the mobile space. The argument was basically that Intel was finally getting their stuff together, and the market had gotten ready for Intel and x86¹ (as well as x86_64) to be a fully supported platform.
From Unity to the NDK to AVDs, Intel is now a first-class platform on Android.
But the industry runs at a very different cost and profit model from what Intel was accustomed to. The highest-end ARM SoCs run from $30–$70 per unit; Intel has long lived in a world where their solutions net hundreds to thousands of dollars per unit. But the market changes, and the ARM world isn't going away just because Intel looks the other way.
Yet Intel seems to have just killed off their aspirations for the market. Their intentionally sabotaged Atom solutions are being bested by small competitors, and they can’t make the finances work.
Bizarre. I find it hard to believe, especially given that Intel has made significant noise about targeting the IoT market. I find the conclusion people are drawing from Intel killing off the mobile Atom devices and a noncompetitive radio chipset (that Intel is crawling back into its desktop and server processor shell, conceding defeat) highly unlikely.
More likely, I would guess that Intel is going to follow Nvidia’s lead, as there’s no way they’re simply giving up on mobile devices. Nvidia once had separate mobile and desktop engineering, with the duplicated costs that entailed, but with their Maxwell chipset the same designs, architectures and processes are used on both sides of the fold.
I expect Intel to pursue the same approach, simply scaling up and down their common contemporary core to all needs. There are Skylake processors available right now with a TDP of 7.5W (which is the going range for tablet SoCs). Core M processors with a TDP below 4W. The Atom processors didn’t serve a particular need beyond being sabotaged just enough that they didn’t threaten the more expensive markets. That approach doesn’t work anymore.
1 – As an aside, it's impossible to discuss x86(_64) without someone confidently announcing that it's a derelict, bad design that deserves to die, carrying on an argument from the late 1980s and early 1990s. This betrays a general ignorance of the state of x86_64 vs ARM64, and of the enormous complexity of modern ARM chips (with absolutely staggering transistor counts). They're both great solutions.