The damage of defaults
Apple popped out a new pair of AirPods this week. The design looks exactly like the old pair of AirPods. Which means I’m never going to use them because Apple’s bulbous earbuds don’t fit my ears. Think square peg, round hole.
The only way I could rock AirPods would be to walk around with hands clamped to the sides of my head to stop them from falling out. Which might make a nice cut in a glossy Apple ad for the gizmo — suggesting a feeling of closeness to the music, such that you can’t help but cup; a suggestive visual metaphor for the aural intimacy Apple surely wants its technology to communicate.
But the reality of trying to use earbuds that don’t fit is not that at all. It’s just shit. They fall out at the slightest movement so you either sit and never turn your head or, yes, hold them in with your hands. Oh hai, hands-not-so-free-pods!
The obvious point here is that one size does not fit all — howsoever much Apple’s Jony Ive and his softly spoken design team believe they have devised a universal earbud that pops snugly in every ear and just works. Sorry, nope!
A proportion of iOS users — perhaps other petite women like me, or indeed men with less capacious ear holes — are simply being removed from Apple’s sales equation where earbuds are concerned. Apple is pretending we don’t exist.
Sure we can just buy another brand of more appropriately sized earbuds. The in-ear, noise-canceling kind are my preference. Apple does not make ‘InPods’. But that’s not a huge deal. Well, not yet.
It’s true, the consumer tech giant did also delete the headphone jack from iPhones. Thereby deprecating my existing pair of wired in-ear headphones (if I ever upgrade to a 3.5mm-jack-less iPhone). But I could just shell out for Bluetooth wireless in-ear buds that fit my shell-like ears and carry on as normal.
Universal in-ear headphones have existed for years, of course. A delightful design concept. You get a selection of different sized rubber caps shipped with the product and choose the size that best fits.
Unfortunately Apple isn’t in the ‘InPods’ business though. Possibly for aesthetic reasons. Most likely because — and there’s more than a little irony here — an in-ear design wouldn’t be naturally roomy enough to fit all the stuff Siri needs to, y’know, fake intelligence.
Which means people like me with small ears are being passed over in favor of Apple’s voice assistant. So that’s AI: 1, non-‘standard’-sized human: 0. Which also, unsurprisingly, feels like shit.
I say ‘yet’ because if voice computing does become the next major computing interaction paradigm, as some believe — given how Internet connectivity is set to get baked into everything (and sticking screens everywhere would be a visual and usability nightmare; albeit microphones everywhere is a privacy nightmare… ) — then the minority of humans with petite earholes will be at a disadvantage vs those who can just pop in their smart, sensor-packed earbud and get on with telling their Internet-enabled surroundings to do their bidding.
Will parents of future generations of designer babies select for adequately capacious earholes so their child can pop an AI in? Let’s hope not.
We’re also not at the voice computing singularity yet. Outside the usual tech bubbles it remains a bit of a novel gimmick. Amazon has drummed up some interest with in-home smart speakers housing its own voice AI Alexa (a brand choice that has, incidentally, caused a verbal headache for actual humans called Alexa). Though its Echo smart speakers appear to mostly get used as expensive weather checkers and egg timers. Or else for playing music — a function that a standard speaker or smartphone will happily perform.
Certainly a voice AI is not something you need with you 24/7 yet. Prodding at a touchscreen remains the standard way of tapping into the power and convenience of mobile computing for the majority of consumers in developed markets.
The thing is, though, it still grates to be ignored. To be told — even indirectly — by one of the world’s wealthiest consumer technology companies that it doesn’t believe your ears exist.
Or, well, that it’s weighed up the sales calculations and decided it’s okay to drop a petite-holed minority on the cutting room floor. So that’s ‘ear meet AirPod’. Not ‘AirPod meet ear’ then.
But the underlying issue is much bigger than Apple’s (in my case) oversized earbuds. Its latest shiny set of AirPods is just an ill-fitting reminder of how many technology defaults simply don’t ‘fit’ the world as claimed.
Because if cash-rich Apple’s okay with promoting a universal default (that isn’t), think of all the less well-resourced technology firms chasing scale for other single-sized, ill-fitting solutions. And all the problems flowing from attempts to mash ill-mapped technology onto society at large.
When it comes to wrong-sized physical kit I’ve had similar issues with standard office computing equipment and furniture. Products that seem — surprise, surprise! — to have been designed by default with a strapping 6ft guy in mind. Keyboards so long they end up gifting the smaller user RSI. Office chairs that deliver chronic back-pain as a service. Chunky mice that quickly rack the hand with pain. (Apple is a historical offender there too I’m afraid.)
The fix for such ergonomic design failures is simply not to use the kit. To find a better-sized (often DIY) alternative that does ‘fit’.
But a DIY fix may not be an option when the discrepancy is embedded at the software level — and where a system is being applied to you, rather than you, the human, choosing to augment yourself with a bit of tech, such as a pair of smart earbuds.
With software, embedded flaws and system design failures may also be harder to spot because it’s not necessarily immediately obvious there’s a problem. Oftentimes algorithmic bias isn’t visible until damage has been done.
And there’s no shortage of stories already about how software defaults configured for a biased median have ended up causing real-world harm. (See for example: ProPublica’s analysis of the COMPAS recidivism tool — software it found incorrectly judging black defendants more likely to reoffend than white defendants. So software amplifying existing racial prejudice.)
Of course AI makes this problem so much worse: systems trained on biased historical data learn those biases and then apply them automatically, at scale.
Which is why the emphasis must be on catching bias in the datasets — before there is a chance for prejudice or bias to be ‘systematized’ and get baked into algorithms that can do damage at scale.
The algorithms must also be explainable. And outcomes auditable. Transparency as disinfectant; not secret black boxes stuffed with unknowable code.
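To make ‘auditable outcomes’ a little more concrete, here is a minimal sketch in Python of the kind of per-group error-rate check a basic audit might start with. To be clear: the data is made up, the column names (“group”, “label”, “prediction”) are hypothetical, and this is not ProPublica’s actual methodology — it only illustrates that the false positive gap ProPublica reported for COMPAS is the sort of disparity a simple check can surface.

```python
# Illustrative sketch only: compare false positive / false negative rates
# across groups, using entirely synthetic data. Column names are hypothetical.
import pandas as pd


def error_rates_by_group(df: pd.DataFrame) -> pd.DataFrame:
    """Compute false positive and false negative rates for each group."""
    rows = []
    for group, sub in df.groupby("group"):
        negatives = sub[sub["label"] == 0]   # people who did not have the outcome
        positives = sub[sub["label"] == 1]   # people who did
        fpr = (negatives["prediction"] == 1).mean() if len(negatives) else float("nan")
        fnr = (positives["prediction"] == 0).mean() if len(positives) else float("nan")
        rows.append({"group": group, "false_positive_rate": fpr, "false_negative_rate": fnr})
    return pd.DataFrame(rows)


# Synthetic example: group "a" gets flagged wrongly far more often than group "b".
data = pd.DataFrame({
    "group":      ["a", "a", "a", "a", "b", "b", "b", "b"],
    "label":      [0,   0,   1,   1,   0,   0,   1,   1],
    "prediction": [1,   0,   1,   1,   0,   0,   0,   1],
})
print(error_rates_by_group(data))
```

A real audit would obviously need real outcome data, careful choices about which groups and which error types matter, and domain expertise to interpret the gaps. The point is only that checks like this are simple enough to run before a system ever ships.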
Doing all this requires huge up-front thought and effort on system design, and an even bigger change of attitude. It also needs massive, massive attention to diversity. An industry-wide championing of humanity’s multifaceted and multi-sized reality, and a commitment to making sure that’s reflected in both data and design choices (and therefore in the teams doing the design and dev work).
You could say what’s needed is a recognition that there’s never, ever a one-size-fits-all plug.
Indeed, that all algorithmic ‘solutions’ are abstractions that make compromises on accuracy and utility. And that those trade-offs can become viciously cutting knives that exclude, deny, disadvantage, delete and damage people at scale.
Expensive earbuds that won’t stay put are just a handy visual metaphor.
And while discussion about the risks and challenges of algorithmic bias has stepped up in recent years, as AI technologies have proliferated — with mainstream tech conferences actively debating how to “democratize AI” and bake diversity and ethics into system design via a development focus on principles like transparency, explainability, accountability and fairness — the industry has not even begun to fix its diversity problem.
It’s barely moved the needle on diversity. And its products continue to reflect that fundamental flaw.
Stanford just launched their Institute for Human-Centered Artificial Intelligence (@StanfordHAI) with great fanfare. The mission: "The creators and designers of AI must be broadly representative of humanity."
121 faculty members listed.
Not a single faculty member is Black. pic.twitter.com/znCU6zAxui
— Chad Loder ❁ (@chadloder) March 21, 2019
Many — if not most — of the tech industry’s problems can be traced back to the fact that inadequately diverse teams are chasing scale while lacking the perspective to realize their system design is repurposing human harm as a de facto performance measure. (Although ‘lack of perspective’ is the charitable interpretation in certain cases; moral vacuum may be closer to the mark.)
As WWW creator Sir Tim Berners-Lee has pointed out, system design is now society design. That means engineers, coders and AI technologists are all working at the frontline of ethics. The design choices they make have the potential to impact, influence and shape the lives of millions and even billions of people.
And when you’re designing society, a median mindset and a limited perspective can never be an acceptable foundation. It’s also a recipe for product failure down the line.
The current backlash against big tech shows that the stakes and the damage are very real when poorly designed technologies get dumped thoughtlessly on people.
Life is messy and complex. People won’t fit a platform that oversimplifies and overlooks. And if your excuse for scaling harm is ‘we just didn’t think of that’ you’ve failed at your job and should really be headed out the door.
Because the consequences of being excluded by flawed system design are also scaling and stepping up, as platforms proliferate and more life-impacting decisions get automated. Harm is being squared. Even as the industry’s drumbeat prediction that everything will be digitized hasn’t skipped a beat.
Which means that horribly biased parole systems are just the tip of the ethical iceberg. Think of healthcare, social welfare, law enforcement, education, recruitment, transportation, construction, urban environments, farming, the military; the list of what will be digitized — and of manual or human-overseen processes that will get systematized and automated — goes on.
Software — runs the industry mantra — is eating the world. That means badly designed technology products will harm more and more people.
But responsibility for sociotechnical misfit can’t just be scaled away as so much ‘collateral damage’.
So while an ‘elite’ design team led by a famous white guy might be able to craft a pleasingly curved earbud, such an approach cannot and does not automagically translate into AirPods with perfect, universal fit.
It’s someone’s standard. It’s certainly not mine.
We can posit that a more diverse Apple design team might have been able to rethink the AirPod design so as not to exclude those with smaller ears. Or make a case to convince the powers that be in Cupertino to add another size choice. We can but speculate.
What’s clear is the future of technology design can’t be so stubborn.
It must be radically inclusive and incredibly sensitive. Human-centric. Not locked to damaging defaults in its haste to impose a limited set of ideas.
Above all, it needs a listening ear on the world.
Indifference to difference and a blindspot for diversity will find no future here.