How to avoid building a biased AI
“Alexa, tell me the news?”
This is probably not far off something you’ve absentmindedly asked your Amazon Echo, Google Assistant, Cortana or Siri while sipping your morning coffee, letting her neutral, cordial tones shower you with information about the world.
The popularity of Alexa and other smart speakers is increasing, and according to YouGov these devices are now present in 11% of all British homes. It’s predicted that internet-connected devices such as smart speakers, smartphones and web-connected TVs will outnumber actual humans by 2021. Yet we know so little about them: how they work, why they act the way they do and how they are shaping our future.
Let’s start at the beginning of voice robot history. The familiar automated voices we have today are actually the grandchildren of those used in the warning systems of World War II fighter aircraft. A calm, tranquil, cordial, female voice was favoured by the 1940s fighter pilots, who lovingly named the announcer “Bitching Betty”.
Betty quickly became the default vocal idealisation of virtual help, and her voice characteristics and associations echo throughout the history of similar systems. The voice of the London Underground is named Sonia, apparently because she gets on ya nerves. Betty’s British military cousin is known as “Nagging Nora”. Charming.
In her article “Gendering a Warbot”, Dr Heather Roff, Senior Research Analyst at Johns Hopkins University’s Applied Physics Laboratory, explains: “Masculine humanoid robots will be deemed ideal warfighters, while feminine humanoid robots will be tasked with research or humanitarian efforts, thereby reinstituting gendered roles.”
It seems we keep building more Betties simply because we’re following the lead of a team of all-male, all-white, Ivy-League-educated pilots from 80 years ago. Do we ever stop to consider why we associate the voice of a supermarket till more readily with a woman, and that of a super-destructive fighter robot with a gruff man who assures us that he’ll be back? We are whittling our unconscious biases into the robots and AI that we’re building, selling and welcoming into our homes.
So what’s the problem with fighterbots mimicking males and say, nursing robots having more female characteristics? It’s just a machine after all.
As Roff explained, robots reinforce and confirm antiquated biases about gender roles. But we must also consider that, at a time when we increasingly hand decisions about our daily lives over to machines, those machine decision-makers may themselves be biased. Several examples of prejudiced robots have already been popping up in the news.
According to Reuters, Amazon recently “fired” the recruiting robot it had spent almost five years developing because it was penalising female applicants. And according to The Globe and Mail, researchers at the University of Washington studied Google’s search engine and also found a bias against women. At the time of the study, 27% of American CEOs were women, but a Google Image search for “CEO” showed women in only 11% of the images, with “CEO Barbie” ranking higher than most real ones.
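For the technically curious, here is a toy sketch of how a hiring model like Amazon’s can pick up this kind of bias without anyone programming it in. None of this is Amazon’s actual code; the data and feature names are invented purely for illustration. The point is that a model trained on historical hiring decisions that favoured men will happily learn to mark down anything on a CV that merely correlates with being a woman.

```python
# Toy illustration (not Amazon's actual system): a resume screener trained on
# historical hiring decisions. All data and feature names are invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000

# Synthetic "historical" data: past hiring skewed heavily towards men,
# so the label correlates with gender even though skill drives performance.
skill = rng.normal(size=n)
is_woman = rng.integers(0, 2, size=n)
hired = ((skill + 1.5 * (1 - is_woman) + rng.normal(scale=0.5, size=n)) > 1.0).astype(int)

# A proxy feature the model can "see" on the CV, e.g. membership of a
# women's society -- correlated with gender, irrelevant to skill.
womens_society = (is_woman & (rng.random(n) < 0.7)).astype(int)

X = np.column_stack([skill, womens_society])
model = LogisticRegression().fit(X, hired)

print("weight on skill:           %+.2f" % model.coef_[0][0])
print("weight on women's society: %+.2f" % model.coef_[0][1])
```

Run the sketch and the weight on the “women’s society” feature comes out negative, even though that feature says nothing about ability. The model hasn’t been told to discriminate; it has simply memorised a bias that was already sitting in its training data.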
Sexism is just one of the challenges facing consciousness creators. Stories of racist technological dysfunction are becoming more frequent, from iPhones failing to tell apart the faces of women with East Asian features, to the government having to intervene in the police’s controversial use of facial recognition technology.
In a recent study of several commercial AI facial-analysis programmes, led by MIT researcher Joy Buolamwini with co-author Timnit Gebru, the systems showed a 0.8% error rate when analysing the faces of light-skinned men, compared to a 34% error rate when analysing the faces of dark-skinned women. While technology is supposed to be the great blind equaliser, offering universal benefits, it seems that at times AI is unintentionally tailored to the white man.
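If you want a feel for what an audit like this measures, the sketch below shows the basic idea of a disaggregated evaluation: rather than quoting one overall accuracy figure, you break the error rate down by demographic subgroup. The data is made up and the code illustrates only the method, not the study’s own tooling.

```python
# Sketch of a disaggregated accuracy audit: report error rates per subgroup
# instead of one overall number. The rows here are placeholders; a real audit
# would use a labelled benchmark of thousands of faces.
from collections import defaultdict

# (predicted_gender, true_gender, subgroup) for each face analysed
results = [
    ("male", "male", "lighter-skinned men"),
    ("male", "female", "darker-skinned women"),
    ("female", "female", "darker-skinned women"),
    ("male", "male", "lighter-skinned men"),
    # ... thousands more rows in a real audit
]

errors = defaultdict(lambda: [0, 0])  # subgroup -> [wrong, total]
for predicted, actual, subgroup in results:
    errors[subgroup][1] += 1
    if predicted != actual:
        errors[subgroup][0] += 1

for subgroup, (wrong, total) in errors.items():
    print(f"{subgroup}: {100 * wrong / total:.1f}% error rate ({wrong}/{total})")
```

A single headline accuracy number can look superb while hiding exactly the kind of 34% failure rate the study uncovered; per-group figures make the gap impossible to miss.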
Robots are built by humans, and so reflect the way that humans interact. The good news is that if we don’t like the reflection we see, it is within our power to change it. A simple solution might be to ensure that the teams building robots are representative of the populations the robots are intended to serve. As the founder of any tech-related company will know, hiring such a workforce is no easy task, but the opportunity to be seized is also a big one. Making tech that serves more people appropriately will increase your market potential. Making tech that serves more people will shape the future.
So what can we do? We can start by resetting: ensuring that we are doing everything we can to create a work environment, recruitment process and company culture that is inclusive towards the non-typical tech employee, and encouraging others to do the same. And take a leaf out of Nora’s book, and nag. Be self-critical, be open, ask questions and educate yourself about your own biases and their potential impact.
Not to be overly dramatic, but developing products on a basis that isn’t representative can be dangerous, even fatal. Take the example of car safety standards and the crash test dummy. By testing with dummies modelled on a roughly 6-foot, 180 lb man, the industry has unintentionally put at risk anyone shorter or lighter. This leaves women and children 17% more likely to die in a car crash when controlled for other factors. Let’s not make similarly dangerous assumptions in digital when designing, for example, self-driving cars.
We must question the technology we create, sell, use and abuse. Your Alexa has probably answered hundreds of your questions. Rebooting our sexist robots may not start with the robots at all, but with more people asking themselves why their robots are the way they are, how this came to be, and whether they are happy with the results.
Perhaps the correct question to ask is “Alexa, why are you a woman?”