Are You Ready For Sexist Robot Coworkers?

Robots are becoming part of the workforce faster than many industry experts predicted. This can have particularly unsettling implications for women and minorities.

Take, for example, my experience at the recent Collision tech conference, where I interacted with a particularly adorable humanoid robot that’s already in operation in the US, France, and Japan. It told me about its ability to determine whether people are happy or sad based on whether they’re smiling. The interaction went something like this:

Right now, I can see if you’re smiling or not.
(Robot pauses. I smile.)
I don’t see a smile.
(Robot pauses while I smile bigger.)
Come on, we’re having a good time here.
(Robot pauses. I’m no longer smiling.)
Why aren’t you smiling? Is it because I’m not cute?
(Robot lowers head, looking forlorn.)

Initially, the charming 4-foot robot had delighted me. After this exchange, I got scared.

At best, the robot’s script was tone-deaf. Something tells me the robot’s developers have never been admonished for having a “resting b*tch face” or encouraged to smile more.

At worst, it felt manipulative, even coercive. As one woman put it, it’s similar to how a man might say, “Come on, have another drink. We’re having a good time here,” when trying to get a woman to sleep with him.

Let me be clear: I don’t think this robot was trying to sleep with me – this wasn’t a vodka-peddling sexbot – but I do think it was programmed by someone who had absolutely no idea how it would sound to a woman.

Tech isn’t neutral. It’s a reflection of the beliefs and attitudes of the people who make it. Mariya Yao, Head of Research & Design at TOPBOTS, a strategy and research firm for enterprise artificial intelligence and robots, writes, “AI experts pride themselves on being rational & data-driven, but can be blind to issues not captured with numbers... Biases of creators trickle down to their creations.”

Robots do what we tell them to. If the people determining how robots “think,” interact, and behave aren’t clued into how those robots will affect women and minorities, the robots will simply perpetuate the problems marginalized communities already face.

Here are two new challenges you can expect your robot coworker to introduce:

1. If they aren’t beta-tested with women and minorities, they’re unlikely to work effectively with them.

While the script made me uncomfortable, it’s also worth noting that the tech itself didn’t work for me. I was smiling, but the robot didn’t recognize it. When a male friend went through the same prompts, the robot recognized his smile instantaneously.

It’s impossible to say whether this had to do with my gender, but it wouldn’t be without precedent. At Collision, Telle Whitney, CEO and President of the Anita Borg Institute for Women in Technology, described voice recognition software for nurses that bombed in medical settings because it didn’t recognize women’s voices.

In addition to its challenges with women, the tech industry is struggling with people of color. Facial recognition software may have a racial bias, with some programs far better at correctly identifying white faces than African American ones. This has particularly problematic implications as law enforcement in cities across the country relies more heavily on facial recognition to identify potential criminals.

When the problems are logistical or hinder the robot’s performance (like ineffective voice or facial recognition software), we can expect companies employing robots to take notice and adjust the technology. When the issue is interpersonal (like tone-deaf language), the impact on the bottom line is much harder to quantify, and advocating for change is less likely to succeed.

2. Once they’re hired, they’ll be extremely difficult to fire.

Robots aren’t cheap. If a company has successfully integrated robots into its operations and the robots are serving their intended purpose well, the company has no incentive to investigate employee concerns.

Think about what Susan Fowler was told at Uber: the manager who sexually harassed her was a “high performer” and therefore only got a warning. Now imagine that the high performer isn’t one human manager but thousands of robots fully integrated into the company’s operations.

In Fowler’s case, the very department designed to govern “human relations” was complicit and refused to take action. With robots, it’s possible that even the best HR team couldn’t ensure appropriate oversight, because there’s no human relationship to manage.

Then there’s the question of who within a company would handle these complaints, and what their end goal would be. Would it be AI experts tasked with ensuring the robots operate effectively? An HR team at least partially focused on diversity, equity, and inclusion? Or perhaps some newfangled HRR Department covering Human/Robot Relations?

It’s difficult to imagine a world in which the feelings of the more readily replaceable sentient employees take priority over the enormous capital investment a corporation has made integrating robots into its operations. And the more robots are relied upon, the more difficult it will be to effect change.

As robots become more integrated into our culture – and they will – companies need to seriously consider what oversight mechanisms are in place to ensure human workers feel safe and supported. And, for my money, getting more women and minorities in tech and in company leadership has never felt more urgent.