Are Robots Inheriting Our Prejudices?

A study suggests that artificial intelligence technology may be learning some bad habits from us.

Many recent advances in artificial intelligence have centered on “machine learning.” The idea is not new, and it is easy enough to grasp. But now that machine learning is finally moving from concept to implementation, grasping the real-world implications of so powerful a technology is another matter altogether. Specifically, experts are starting to worry about what AIs might learn from us.

Doing, Saying and Machine Learning

It’s common for parents to worry that their children will pick up their bad habits. And kids often spend a good part of their childhood listening to parents say: “Don’t do as I do, do as I say”. Parents may mean this partly as a joke, but the concern is real. For instance, a child who grows up around parents who smoke cigarettes has an increased likelihood of becoming a smoker. There’s evidence for that, but it’s also just common sense.

In 1959, Arthur Samuel defined machine learning as giving computers “the ability to learn without being explicitly programmed.” In other words, the ability to do without being told. By implementing machine learning, computers have permission to do as humans do, not as humans say. In which case, people might want to start looking at their habits and asking what kind of bad behaviors AIs might pick up from us.

This is a real concern for experts in the field. A recent study published in Science suggests that AIs may be learning race and gender biases from humans. That has some potentially ugly consequences. Let’s assume that the internet will be the central resource for AIs to learn about human culture, behavior, values and history. Now think of the way Twitter trolls turned Microsoft’s Tay chatbot into a hardcore racist within 24 hours.

Gender, Race and Artificial Intelligence

That may be a slightly unfair example as it was a specific attempt to game the system (getting the robot to do specifically as you say). But it should also be clear how much casual, unquestioned prejudice is out there—particularly considering the full scope of human culture and history from which AIs could potentially draw. A great deal of classic literature includes assumptions that do not reflect the values or sensitivities of today’s society.

As a recent RT article pointed out, the Science study found that AIs tend to associate words like “female” and “woman” with the arts, humanities and the home, while “male” and “man” were associated with math and engineering. Additionally, the study found that European-American names were more often associated with pleasant terms while African-American names tended to be associated with unpleasant terms.
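The kind of association the study measured can be illustrated with a toy sketch. The idea is that words are represented as vectors learned from text, and a word’s “lean” toward one set of concepts over another is measured by comparing cosine similarities. The four three-dimensional vectors below are invented purely for illustration; real experiments use embeddings trained on billions of words of web text.

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Made-up 3-dimensional "embeddings" for illustration only.
embeddings = {
    "woman":       [0.9, 0.1, 0.2],
    "man":         [0.1, 0.9, 0.2],
    "arts":        [0.8, 0.2, 0.1],
    "engineering": [0.2, 0.8, 0.1],
}

def association(word, attr_a, attr_b):
    """How much more strongly `word` associates with attr_a than attr_b.
    Positive means it leans toward attr_a; negative, toward attr_b."""
    w = embeddings[word]
    return cosine(w, embeddings[attr_a]) - cosine(w, embeddings[attr_b])

for w in ("woman", "man"):
    print(w, round(association(w, "arts", "engineering"), 3))
```

With these toy vectors, “woman” scores positive (leans toward “arts”) and “man” scores negative (leans toward “engineering”), which is exactly the pattern the study reported in embeddings trained on real web text; the bias lives in the training data, not in any explicit rule.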

The libertarian wing of the tech community might argue that humans should not interfere with machine learning. But for private and public organizations investing in AI, ensuring that their robots do not take on inappropriate or destructive habits will be a serious concern. Therefore, new analytics and performance management technologies will need to be developed to help AIs distinguish between fact and prejudice, between historical beliefs and contemporary mores.

The stakes are high when it comes to artificial intelligence. AIs of the future are likely to have real power and even prestige, so it’s vital to be certain they will reflect our true values.

By Sam Macklin | April 25, 2017