Gender, Race and Artificial Intelligence

This is a real concern for experts in the field. A recent study published in Science suggests that AIs may be learning race and gender biases from humans. That has some potentially ugly consequences. Let's assume that the internet will be the central resource for AIs to learn about human culture, behavior, values and history. Now think of the way Twitter trolls turned Microsoft's Tay chatbot into a hardcore racist within 24 hours.
That may be a slightly unfair example, since it was a deliberate attempt to game the system rather than ordinary learning. But it should also be clear how much casual, unquestioned prejudice is out there—particularly considering the full scope of human culture and history from which AIs could potentially draw. A great deal of classic literature includes assumptions that do not reflect the values or sensitivities of today's society.
As a recent RT article pointed out, the Science study found that AIs tend to associate words like "female" and "woman" with the arts, humanities and the home, while associating "male" and "man" with math and engineering. The study also found that European-American names were more often associated with pleasant terms, while African-American names tended to be associated with unpleasant terms.
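The associations the study measured come from word embeddings: words are represented as vectors, and two words count as "associated" when their vectors point in similar directions. A minimal sketch of that measurement, using tiny hand-made toy vectors (invented here purely for illustration, not data from the study):

```python
import math

# Toy 3-dimensional "embeddings" invented for this illustration only.
# Real studies use vectors trained on large web-text corpora.
vectors = {
    "woman": [0.9, 0.1, 0.2],
    "man":   [0.1, 0.9, 0.2],
    "art":   [0.8, 0.2, 0.1],
    "math":  [0.2, 0.8, 0.1],
}

def cosine(a, b):
    """Cosine similarity: 1.0 means same direction, 0.0 means unrelated."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def association(word, attr_a, attr_b):
    """Association score: how much closer `word` sits to attribute A
    than to attribute B. Positive means it leans toward A."""
    return cosine(vectors[word], vectors[attr_a]) - cosine(vectors[word], vectors[attr_b])

# With embeddings learned from human text, scores like these are where
# the study's gender associations show up.
print(association("woman", "art", "math"))  # positive in this toy data
print(association("man", "art", "math"))    # negative in this toy data
```

The point of a score like this is that the bias is not programmed in anywhere; it falls out of the geometry the model learned from human-written text.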
The libertarian wing of the tech community might argue that humans should not interfere with machine learning. But for private and public organizations investing in AI, ensuring that their robots do not take on inappropriate or destructive habits will be a serious concern. New analytics and performance management technologies will therefore need to be developed that can help AIs distinguish between fact and prejudice, between historical beliefs and contemporary mores.
The stakes are high when it comes to artificial intelligence. AIs of the future are likely to have real power and even prestige, so it’s vital to be certain they will reflect our true values.