What can we do about bias in AI?

(TECH NEWS) Every AI system is a reflection of its interactions with the humans who designed, programmed, trained, or used it – and these systems are reflecting our own deep-rooted biases.

Skewed samples

The machines we make reflect our own biases, according to Kristian Hammond, who wrote an article breaking down the different ways an artificially intelligent system can be biased. Each system reflects its interactions with the humans who designed, programmed, trained, or used it.

Data-driven bias results when a system learns from a skewed sample. We often assume this won't be a problem because of the sheer volume of examples fed into machine learning, but that isn't the case. In fact, a viral video showing HP's face-tracking webcam software failing to follow faces with darker skin tones may well have been just such a problem.
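
To make the mechanism concrete, here is a minimal sketch with entirely synthetic data and invented numbers (not HP's system or any real dataset): a classifier trained on a sample dominated by one group scores well on that group and poorly on the group it rarely saw, no matter how many total examples it was given.

```python
# Minimal synthetic sketch of data-driven bias: a model trained on a
# skewed sample under-serves the under-represented group.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, shift):
    # Two synthetic groups whose decision boundary sits in different
    # places, so one linear model can't serve both well unless both
    # groups are well represented in training.
    X = rng.normal(loc=shift, scale=1.0, size=(n, 2))
    y = (X[:, 0] + X[:, 1] > 2 * shift).astype(int)
    return X, y

# Skewed training sample: 95% group A, 5% group B.
Xa, ya = make_group(1900, shift=0.0)
Xb, yb = make_group(100, shift=3.0)
model = LogisticRegression().fit(np.vstack([Xa, Xb]), np.hstack([ya, yb]))

# Balanced held-out sets expose the gap that sheer training volume
# never fixed.
for name, shift in [("group A", 0.0), ("group B", 3.0)]:
    X_test, y_test = make_group(1000, shift)
    print(f"{name} accuracy: {model.score(X_test, y_test):.2f}")
```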

Your system reflects your bubble

Bias through interaction arises when smart systems learn from interacting with humans. There is no faster cautionary tale of this kind than the day-long life of Microsoft's Tay, a Twitter chatbot designed to learn from its interactions with human tweeters. As anyone who has ever spent a day in junior high could have predicted, users bombarded Tay with offensive statements. With these as its model, the chatbot became an aggressive racist and misogynist and had to be shut down.
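
A toy illustration of how this happens (the bot, its learning rule, and the placeholder messages are all hypothetical, not Tay's actual design): a system that adopts whatever users say as material for its own replies will mirror a coordinated flood of abuse.

```python
# Hypothetical sketch of bias through interaction: a bot that learns
# candidate replies verbatim from users drifts toward whatever its
# users feed it. Placeholder strings stand in for actual abuse.
import random

class EchoLearnerBot:
    def __init__(self):
        self.replies = ["Hello!", "Nice to meet you."]

    def observe(self, user_message):
        # The Tay-style flaw: every user message becomes training
        # data, with no filter on what gets learned.
        self.replies.append(user_message)

    def reply(self):
        return random.choice(self.replies)

bot = EchoLearnerBot()
for msg in ["you are great", "OFFENSIVE_1", "OFFENSIVE_2", "OFFENSIVE_3"]:
    bot.observe(msg)

offensive = sum("OFFENSIVE" in r for r in bot.replies)
print(f"{offensive} of {len(bot.replies)} learned replies are offensive")
# After a coordinated flood of abuse, most sampled replies will be
# abusive too: the bot mirrors its input.
```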

Emergent bias and similarity bias are a little more complicated; both arise in systems aimed at personalization.

Much like the problem of social news bubbles, a system trained to show you what you want to see becomes more biased the longer it does so.
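
Here is a rough sketch of that feedback loop (the topics, click rates, and update rule are invented assumptions, not any real recommender's algorithm): a small preference, reinforced every time it earns a click, compounds until it dominates the feed.

```python
# Invented sketch of the personalization feedback loop: a recommender
# that up-weights whatever gets clicked shows more of it, which earns
# more clicks, which up-weights it further.
import random

random.seed(0)
topics = ["politics", "sports", "science", "arts"]
weights = {t: 1.0 for t in topics}

def recommend():
    # Sample a topic in proportion to its current weight.
    r = random.uniform(0, sum(weights.values()))
    for t in topics:
        r -= weights[t]
        if r <= 0:
            return t
    return topics[-1]

# A user who clicks politics stories only slightly more often.
for _ in range(500):
    shown = recommend()
    if random.random() < (0.6 if shown == "politics" else 0.4):
        weights[shown] *= 1.05  # reinforce whatever got engagement

total = sum(weights.values())
print({t: round(w / total, 2) for t, w in weights.items()})
# A small preference compounds until one topic dominates the feed.
```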

Systems can also have conflicting goals: they are built to serve a specific purpose, but their interactions with users push them toward a different one.
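
As a hypothetical illustration (the scenario, rewards, and policy are all made up): imagine a tutoring system whose purpose is teaching but whose tuning signal is engagement. If users enjoy easy wins more, the engagement goal quietly wins out.

```python
# Hypothetical sketch of conflicting goals: a tutoring system built
# to teach, but tuned to maximize engagement, drifts toward easy
# questions because those keep users happiest.
import random

random.seed(0)
reward = {"easy": 0.0, "hard": 0.0}   # accumulated engagement signal
served = {"easy": 1, "hard": 1}       # start at 1 to avoid divide-by-zero

def pick_difficulty():
    # Mostly exploit whichever difficulty has engaged users most so far.
    if random.random() < 0.1:
        return random.choice(["easy", "hard"])
    return max(reward, key=lambda d: reward[d] / served[d])

for _ in range(500):
    d = pick_difficulty()
    served[d] += 1
    # Users enjoy easy wins more, even though hard problems teach more.
    reward[d] += random.gauss(0.7 if d == "easy" else 0.5, 0.1)

print(served)  # "easy" dominates: the engagement goal beat the learning goal
```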

Identification is key

Hammond notes that we view AI and machines as cold and indiscriminate. Whether we see this as a design flaw or a great accomplishment, it reinforces the misconception that the smart machines we make are objective.

After looking specifically at each of the ways things can go terribly wrong, I have to agree with Hammond: we need to identify our own biases. At every level, from engineering to product use, we need to think about how we make, train, and use these systems in order to keep our own flaws from getting a Terminator-style upgrade.
