What can we do about bias in AI?

(TECH NEWS) Each AI system is a reflection of its interactions with the humans who designed, programmed, trained, or used it – and these systems are reflecting our own deep-rooted biases.


Skewed samples

The machines we make reflect our own biases, according to Kristian Hammond, who wrote an article breaking down the different ways an artificially intelligent system can be biased. Each system is a reflection of its interactions with the humans who designed, programmed, trained, or used it.

Data-driven bias results when a system learns from a skewed sample. We often assume this won't be a problem because of the sheer volume of examples used to train machine learning systems, but that isn't the case. In fact, a viral video showing that HP's face-tracking cameras failed to follow darker-skinned faces may be an example of just such a problem.
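
To make the skewed-sample idea concrete, here is a minimal sketch in Python (my own toy example, not from Hammond's article): a classifier trained on data where one group is heavily under-represented ends up far less accurate for that group, even though the overall pile of training examples looks big. The group names, shifts, and sample sizes are all illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, shift):
    """Synthetic two-class data whose decision boundary depends on a group-specific shift."""
    X = rng.normal(loc=shift, scale=1.0, size=(n, 2))
    y = (X[:, 0] + X[:, 1] > 2 * shift).astype(int)
    return X, y

# Skewed training sample: 950 examples from group A, only 50 from group B.
X_a, y_a = make_group(950, shift=0.0)
X_b, y_b = make_group(50, shift=3.0)
model = LogisticRegression().fit(np.vstack([X_a, X_b]),
                                 np.concatenate([y_a, y_b]))

# On balanced held-out data, the under-sampled group fares much worse.
X_a_test, y_a_test = make_group(1000, shift=0.0)
X_b_test, y_b_test = make_group(1000, shift=3.0)
print("accuracy on group A:", model.score(X_a_test, y_a_test))
print("accuracy on group B:", model.score(X_b_test, y_b_test))
```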

Your system reflects your bubble

Bias through interaction comes when smart systems learn from interacting with humans. Never has a more cautionary tale of this kind of bias come as quickly as the day-long life of Microsoft's Tay. Tay was a Twitter chatbot designed to learn from its interactions with human tweeters. As anyone who has ever spent one day in junior high could have predicted, human users bombarded Tay with offensive statements. With this as a model, the chatbot became an aggressive racist and misogynist and had to be shut down.
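
As a toy illustration of interaction bias (my own sketch, not how Tay actually worked), consider a bot that learns replies verbatim from users with no moderation step at all: it will repeat whatever the loudest users teach it.

```python
import random
from collections import defaultdict

class ParrotBot:
    """A bot that learns replies verbatim from users, with no filtering at all."""

    def __init__(self):
        self.learned = defaultdict(list)   # prompt -> replies users have taught it

    def learn(self, prompt, user_reply):
        # There is no moderation step here; that omission is the whole problem.
        self.learned[prompt].append(user_reply)

    def respond(self, prompt):
        replies = self.learned.get(prompt)
        return random.choice(replies) if replies else "Tell me more!"

bot = ParrotBot()
bot.learn("what do you think of people?", "People are great!")
# A coordinated crowd can teach the bot anything, and it has no way to refuse:
bot.learn("what do you think of people?", "<something hateful>")
print(bot.respond("what do you think of people?"))
```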

Emergent bias and similarity bias are a little more complicated, but both have to do with systems aimed at personalization.

Similar to the problem of social news bubbles, systems trained to show you what you want to see become more biased the longer they do exactly that.
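
Here is a tiny simulation of that feedback loop (again my own sketch, with made-up topics and numbers): a recommender that keeps serving whatever got clicked before ends up over-showing a single topic, even though the simulated user's true interests are fairly broad.

```python
import numpy as np

rng = np.random.default_rng(1)
topics = ["politics", "sports", "science", "arts"]

# Assumed true preferences: the user is mildly interested in everything.
true_interest = np.array([0.30, 0.25, 0.25, 0.20])

clicks = np.ones(len(topics))   # click counts, seeded with a small prior
shown = np.zeros(len(topics))   # how often each topic was recommended

for _ in range(5000):
    # The feedback loop: recommend in proportion to past clicks.
    p_show = clicks / clicks.sum()
    item = rng.choice(len(topics), p=p_show)
    shown[item] += 1
    # The user clicks according to their true, broad interests.
    if rng.random() < true_interest[item]:
        clicks[item] += 1

for topic, share in zip(topics, shown / shown.sum()):
    print(f"{topic:8s} shown {share:.0%} of the time")
```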

Systems can also have conflicting goals: they are built for one specific purpose, but interactions with users push them toward a different one.


Identification is key

Hammond notes that we view AI and machines as cold and indiscriminating. Whether we see this as a design flaw or a great accomplishment, it reinforces the misconception that the smart machines we make are objective.

After looking specifically at each way things can go terribly wrong, I have to agree with Hammond: we need to identify our own biases. At every level, from engineering to product use, we need to think of how we are making, training, and using these systems in order to prevent our own flaws from getting a Terminator-style upgrade.


Felix is a writer, online-dating consultant, professor, and BBQ enthusiast. She lives in Austin with two warrior-princess-ninja-superheroes and some other wild animals. You can read more of her musings, emo poetry, and weird fiction on her website.
