Saturday, January 31, 2026

What can we do about bias in AI?

Skewed samples

The machines we make reflect our own biases, according to Kristian Hammond, who wrote an article breaking down the different ways an artificially intelligent system can be biased. Each system is a reflection of its interactions with the humans who designed, programmed, trained, or used it.


Data-driven bias results when a system learns from a skewed sample. We often assume the sheer volume of examples used to train machine-learning systems makes this a non-issue, but that isn't the case. A viral video showing HP's face-tracking cameras failing to follow faces with darker skin tones may well be an example of exactly this problem.
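To make the mechanism concrete, here is a minimal Python sketch (a toy example of my own, not Hammond's code or HP's software): a "detector" learns a single template from a training set in which one group is barely represented, and then recognizes that group far less reliably at test time.

```python
import random

random.seed(0)

def make_example(group):
    # Hypothetical one-dimensional "feature": group A clusters near 1.0,
    # group B clusters near 3.0.
    center = 1.0 if group == "A" else 3.0
    return random.gauss(center, 0.5)

# Skewed training set: 95% group A, 5% group B.
train = [make_example("A") for _ in range(950)] + [make_example("B") for _ in range(50)]

# "Training" here just means learning the average feature value as a template.
template = sum(train) / len(train)

def detect(x, tolerance=1.0):
    # The detector only recognizes examples close to the learned template.
    return abs(x - template) <= tolerance

# Balanced evaluation: the detector sees most of group A and misses most of group B.
for group in ("A", "B"):
    test = [make_example(group) for _ in range(1000)]
    hit_rate = sum(detect(x) for x in test) / len(test)
    print(f"group {group}: recognized {hit_rate:.0%}")
```

The point is not the arithmetic but the shape of the failure: nothing in the code is malicious, yet the learned template quietly encodes whoever dominated the training data.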

Your system reflects your bubble

Bias through interaction arises when smart systems learn from interacting with humans. Never has a more cautionary tale of this kind of bias played out as quickly as the day-long life of Microsoft's Tay. Tay was a Twitter chatbot designed to learn from its interactions with human tweeters. As anyone who has ever spent a day in junior high could have predicted, users bombarded Tay with offensive statements. With those as its model, the chatbot became an aggressive racist and misogynist and had to be shut down.
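As a caricature of how that happens, consider this toy Python bot (purely illustrative; it is not Tay's actual architecture, which Microsoft never published): it treats every incoming message as a good example and replies with whatever it has heard most often, so a coordinated flood of abuse becomes its entire vocabulary.

```python
from collections import Counter

class EchoLearner:
    """Toy chatbot: treats every incoming message as a good example to imitate."""

    def __init__(self):
        self.phrases = Counter()

    def learn(self, message):
        # No filtering or moderation: the mob is the training signal.
        self.phrases[message] += 1

    def reply(self):
        # The bot "speaks" by repeating whatever it has heard most often.
        phrase, _count = self.phrases.most_common(1)[0]
        return phrase

bot = EchoLearner()
for msg in ["nice weather today"] * 5 + ["<insert offensive statement here>"] * 500:
    bot.learn(msg)

print(bot.reply())  # the flood of offensive input is now the bot's voice
```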

Emergent bias and similarity bias are a little more complicated; both show up in systems built for personalization.

Similar to the problem of social-news filter bubbles, systems trained to show you what you want to see become more biased the longer they keep doing exactly that.
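A tiny simulation shows how fast that narrowing happens. In this hypothetical Python sketch (my own, not any real recommender), the system shows more of whatever the user has already clicked, and a few early clicks snowball until the feed is far narrower than the user's actual interests.

```python
import random
from collections import Counter

random.seed(0)

topics = ["politics", "sports", "science", "arts"]
clicks = Counter({t: 1 for t in topics})  # start with no strong preference

for _round in range(200):
    # Recommend in proportion to past clicks: "show them what they like".
    shown = random.choices(topics, weights=[clicks[t] for t in topics], k=5)
    # The user clicks something from what they were shown, so early luck compounds.
    clicks[random.choice(shown)] += 1

print(clicks.most_common())  # the counts usually end up heavily skewed toward early favorites
```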

Systems can also have conflicting goals: they are built for one purpose, while their interactions with users push them toward a different one.

Identification is key

Hammond notes that we view AI and machines as cold and impartial. Whether we consider this a design flaw or a great accomplishment, it reinforces the misconception that the smart machines we build are objective.

After looking at each of the ways things can go terribly wrong, I have to agree with Hammond: we need to identify our own biases. At every level, from engineering to product use, we need to think about how we are making, training, and using these systems in order to keep our own flaws from getting a Terminator-style upgrade.

#AIBias

Felix Morgan, Staff Writer
https://felixmorgan.net
Felix is a writer, online-dating consultant, professor, and BBQ enthusiast. She lives in Austin with two warrior-princess-ninja-superheroes and some other wild animals. You can read more of her musings, emo poetry, and weird fiction on her website.