
AI isn’t ready to make unsupervised decisions, and it may never be

AI has come a long way, but we've come to the conclusion that it's not ready to make unsupervised decisions. Here's why:


We have been discussing AI a lot recently, probably because it's wiggling its way into our everyday lives. Remember when we fully believed having a cell phone on hand 24/7 wouldn't be a thing? Well, it is, and AI may be right on its heels.

CompTIA reported AI statistics for 2022, stating:

“86% of CEOs report that AI is considered mainstream technology in their office as of 2021. 91.5% of leading businesses invest in AI on an ongoing basis.”

It doesn’t seem like AI is going anywhere anytime soon. In reality, it’s likely to become more commonplace.

Most, if not all, of us can agree that artificial intelligence is not ready to make unsupervised decisions. I don't think it ever will be, not as long as bias, prejudice, and misinformation continue to float around.

In fact, the Harvard Business Review made a point to say:


“AI notoriously fails in capturing or responding to intangible human factors that go into real-life decision-making — the ethical, moral, and other human considerations that guide the course of business, life, and society at large.”

Now, that doesn't mean AI can't make decisions at all, but they should be limited to things that run solely on numbers, like calculating financial risk or helping assess yearly business performance. When it comes to choices that require a more human touch, AI should be taken out of the equation.

For instance, at the tail end of my call center career, the company switched to monitoring calls with an AI system. The system would detect inflection and tone within each call and give the agent a rating. Agents with lower ratings would face disciplinary action determined by the system.

It seems like a clever way to free up managers' time for other things, right? However, it didn't work that way. Several of the best agents in the building were facing severe disciplinary action because of the AI. At one point, 25 of the 30 agents on my team were on their second warning by the end of the first month.

Managers, who had previously been told to let the AI "do its job," decided to listen to the calls themselves. They soon discovered that the agents were not the source of the problem. It was the customers' shouting and background commotion that caused the AI to flag verbal abuse; the system simply couldn't differentiate where the abuse was coming from.
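To make that blind spot concrete, here's a minimal, hypothetical sketch in Python. The word list, thresholds, and function names are all made up for illustration (the real system was presumably an audio model, not this): it contrasts a scorer that rates the whole call against one that only rates the agent's side.

```python
# Hypothetical sketch of the failure mode: a call scorer that rates the
# whole transcript punishes the agent for the customer's shouting, while
# a speaker-aware scorer does not. All names and markers are illustrative.

ABUSIVE_MARKERS = {"useless", "idiots", "stupid", "incompetent"}

def utterance_score(text: str) -> int:
    """Crude 'verbal abuse' signal: all-caps shouting plus hostile words."""
    words = text.split()
    shouting = sum(1 for w in words if w.isalpha() and w.isupper() and len(w) > 2)
    hostile = sum(1 for w in words if w.strip(".,!?").lower() in ABUSIVE_MARKERS)
    return shouting + hostile

def naive_call_rating(transcript):
    """Speaker-agnostic: every utterance counts against the agent."""
    return sum(utterance_score(text) for _, text in transcript)

def diarized_call_rating(transcript):
    """Speaker-aware: only the agent's own words affect their rating."""
    return sum(utterance_score(text) for speaker, text in transcript
               if speaker == "agent")

call = [
    ("agent",    "Thank you for calling, how can I help?"),
    ("customer", "THIS is USELESS, you people are IDIOTS!"),
    ("agent",    "I'm sorry for the trouble, let me fix that right now."),
]

print(naive_call_rating(call))     # 3 -> agent flagged for discipline
print(diarized_call_rating(call))  # 0 -> agent did nothing wrong
```

A production system would use trained audio models rather than word lists, but the flaw is the same: without knowing who is speaking, the angriest voice on the line sets the agent's score.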

At that point, though, it was too late. Those who faced the harshest scrutiny eventually quit, and one agent was fired before their call had even been properly reviewed. By the time I left the position, every call the AI flagged and graded had to be reviewed by a manager. The three calls per agent per week that managers reviewed before the AI turned into hundreds per month.

A more serious use of AI has been in court decisions. Back in 2017, Brookings reported on its use within the courtroom. It stated that COMPAS, the risk-assessment AI in question at the time, incorrectly flagged Black defendants as likely to reoffend more often than it did white defendants. Though COMPAS was questioned, its makers insisted that the AI was built solely on historical data.


The problem with that is that, historically, people of color have been judged more harshly than white individuals. Train a model on those outcomes and it will keep judging people of color guilty more often than not, even when they are innocent. The cycle continues, and whatever 'good intentions' the AI may have been built on are gone. Sentencing and parole decisions should be made by a human, not an AI. An algorithm can't take into account good behavior in prison, willingness to change, or any of the other factors that make someone human.
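Here's a toy illustration of that feedback loop (fabricated data, not the real COMPAS model): if historical records over-flagged one group, a model that simply learns from those records keeps over-flagging that group, even when actual outcomes were identical.

```python
# Toy illustration (fabricated data, not the real COMPAS model) of how a
# model trained on biased historical labels reproduces the bias.

from collections import defaultdict

# (group, actually_reoffended, historically_labeled_high_risk)
# Both groups actually reoffended at the same rate (2 of 4), but the
# historical labels flagged group "B" far more often.
history = [
    ("A", False, False), ("A", False, False), ("A", True, True), ("A", True, False),
    ("B", False, True),  ("B", True, True),   ("B", True, True), ("B", False, False),
]

# "Training" here just learns each group's historical flag rate.
counts = defaultdict(lambda: [0, 0])  # group -> [times_flagged, total]
for group, _, flagged in history:
    counts[group][0] += flagged
    counts[group][1] += 1

risk = {g: flagged / total for g, (flagged, total) in counts.items()}
print(risk)  # {'A': 0.25, 'B': 0.75}

# Two brand-new defendants with identical records, differing only by group:
for group in ("A", "B"):
    print(group, "predicted high risk:", risk[group] >= 0.5)
# A predicted high risk: False
# B predicted high risk: True  <- historical bias, faithfully reproduced
```

Swap in a real model and real records and the arithmetic gets fancier, but the loop doesn't change: biased labels in, biased predictions out.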

AI simply isn't ready to groove to the music on its own. If poor design goes into the AI, poor choices will come out. This is why I firmly believe it may never be ready to make decisions on its own.

The human condition just won't allow for it, because no human is perfect, with pristine moral standing and a heart of gold. We all have flaws and opinions others may not agree with, meaning our AI will too. It's hard to say which direction artificial intelligence will go, but we have to keep taming our own thoughts and feelings if it is ever to make choices based on actual facts and genuinely unbiased data.

A native New Englander who migrated to Austin on a whim, Stephanie Dominique is a freelance copywriter, novelist, and certificate enthusiast. When she's not getting howled at by two dachshunds or inhaling enough sugar to put a giant into shock, she is reading, cooking or writing about her passions.


