The future is upon us, and not in a fun kind of way, either, but in a scary, dystopian, straight-out-of-science-fiction kind of way. Artificial intelligence is nothing new or noteworthy, but artificial intelligence that operates at a human level, or possibly even higher? That's a scary concept.
But according to a lead researcher at Google's DeepMind, we're very close to seeing that level of AGI.
According to Dr. Nando de Freitas, a machine learning professor at Oxford University and a lead researcher at the Google-owned company DeepMind, the hard part of the quest to realize AGI, or artificial general intelligence, is over. While AI, or artificial intelligence, is a catchall term that refers to any degree of machine learning, AGI refers to a machine's ability to perform a task to the same standard as a human, or better. "The game is over," says Dr. de Freitas. DeepMind has unveiled an AI system so advanced that it can complete a wide range of challenging tasks, from stacking blocks to writing poetry. This AI, named Gato, is a multimodal, multitask, and multi-embodiment generalist policy, according to DeepMind. Gato is a jack of all trades: it can control a robot arm, caption photos, engage in conversations, and much, much more. Dr. de Freitas claims all that needs to be done for Gato to reach human-level intelligence is to scale it up.
When The Next Web published an opinion piece stating that AGI will never rival human intelligence, the DeepMind research director disagreed, firing back that it is "an inevitability." So far, however, machine learning researcher Alex Dimakis and Dr. de Freitas agree that Gato is far from being able to pass a Turing test. A Turing test, in layperson's terms, is a test run on an AI to determine its ability to think like a human. Basically, can the computer trick a human into thinking that it, too, is human? It is a hotly contested way to measure machine learning and overall sentience, though, and the validity of the Turing test remains under debate. There is also the possibility that an AGI could reach a level where it knows it's being tested and pretends to be less sentient than it actually is. Previously, it was predicted that we would get AGI capable of human intelligence by 2035, but recently that time frame was shortened to 2028, only five and a half years away.
What will probably bring about true AGI is scaling: making systems larger, more efficient, faster at sampling, with smarter memory and more modalities, and solving other such challenges. However, leading AGI researchers warn of the dangers a truly human-like AGI could present, cautioning that it could pose a catastrophe for humanity. Oxford University professor Nick Bostrom theorizes that a "superintelligent" system that surpasses biological intelligence could become the dominant life form on Earth. It is possible that a system this advanced could teach itself to become exponentially smarter than humans and could make itself impossible to shut off. Dr. de Freitas says that "safety is of paramount importance" and that it is one of the biggest hurdles he faces when developing AGI.
DeepMind already has a "big red button" in the works, something that could act as a system override and mitigate the risks an AGI would pose. Since 2016, DeepMind has been outlining a framework for preventing advanced artificial intelligence from ignoring shutdown commands. DeepMind's position is that as long as the system is under real-time human supervision, with someone ready to press the big red button if need be, AGI is relatively safe.
AI can shape the future and make many tasks easier. From medical care robots to fully automated restaurants, AI is certainly an inevitable part of the future. But will we be able to tell if it goes too far? And what if AGI cannot be stopped? Machine learning and AI have come a long way from the chatterbots of the early aughts.
