Long before artificial intelligence (AI) was a reality, science fiction novels and films warned us about the potentially catastrophic dangers of giving machines too much power.
Now that AI actually exists, and is in fact fairly widespread, it may be time to consider the technology's potential drawbacks and dangers before we find ourselves in a nightmarish dystopia the likes of which we've only begun to imagine.
Experts from industry and academia have done exactly that in a recently released 100-page report, "The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation."
The report was written by 26 experts over the course of a two-day workshop held in the UK last month. The authors broke the potential malicious uses of artificial intelligence into three categories: digital, physical, and political.
The digital category covers the ways hackers and other criminals can use these advances to hack, phish, and steal information more quickly and easily. AI can be used to create fake emails and websites that steal information, or to scan software for vulnerabilities far faster than a human can. AI systems can even be developed specifically to fool other AI systems.
Physical uses include AI-enhanced weapons that automate military or terrorist attacks. Commercial drones can be fitted with artificial intelligence, and automated vehicles can be hacked for use as weapons. The report also warns of remote attacks, since AI-driven weapons can be controlled from afar, and, most alarmingly, of "robot swarms," which are exactly what they sound like.
Lastly, the report warned that artificial intelligence could be used by governments and other special interest entities to influence politics and generate propaganda.
AI systems are getting unnervingly good at generating fake images and videos, a skill that would make it all too easy to create propaganda from scratch. AI can also be used to identify the most important and vulnerable targets for such propaganda, a potential practice the report calls "personalized persuasion." The technology can even be used to suppress dissent by scanning the internet for dissenting opinions and removing them.
The overall message of the report is that this technology is "dual use": the same AI can be helpful or harmful to humans, depending on the intentions of the people deploying it.
That means that for every positive advancement in AI, there could be a villain developing a malicious use of the technology. Experts are already working on solutions, but they won’t know exactly what problems they’ll have to combat until those problems appear.
The report concludes that all of these evil-minded uses for these technologies could easily be achieved within the next five years. Buckle up.