The uptick in AI-generated content in recent months has inspired a fierce debate amongst both critics and supporters regarding ethics and privacy. One AI tool, ChatGPT, stands out as a particularly menacing contender for the “most likely to be used for evil” award.
ChatGPT has only been in the public eye for a little over a month. In that time, experts have predicted a wide array of gloomy scenarios, ranging from a rise in undetectable plagiarism to flat-out racial profiling and the spread of misinformation. MakeUseOf adds another fun project to that tally: convincing scams.
Like any machine learning tool, ChatGPT draws from virtually countless sources across the internet to create its content. Because it draws on such a broad base, ChatGPT is more than capable of producing readable, convincing, and even likable copy.
It doesn’t take a genius to realize that this copy could be used to scam people more effectively than ever before, especially because, as MakeUseOf points out, the most common advice for spotting a scam is to look for poor spelling or grammar in the offending message. ChatGPT is advanced enough to avoid these errors and instead craft completely plausible messages to unsuspecting recipients.
Another glaring problem ChatGPT poses on the scam front is its ability to vary its responses. Because ChatGPT returns a different answer, even if only a slightly different one, each time it is prompted, anyone using it for nefarious ends can generate a variety of unique messages without any extra effort. This is an issue for two reasons, the most notable being Google’s inability to recognize and flag non-repeating messages as scams.
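To see why non-repeating messages are hard to catch, consider a minimal sketch of a duplicate-based filter. This is a hypothetical illustration, not how Google's spam filtering actually works: the `fingerprint` and `flag_repeats` functions and the threshold are invented for the example.

```python
import hashlib
from collections import Counter

def fingerprint(message: str) -> str:
    """Hash a whitespace- and case-normalized copy of the message,
    so verbatim (or near-verbatim) repeats collide to one key."""
    normalized = " ".join(message.lower().split())
    return hashlib.sha256(normalized.encode()).hexdigest()

def flag_repeats(messages, threshold=3):
    """Flag any message whose fingerprint appears `threshold` or more times."""
    counts = Counter(fingerprint(m) for m in messages)
    return [m for m in messages if counts[fingerprint(m)] >= threshold]

# A classic copy-pasted scam is caught: every copy hashes identically.
copied = ["Dear friend, send $100 to claim your prize."] * 5

# Lightly reworded AI-generated variants each produce a unique
# fingerprint, so a repetition-based filter flags none of them.
varied = [
    "Dear friend, send $100 to claim your prize.",
    "Hello friend, wire $100 to collect your winnings.",
    "Greetings! Transfer $100 and your prize is yours.",
]
```

Here `flag_repeats(copied)` catches all five identical copies, while `flag_repeats(varied)` returns an empty list, which is exactly the evasion the article describes.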
The other issue is that recipients themselves are unlikely to recognize AI-generated scam messages as variations on a familiar script, making them that much more effective. Virtually anyone with an email address has received some version of the scam that offers to transfer an obscene amount of money from a dying relative (or a surprise member of royalty, and so on); the likelihood that ChatGPT would repeat this well-known formula verbatim is almost zero.
Regulation around AI-generated content is still relatively sparse. In the coming months, ChatGPT may very well inspire a new wave of legislation, but it’s hard to see how an AI tool that writes so convincingly could be filtered with the same effectiveness as current spam filters.
Jack Lloyd has a BA in Creative Writing from Pacific University in Forest Grove; he spends his writing days using his degree to pursue semicolons, freelance writing and editing, Oxford commas, and enough coffee to kill a bear. His infatuation with rain is matched only by his dry sense of humor.