OpenAI, the creator of ChatGPT, is being sued for libel after the controversial AI tool falsely accused a Georgia radio host of a variety of illicit behavior amounting to embezzlement.
Gizmodo reports that the radio host, Mark Walters, surfaced when a journalist asked ChatGPT for a summary of a legal case, SAF v. Ferguson, during his research. The resulting summary mentioned Walters in passing, claiming that he had been accused of embezzling money from the Second Amendment Foundation (SAF) during his time as the organization’s treasurer and CFO.
Chief among the many problems with this accusation is the fact that Walters was never employed by–or officially associated with–the SAF, much less in the capacity of treasurer or CFO.
Walters is suing on the basis that these claims could subject him or his reputation to “public hatred, contempt, or ridicule,” and both he and his lawyer have categorically rejected the accusation as “false and malicious.” Gizmodo also notes that a quick pass over SAF v. Ferguson reveals no mention of Walters whatsoever.
For its part, OpenAI has acknowledged that ChatGPT and other large language models can occasionally “hallucinate” and regurgitate inaccurate information, describing the act of catching these mistakes as a “critical step towards building aligned AGI.” The company has also promised that a more accurate version is on its way.
Dystopian as it may be, ChatGPT’s fever dream may not, in and of itself, be enough to support a libel suit. Eugene Volokh, a professor of law at UCLA and the founder of The Volokh Conspiracy–an acclaimed online legal resource–expresses doubts about the suit’s viability, citing a lack of evidence that “actual malice” was present in OpenAI’s operation.
“There may be recklessness as to the design of the software generally, but I expect what courts will require is evidence OpenAI was subjectively aware that this particular false statements [sic] was being created,” he explains.
Volokh also cited Section 230 of the Communications Decency Act–a provision that, among other things, protects websites quoting other sites’ information from libel suits–as a possible protection, but adds that ChatGPT’s outright fabrication of content probably exempts it from Section 230 immunity, saying that “if the program composes something word by word, then that composition is the company’s own responsibility.”
It’s worth noting that, while this is the first time someone has sued OpenAI for ChatGPT’s misinformation, it’s far from the first time ChatGPT has generated false claims. Whether or not this lawsuit proves successful, its visibility could very well spur others to…uh…follow suit.
Jack Lloyd has a BA in Creative Writing from Forest Grove's Pacific University; he spends his writing days using his degree to pursue semicolons, freelance writing and editing, Oxford commas, and enough coffee to kill a bear. His infatuation with rain is matched only by his dry sense of humor.
