A personal injury lawyer used ChatGPT for research when preparing a legal brief for federal court. The cases generated by the OpenAI tool were fake, and the lawyer now faces sanctions. It’s one of the first cases in the legal community dealing with AI hallucinations, which are defined as AI-generated statements that are either factually incorrect or nonsensical given the prompt.
Roberto Mata alleges that he was injured by a metal serving cart on a 2019 Avianca flight and is suing the Colombia-based airline. The case is pending in the Southern District of New York. Avianca sought to have the case dismissed, and in their response, Mata’s lawyers cited several cases to show precedent. Judge P. Kevin Castel found that six of those cases were made up, calling it an “unprecedented circumstance.”
Steven A. Schwartz of Levidow, Levidow & Oberman filed an affidavit in which he revealed that he had used ChatGPT to research the motion and cited the cases the AI tool had pointed him to. The bot assured him that the cases were real, and Schwartz has assured the judge that he wasn’t acting in bad faith. The judge held a hearing to determine whether sanctions should be imposed. Schwartz’s side argues that he and the law firm have already faced damage to their reputations and accusations of fraud, which is punishment enough.
The reliability of AI tools
Some say that Schwartz could have avoided all these issues if he had simply verified the cases he cited; new technology cannot be fully trusted without vetting its output. He is hardly alone, though. Just a week ago, an eating disorder hotline shut down an AI chatbot that “may have given harmful advice.” And in another case, ChatGPT falsely accused a Georgia man of fraud and embezzlement; he is suing the creators of ChatGPT for defamation.
There are very few legal cases against chatbot tools so far, but this may be just the tip of the iceberg. The technology is still very new, and chatbots are still being tested for use in many industries. Although there have been some terrific developments, the problem of misinformation, or hallucinations, remains a significant limitation. There’s no substitute for human verification of the accuracy of AI chatbot output.