On May 4, 2023, a journalist named Fredy Riehl prompted ChatGPT to provide a summary of a legal case he was reporting on. According to a complaint later filed with the Superior Court of Gwinnett County, Georgia, ChatGPT’s response claimed that the case alleged Mark Walters had defrauded and embezzled funds from the Second Amendment Foundation (SAF), a nonprofit organization that supports gun rights. ChatGPT also generated content saying Walters had served as SAF’s treasurer and chief financial officer, the lawsuit states.
Riehl provided the tool with the correct link to the Washington State-based case of The Second Amendment Foundation v. Robert Ferguson. According to Riehl’s attorney, Riehl entered the following prompt into ChatGPT: “Can you read this and in a bulleted list summarize the different accusations or complaint against the defendant.”
Here’s what ChatGPT spat out:
“[I]s a legal complaint filed by Alan Gottlieb, the founder and executive vice president of the Second Amendment Foundation (SAF), against Mark Walters, who is accused of defrauding and embezzling funds from the SAF. The complaint alleges that Walters, who served as the organization’s treasurer and chief financial officer, misappropriated funds for personal expenses without authorization or reimbursement, manipulated financial records and bank statements to conceal his activities, and failed to provide accurate and timely financial reports and disclosures to the SAF’s leadership. The plaintiff seeks various forms of relief, including the recovery of misappropriated funds, damages for breach of fiduciary duty and fraud, and removal of Walters from his position as a member of the SAF’s board of directors.”
But the problem is that none of it was true. “Every statement of fact in the summary about Walters is false,” according to the lawsuit against OpenAI. Walters, a talk show host who has never worked for SAF, was not named in the original complaint in Washington. “By sending the allegations to Riehl, OAI published libelous matter regarding Walters,” the lawsuit reads. The complaint accuses OpenAI of negligence in its communication to Riehl, arguing that the company should have known the information about Walters was false or that it “recklessly disregarded the falsity of the communication.”
So-called hallucinations are a big problem for generative AI tools like ChatGPT. The colorful term describes situations in which these tools produce inaccurate or fabricated content, a problem the tech industry is well aware of. In a recent blog post, OpenAI, the maker of ChatGPT, wrote that it is working to reduce hallucinations by training models with feedback not only on the final result but on each individual step of the reasoning process.
Walters’ libel case against OpenAI is likely the first of many lawsuits to be filed against generative AI companies, according to legal experts. “There’s no reason to think that this is just a weird one off,” Eugene Volokh, a professor at the UCLA School of Law, told Quartz.
But the question remains: Will plaintiffs be successful in suing generative AI companies for libel?
Will this case against OpenAI be successful?
US libel law distinguishes between public and private figures, and which category Walters fits into matters. Walters hosts Armed American Radio, a pro-firearms national radio broadcast that airs on over 200 radio stations. If he is deemed a public figure, his attorneys will need to prove “actual malice,” meaning that OpenAI knew the statement was false or recklessly disregarded its falsity, and then prove that harm occurred. “At this point, I don’t know if he suffered any damages,” Walters’ attorney told Quartz. If the court determines that he is a private figure and there are no actual damages, Walters still needs to show that OpenAI knew the statement was false, or knew it was likely false and recklessly disregarded that knowledge, Volokh wrote.
OpenAI is aware that ChatGPT hallucinates in general, but that awareness alone is not enough to establish knowledge of, or reckless disregard for, this particular falsehood, Volokh told Quartz. OpenAI would likely need to have known that this specific false statement was being made about Walters, he said. “I suspect in this case, that Walters is unlikely to prevail,” Volokh said.
Are AI companies protected under Section 230?
Section 230, a provision of the 1996 Communications Decency Act in the US, provides immunity from libel cases to online services for content posted by their users. This has had a clear impact on website comments, which are largely not open to litigation involving the site itself. But does that apply to a company like OpenAI?
Section 230 says: “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.” So, for instance, Google search provides snippets of information from someone else’s webpage, giving the tech company immunity from liability because it is just distributing another provider’s information.
The lawsuit raises the question of whether the content ChatGPT produces is more akin to a search engine result than to original content. OpenAI and other AI companies are likely not protected under Section 230 in cases like the Georgia one, Volokh argues in a forthcoming journal article titled Large Libel Models? Liability for AI Output.
“Recall that the AI programs’ output is composed by the programs—it isn’t merely quotations from existing sites (as with snippets of sites offered by search engines) or from existing user queries (as with some forms of autocomplete that recommend the next word or words by essentially quoting them from user-provided content),” he writes. Still, he acknowledges that generative AI tools, which are trained on vast amounts of material from the internet to generate new content, are “in some measure derivative of material produced by others.”
In this specific lawsuit, even if OpenAI is not immune under Section 230, that alone won’t be enough for Walters to win, given the weaknesses in other parts of the case, Volokh told Quartz. That is not unusual; in the US, libel is hard to prove, even for private figures. Still, this likely won’t be the last time Section 230 comes up in the AI industry.