
4 times AI 'hallucinations' showed up in court cases

Artificial intelligence tools like ChatGPT and Bard can fabricate details — and they're popping up in legal filings

Failing to check AI’s work can have major consequences—especially in legal cases.
Photo: Brendan McDermid (Reuters)

A New York lawyer is facing possible disciplinary action for citing a fake case in court papers generated by AI, the latest hiccup for attorneys and courts in navigating the evolving technology.


Jae Lee used ChatGPT to research a medical malpractice lawsuit, but she acknowledged in a brief to the US Court of Appeals for the Second Circuit that she had not double-checked the chatbot’s results, and that the decision she cited did not exist.

Lee told Reuters that she is “committed to adhering to the highest professional standards and to addressing this matter with the seriousness it deserves.”

Generative AI models that power chatbots are known to “hallucinate,” or produce inaccurate information and made-up details. This filling in of blanks is part of what makes ChatGPT’s responses fluid and creative, but problems arise when the AI fabricates details, especially ones that carry legal consequences.

A federal appeals court in New Orleans has proposed requiring lawyers to certify either that they did not rely on AI tools to draft briefs or that a human reviewed the accuracy of any AI-generated text in their court filings. Lawyers who don’t comply with the rule could have their filings stricken or face sanctions. Some attorneys have pushed back on the proposed rule.

Check out the slideshow above for three other times fake AI-generated citations have surfaced in court cases in recent years — whether intentionally or not.


A radio host

Man making podcast.
Photo: Chalffy (Reuters)

A radio host from Georgia named Mark Walters claimed last year that ChatGPT generated a false legal complaint accusing him of embezzling money. Walters said the chatbot provided the false complaint to Fred Riehl, the editor-in-chief of the gun publication AmmoLand, who was reporting on a real-life legal case playing out in Washington state.


According to Riehl’s attorney, Riehl provided ChatGPT with the correct link to the court case and entered the following prompt into the chatbot: “Can you read this and in a bulleted list summarize the different accusations or complaint against the defendant.”

“By sending the allegations to Riehl, [OpenAI] published libelous matter regarding Walters,” the lawsuit reads.


Michael Cohen, Donald Trump’s one-time attorney


Michael Cohen, former attorney of Donald Trump.
Photo: Carlo Allegri (Reuters)

Michael Cohen, Donald Trump’s former lawyer, said he mistakenly passed along fake AI-produced legal case citations to his attorney that were used in a motion submitted to a federal judge.


The cases were cited in written arguments by Cohen’s attorney David M. Schwartz, who was seeking an early end to Cohen’s court supervision now that Cohen is out of prison. In 2018, Cohen pleaded guilty to tax evasion, campaign finance charges, and lying to Congress.

Cohen admitted that he had “not kept up with emerging trends (and related risks) in legal technology and did not realize that Google Bard was a generative text service that, like Chat-GPT, could show citations and descriptions that looked real but actually were not.”

“Instead, I understood it to be a super-charged search engine and had repeatedly used it in other contexts to (successfully) find accurate information online,” he added.


Rapper Pras Michel of the Fugees


Fugees rapper Pras Michel.
Photo: Kevin Lamarque (Reuters)

In October, Grammy-winning artist Pras Michel, who had been convicted of illegal foreign lobbying, blamed his now-former lawyer for using AI during his trial. Michel said the lawyer performed poorly and relied on AI for his closing argument.


But that lawyer, David Kenner, defended the use of AI in criminal trials. Michel’s defense team had used a generative AI program from EyeLevel.AI to supplement its legal research.

Kenner acknowledged that his use of generative AI for the closing argument caused him to misattribute lyrics from a Puff Daddy song to the Fugees. “I messed up,” he said.
