A New York lawyer is facing possible disciplinary action for citing a fake case in court papers generated by AI, the latest hiccup for attorneys and courts in navigating the evolving technology.
Jae Lee used ChatGPT to perform research for a medical malpractice lawsuit, but she acknowledged in a brief to the US Court of Appeals for the Second Circuit that she did not double-check the chatbot’s results, which included a citation to a court decision that does not exist.
Lee told Reuters that she is “committed to adhering to the highest professional standards and to addressing this matter with the seriousness it deserves.”
Generative AI models that power chatbots are known to “hallucinate,” producing inaccurate information or made-up details. The same probabilistic text generation that makes ChatGPT’s responses fluent and creative can also invent facts outright, a serious problem when those fabrications carry legal consequences.
A federal appeals court in New Orleans has proposed requiring lawyers to certify either that they did not rely on AI tools to draft briefs or that a human reviewed the accuracy of any AI-generated text in their court filings. Lawyers who don’t comply with the rule could have their filings stricken or face sanctions. Some attorneys have pushed back on the proposed rule.