
Missouri Eastern District sanctions pro se Appellant for improper use of Artificial Intelligence ("A.I.")-generated briefing

In Kruse v. Karlen, Missouri Court of Appeals, Eastern District No. ED 111172 (February 13, 2024), the Eastern District dismissed an appeal brought by the Appellant, Karlen, due to numerous fatal pleading deficiencies, including the citation of fictitious cases generated by artificial intelligence. In fact, the "overwhelming majority of the citations are not only inaccurate but entirely fictitious. Only two out of the twenty-four case citations in Appellant's Brief are genuine… and do not stand for what Appellant purports." Kruse v. Karlen, pages 5-6. These fictitious or misstated cases are called "A.I. hallucinations" and occur when a generative A.I. fabricates realistic-seeming case citations or text, or asserts that a real case contains a holding it does not actually have.


While the Appellant, filing pro se, "stated he did not know that the [online 'consultant' from California Karlen utilized] would use 'artificial intelligence hallucinations' and denied any intention to mislead the Court or waste Respondent's time researching fictitious precedent[,]" the harm to the court had already been done. Id., page 8. The Court then analyzed the Appellant's ethical obligations and his violations of the duty of candor to the Court. The Court admonished the Appellant, stating that "we urge all parties practicing before this Court, barred and self-represented alike, to be cognizant that we are aware of the issue [of A.I. hallucinations] and will not permit fraud on this Court in violation of our rules." Id., page 10. The Court determined that Karlen's appeal was an abuse of the judicial system and a frivolous appeal warranting monetary sanctions, even after considering the hardships faced by pro se litigants. Respondent was awarded $10,000 toward its legal fees.


This case is just another in an unfortunate, nationwide trend of individuals (including attorneys) using artificial intelligence tools known as "Large Language Models," or "LLMs," to generate court filings. While LLMs can be effective at responding to some creative writing prompts, extreme caution should be exercised in relying on any LLM-generated text, particularly text that would be filed with a court, as the incidence of A.I. hallucinations is high. Any use of LLMs should be cross-referenced with additional legal research and review to confirm that all citations are genuine and all arguments presented are not frivolous.
