AI Hallucinations in Court: Every Lawyer Needs to Read This Before Their Next Filing


⚠️ Last Updated: March 2026 | This post is updated regularly as new cases emerge.


There is a number every lawyer using AI should know: 979.

That is how many court decisions worldwide have been documented — as of early 2026 — involving AI-generated hallucinations in legal proceedings, according to researcher Damien Charlotin’s database, which has become the legal profession’s unofficial scoreboard of shame. New entries arrive almost daily. The Fifth Circuit Court of Appeals has already declared there is “no end in sight.”

The number started at zero in 2022. By April 2023, it was small enough to track by hand. By the end of 2025, roughly 90% of all documented cases had occurred in that year alone.

This is not a niche technology problem. It is a profession-wide crisis — and if you are using AI tools in legal practice without a verification protocol, you are one careless filing away from joining this list.

This post does three things:

  1. Explains what AI hallucinations are and why they keep happening despite years of warnings
  2. Profiles the landmark cases every legal professional needs to know, with key lessons from each
  3. Gives you a practical, court-tested verification checklist you can implement in your firm today

Bookmark it. Share it with your colleagues. And check back — because this story is not over.


Part I: What Is an AI Hallucination — and Why Does It Keep Happening?

The word “hallucination” has been contested by some judges. An Australian federal court put it well in JML Rose Pty Ltd v Jorgensen (No 3) [2025] FCA 976, noting that the term “seeks to legitimise the use of AI” and that erroneously generated references are more accurately described as “fabricated, fictional, false, fake.” That reframing matters, because it captures what’s actually at stake.

AI language models — including ChatGPT, Google Gemini, and even legal-specific tools — generate text by predicting the most statistically likely next word based on training data. They do not access legal databases. They do not retrieve cases from Westlaw or LexisNexis. They do not “know” whether a case exists. They predict what a case citation should look like, and generate something plausible.

OpenAI’s own research acknowledges the root cause: language models hallucinate because training and evaluation procedures reward generating answers over acknowledging uncertainty. The model is incentivised to produce something — even when the honest answer is “I don’t know.”

The result is citations that look exactly right: correct formatting, realistic case names and docket numbers, and, sometimes, attributions to real courts, real judges, and real publications. They simply do not exist.
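To make the mechanism concrete, here is a deliberately toy sketch of next-token prediction in Python. Every name and count in it is invented for illustration, and real models use neural networks over billions of parameters rather than bigram counts, but the point survives the simplification: nothing in the generation loop ever consults a database.

```python
# Toy bigram "language model": emit the statistically likeliest next token.
# All names and counts are invented. Note what is missing: there is no lookup
# step anywhere that could confirm the resulting citation actually exists.
bigram_counts = {
    "v.":       {"Avianca,": 3, "Delta": 2, "United": 1},
    "Avianca,": {"Inc.,": 5},
    "Inc.,":    {"925": 2, "680": 1},   # plausible-looking reporter volumes
    "925":      {"F.3d": 4},
    "F.3d":     {"1339": 1},
}

def next_token(prev):
    """Return the most likely continuation of `prev`: prediction, not retrieval."""
    candidates = bigram_counts.get(prev, {})
    return max(candidates, key=candidates.get) if candidates else None

tokens = ["v."]
while (tok := next_token(tokens[-1])) is not None:
    tokens.append(tok)

# Well-formed, confident, and never checked against any reporter:
print(" ".join(tokens))   # v. Avianca, Inc., 925 F.3d 1339
```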

There is a second, subtler failure mode that courts are increasingly encountering: the AI cites a real case but misrepresents its holding — sometimes stating the exact opposite of what the court decided. In Noland v. Land of the Free, L.P. (Cal. Ct. App. 2025), a California appellate court found that nearly all the quotations in a lawyer’s brief were fabricated, even though most of the cases he cited do exist. The AI had populated the citations with invented judicial language.

This distinction matters enormously. It means that looking up the case name in Westlaw is necessary but not sufficient. You must also read the decision.


Part II: The Warning Signs Every Lawyer Should Know

Before walking through the case history, here are the red flags that should immediately trigger manual verification:

  • A citation you don’t recognise at all — trust your instincts
  • Unusually convenient holdings — if the AI-generated case perfectly supports your argument with no nuance, be suspicious
  • Internal inconsistencies — references within the “opinion” that don’t match the citation, year, or jurisdiction
  • Links returning 404 errors — a dead link to a cited case is a major warning sign (a quick automated check is sketched at the end of this section)
  • A judge’s name attached to an opinion you’ve never heard of — check it
  • Quotes that sound too perfectly articulate — real judicial language has texture and context; AI-generated quotes often read as polished but placeless

If any of these appear, stop. Verify before filing. The next section shows what happens when lawyers don’t.
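The dead-link red flag is also the easiest to automate. Below is a minimal sketch using Python's third-party requests library; the URL shown is a placeholder, not a real citation link. Keep its limits in mind: a live page proves only that the link resolves, not that the opinion says what the brief claims.

```python
# Flag dead hyperlinks among a filing's cited URLs. Requires the `requests`
# package (pip install requests). A 404 is a red flag, not proof of
# fabrication, and a 200 response is NOT proof the case supports the argument.
import requests

cited_urls = [
    # Placeholder: substitute the hyperlinks from the actual filing.
    "https://www.courtlistener.com/opinion/0000000/example-v-example/",
]

for url in cited_urls:
    try:
        status = requests.head(url, allow_redirects=True, timeout=10).status_code
    except requests.RequestException as exc:
        print(f"UNREACHABLE  {url}  ({exc})")
        continue
    flag = "OK" if status < 400 else "CHECK MANUALLY"
    print(f"{flag}  [{status}]  {url}")
```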


Part III: The Landmark Cases — A Chronological Record

🔴 THE CASE THAT STARTED IT ALL


Mata v. Avianca, Inc. — S.D.N.Y. | June 22, 2023 | Fine: $5,000 | Status: Landmark

This is the case that put the entire legal profession on notice. It began as a routine personal injury matter — a passenger struck by a serving cart on an Avianca flight — and became one of the most cited cautionary tales in modern legal practice.

Attorney Steven Schwartz, facing a motion to dismiss, turned to ChatGPT for case law research. ChatGPT obliged, generating an opposition brief populated with citations to cases involving fictitious airlines, with fabricated quotations and invented internal citations. Six cases were entirely made up. Schwartz testified at the sanctions hearing that he was “operating under the false perception that [ChatGPT] could not possibly be fabricating cases on its own.”

What made this case worse — and what drove the sanctions — was what happened next. When opposing counsel flagged that the cases couldn’t be found, neither Schwartz nor supervising attorney Peter LoDuca immediately disclosed the AI’s involvement. They continued to advocate for the fake cases. LoDuca admitted he received the opposing brief flagging the non-existent citations and didn’t read it before forwarding it to Schwartz.

Judge P. Kevin Castel of the Southern District of New York fined the attorneys and their firm $5,000 jointly and required them to write letters to each judge whose name had been falsely attributed to one of the fabricated opinions. His ruling established the framework that governs virtually every AI sanctions case since: “Existing rules impose a gatekeeping role on attorneys to ensure the accuracy of their filings.”

The lesson: The sanction was not for using ChatGPT. It was for failing to verify, failing to disclose, and failing to withdraw. The cover-up made it worse.


🔴 THE FIRST STATE APPELLATE SANCTION


Noland v. Nazar et al. (Noland v. Land of the Free, L.P.) — California Court of Appeal, 2nd District | 2025 | Fine: $10,000 | Status: Precedent-setting

This case became the first California state appellate court opinion to sanction a lawyer specifically for AI hallucinations — and the court published it explicitly “as a warning.”

Attorney Farhan Mostafavi submitted two appellate briefs containing fake citations. The court’s investigation found that nearly all the quotations in his briefing were fabricated, even though most of the cases he cited actually existed. The AI had taken real cases and invented the judicial language. The court imposed a $10,000 sanction payable to the court, referred Mostafavi to the State Bar, required him to show the opinion to his client, and ordered him to certify compliance.

Presiding Justice Lee Smalley Edmon offered what may be the most memorable line in the entire corpus of AI hallucination jurisprudence: “We conclude by noting that ‘hallucination’ is a particularly apt word to describe the darker consequences of AI.”

The case added an important wrinkle: the court also declined to award attorneys’ fees to opposing counsel — because they had noticed the fake citations and failed to alert the court. That created an implicit duty: lawyers who spot hallucinations in an opponent’s brief and stay silent may not be entitled to remedies when sanctions follow.

The lesson: Hallucinations aren’t just in non-existent cases. Real cases with invented quotes are just as dangerous — and harder to catch.


🔴 WHEN MONETARY SANCTIONS WEREN’T ENOUGH


Johnson v. Dunn — N.D. Alabama | July 23, 2025 | Sanction: Disqualification + Bar notification

This case represents a significant escalation in judicial response. A large, well-regarded law firm — one that had actually circulated internal warnings about AI risks and prohibited its use without practice group leader approval — found itself sanctioned when a practice group co-leader inserted a hallucinated ChatGPT citation into a motion.

The Northern District of Alabama court declared that monetary sanctions were proving ineffective at deterring false, AI-generated statements of law. Something more was needed. The court disqualified the attorneys from representing the client for the remainder of the case, published the opinion in the Federal Supplement for maximum visibility, and directed the clerk to notify bar regulators in every state where the responsible attorneys were licensed.

The message was unmistakable: judges are losing patience. Written warnings and fines have not stopped the tide. Disqualification and bar referrals are the next tool.

The lesson: Having an AI policy is not the same as enforcing it. Even senior lawyers with good intentions can become the cautionary example.


🔴 THE CASE THAT COST A FIRM $31,100


Ellis George LLP / K&L Gates LLP — C.D. Cal. | 2025 | Fine: $31,100 in opposing counsel fees + brief struck

In what became one of the most expensive single AI hallucination sanctions, attorneys from two law firms — including international powerhouse K&L Gates — submitted a brief to Special Master Michael Wilner containing numerous hallucinated citations. The attorneys had used multiple AI tools including CoCounsel, Westlaw Precision, and Google Gemini.

Wilner described the situation as “scary,” noting that he had been “persuaded (or at least intrigued) by the authorities that they cited, and looked up the decisions to learn more about them — only to find that they didn’t exist. That’s scary. It almost led to the scarier outcome (from my perspective) of including those bogus materials in a judicial order.”

The sanctions were severe: the brief was struck entirely, the discovery relief sought was denied, the firms jointly paid $31,100 in opposing counsel’s legal fees, and the matter was disclosed to the client.

The lesson: Legal-specific AI tools are not immune to hallucinations. Westlaw Precision and CoCounsel still require verification. No tool gets you off the hook.


🔴 MIKE LINDELL’S LAWYERS: THE $3,000 FINE AND THE COVER-UP THAT MADE IT WORSE


Kachouroff & DeMaster (Coomer v. Lindell) — D. Colorado | July 7, 2025 | Fine: $3,000 each

Attorneys Christopher Kachouroff and Jennifer DeMaster, representing MyPillow CEO Mike Lindell in a Colorado defamation case, filed a motion containing more than two dozen mistakes — including hallucinated case citations. Judge Nina Y. Wang of the U.S. District Court in Denver found that the document violated Rule 11’s requirement that legal claims be “well grounded” in the law.

What elevated the sanction was the attorneys’ lack of candour. When questioned about the AI’s role, the lawyers were not forthcoming. Kachouroff eventually claimed DeMaster had filed a draft version by mistake — an explanation the judge did not find persuasive. Judge Wang noted that the $3,000 fine for each attorney was “the least severe sanction adequate to deter and punish defense counsel in this instance.”

The lesson: Honesty after the fact matters. Courts consistently treat disclosure and cooperation as mitigating factors. Evasion transforms a mistake into misconduct.


🔴 MORGAN & MORGAN: THE LARGEST U.S. PERSONAL INJURY FIRM GETS SANCTIONED


Morgan & Morgan (products liability matter) — Federal Court | 2025 | Sanction: Rule 11 violation + opposing counsel fees

Even the largest personal injury law firm in the United States was not immune. Morgan & Morgan’s lawyers used the firm’s own in-house AI platform to locate case citations for motions in limine in a products liability matter. The AI generated citations that did not exist. The lawyers — including two who co-signed the brief without being involved in drafting it — did not verify the citations.

The court found Rule 11 violations against all three lawyers. It noted pointedly that co-signing a document is certification of its accuracy: “signing a legal document indicates that the lawyer read the document and conducted a reasonable inquiry into the existing law.” The drafting lawyer was fined $3,000 and had his temporary pro hac vice admission revoked. The other two lawyers were fined $1,000 each.

Notably, the court credited the attorneys for their transparency and for proactively paying opposing counsel’s fees before sanctions were imposed — reducing what might have been harsher consequences.

The lesson: Co-signatories are personally liable. Signing a brief you didn’t draft or verify is not a defence.


🔴 THE INTERNATIONAL DIMENSION: UK AND CANADA


R (Ayinde) v London Borough of Haringey [2025] EWHC 1040 (Admin) — England

In a judicial review case involving a homeless claimant, the court encountered AI-hallucinated authorities submitted by a regulated solicitor’s firm. The UK High Court’s response was pointed: solicitors cannot delegate professional judgment to an algorithm. The court distinguished sharply between what it would tolerate from litigants in person versus what it expected from regulated professionals — making clear that the latter faced a higher standard of accountability.

Ko v. Li, 2025 ONSC 2766 — Canada (Ontario Superior Court)

In this matrimonial case, lawyer Jisuh Lee’s legal factum contained citations to two cases that simply did not exist. One hyperlink returned a “404 Error – Page not found.” Another case, when read, reached the opposite conclusion from what Lee had argued. Justice Fred Myers ordered Lee to show cause why she should not be cited for contempt of court — one of the most severe judicial responses to AI hallucination anywhere.

Handa & Mallick [2024] FedCFamC2F 957 — Australia

Australian solicitor Dayal submitted AI-generated hallucinated authorities in a family law matter and was disciplined by the Victorian Legal Services Board — prohibited from handling trust money or practising unsupervised for two years. The tribunal was unequivocal: ignorance of a tool’s limitations is no defence. Technological literacy is part of the duty of competence.


🔴 THE REPEAT OFFENDER: GORDON REES


Gordon Rees Scully Mansukhani — Multiple Courts | 2025

Perhaps the most alarming pattern in the hallucination record is a firm appearing multiple times. Gordon Rees became a cautionary example after one of its attorneys submitted fabricated citations in a bankruptcy case, spending over $50,000 to make the situation right and implementing a new “cite-checking policy.” Then it happened again. A December 2025 order from U.S. Magistrate Judge Carolyn Delaney in Villalovos-Gutierrez v. Pol noted that “in more than one other instance, defendant’s case citations do not support the specific explanatory phrase presented alongside the citation.” The court ordered Gordon Rees not to file any documents containing AI-hallucinated citations.

By early 2026, opposing counsel in a separate case filed a reply accusing the firm again, stating: “This is not aggressive advocacy or sloppy research. It is the submission of false authority to the Court in violation of counsel’s most basic obligations.”

The lesson: A firm AI policy means nothing if it isn’t enforced across all matters and all attorneys.


Part IV: The Escalating Scale of Consequences

The range of sanctions courts have imposed, from mildest to most severe, shows a clear escalation:

Consequence | Examples
Admonishment / warning only | Early cases; bereaved attorney (E.D.N.Y.)
$500–$3,000 fine | Lindell lawyers; Louisiana case; Morgan & Morgan associates
$5,000–$10,000 fine | Mata v. Avianca; Noland v. Land of the Free
$31,100+ in opposing counsel fees | K&L Gates / Ellis George (C.D. Cal.)
Brief struck + discovery relief denied | Multiple federal courts, 2025
Disqualification from the case | Johnson v. Dunn (N.D. Ala.)
Bar referral / disciplinary inquiry | Johnson v. Dunn; Noland; multiple UK and Canada cases
Pro hac vice revocation | Arizona Social Security case; Morgan & Morgan
Practice restrictions | Handa & Mallick (Australia, two years supervised)
Contempt proceedings | Ko v. Li (Canada)
Dismissal of the underlying case | Multiple cases involving pro se litigants

The trajectory is unmistakable: courts started with fines, moved to disqualifications, and are now referring lawyers to bar associations. The next phase may well be malpractice liability — cases where clients whose matters were damaged by AI hallucinations sue the attorneys responsible.


Part V: What the Courts Are Now Requiring

Courts across the United States and internationally are moving from ad hoc sanctions to systematic rules. Key developments as of early 2026:

Mandatory AI Disclosure: At least 11 U.S. states — including Arizona, California, Connecticut, Illinois, New York, and Virginia — have established court policies or rules addressing AI use. Many federal judges have issued standing orders requiring attorneys to certify that any AI-generated content has been verified by a human.

The Texas Model: U.S. District Judge Brantley Starr (N.D. Texas) pioneered what became a template: lawyers must certify either that no generative AI was used, or that any AI-generated content was “thoroughly verified by a human.” Filings without the certification are struck.

Chief Justice Roberts Weighs In: The U.S. Supreme Court’s annual report by Chief Justice John G. Roberts warned that AI “hallucination” can lead to citations to nonexistent cases — the highest-level judicial acknowledgment of the crisis yet.

The Hyperlink Requirement: Predictions for 2026 include courts adopting mandatory hyperlink rules requiring every cited judicial opinion to link to a verifiable source in an official legal database. New York’s Commercial Division has operated this way since 2020. More courts are expected to follow.


Part VI: The 10-Step Verification Checklist

Every lawyer using AI for legal research or drafting needs a written verification protocol. Here is one grounded in the case law above. Treat it as the minimum standard.


✅ THE AI CITATION VERIFICATION CHECKLIST

Step 1 — Retrieve every citation independently. Go to Westlaw, LexisNexis, Google Scholar, or PACER for federal court records. Search the citation. Confirm it exists before anything else.
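Step 1 presupposes a complete list of the citations in the draft. A minimal sketch for building that worklist (the regex is deliberately rough and over-matches, which is the safe direction here; the input file name is hypothetical):

```python
# Pull candidate U.S. reporter citations out of a draft brief so each one can
# be looked up by hand. Intentionally over-broad: a false positive costs a few
# seconds, while a missed citation can cost a sanctions hearing.
import re

REPORTER_CITE = re.compile(
    r"\b\d{1,4}\s+"
    r"(?:U\.S\.|S\. ?Ct\.|F\.(?:2d|3d|4th)?|F\. ?Supp\. ?(?:2d|3d)?)"
    r"\s+\d{1,4}\b"
)

draft = open("draft_brief.txt", encoding="utf-8").read()  # hypothetical file
for cite in sorted(set(REPORTER_CITE.findall(draft))):
    print("VERIFY:", cite)
```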

Step 2 — Verify the case name matches the citation. This catches a common variant where the AI attaches a real citation to a fictitious or misattributed case name.

Step 3 — Read the actual holding. Don’t just confirm the case exists. Read the relevant portion. Confirm it says what the AI says it says. In Noland, the cases existed — the quotes were invented.

Step 4 — Shepardize or KeyCite every case. Confirm the case is still good law and hasn’t been overruled, distinguished, or limited in ways that affect your argument.

Step 5 — Check jurisdiction. AI frequently blends jurisdictions, citing federal cases as binding state authority or importing law from the wrong circuit. Confirm the case is authoritative in your jurisdiction.

Step 6 — Cross-reference every statute and regulation. Verify cited statutes and regulations against the official government text. Check that section numbers, dates, and versions are current.

Step 7 — Audit every quote. If the AI has generated a quote attributed to a judge or statute, verify the exact language in the original source. AI-invented judicial language is one of the most dangerous hallucination variants.
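A first pass at Step 7 can be scripted: check that the quoted words occur verbatim in the opinion text you retrieved yourself. A minimal sketch (the opinion file name is hypothetical); whitespace is normalised because PDFs often break lines mid-sentence, and even a match does not replace reading the passage in context:

```python
# Verify that a quote appears verbatim in a locally saved copy of the opinion,
# e.g. text exported from an official source. A failed match means: stop and
# check by hand. A successful match still does not confirm the quote's context.
import re

def _normalise(s: str) -> str:
    return re.sub(r"\s+", " ", s).strip().lower()

def quote_appears(quote: str, opinion_text: str) -> bool:
    return _normalise(quote) in _normalise(opinion_text)

opinion = open("retrieved_opinion.txt", encoding="utf-8").read()  # hypothetical
print(quote_appears("a gatekeeping role on attorneys", opinion))
```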

Step 8 — Run a dedicated citation-checking tool. Tools like LawDroid’s CiteCheck AI, Casetext, and built-in verification features in Clio Work and Westlaw AI can scan your entire document for invalid citations. Use them as a final safeguard — not a substitute for steps 1–7.

Step 9 — Have a second set of eyes review AI-assisted filings. Even if you’ve verified every citation yourself, have a colleague review any document substantially drafted with AI assistance before it’s filed. This mirrors the supervision standard courts apply to associate work product.

Step 10 — Document your verification process. Keep a log of what you verified, when, and using what database. If a citation is challenged, a paper trail demonstrating reasonable inquiry is your strongest defence. Some courts have credited documented verification as a mitigating factor in determining sanctions.
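Step 10 needs no special software. A minimal sketch of an append-only CSV log (the file name and columns are illustrative, not any court-mandated format):

```python
# Append one verification record per citation checked. The resulting CSV is
# the paper trail demonstrating reasonable inquiry if a citation is challenged.
import csv
import datetime
from pathlib import Path

LOG = Path("citation_verification_log.csv")   # illustrative file name
FIELDS = ["citation", "database", "exists", "holding_confirmed",
          "checked_by", "checked_at"]

def log_verification(citation, database, exists, holding_confirmed, checked_by):
    write_header = not LOG.exists()
    with LOG.open("a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS)
        if write_header:
            writer.writeheader()
        writer.writerow({
            "citation": citation,
            "database": database,
            "exists": exists,
            "holding_confirmed": holding_confirmed,
            "checked_by": checked_by,
            "checked_at": datetime.datetime.now().isoformat(timespec="seconds"),
        })

# Illustrative entry:
log_verification("Mata v. Avianca (S.D.N.Y. 2023)", "Westlaw",
                 exists=True, holding_confirmed=True, checked_by="A. Associate")
```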


🚨 NEVER DO THESE

  • Do not ask the AI to verify its own citations. It will confidently confirm that they exist, and it will be wrong.
  • Do not assume that legal-specific AI tools like CoCounsel or Westlaw AI are hallucination-free. Stanford research found error rates of 17% for Lexis+ AI and 34% for Westlaw AI-Assisted Research. Every output still requires human verification.
  • Do not co-sign a brief you haven’t reviewed. Your signature is your certification.
  • Do not rely on opposing counsel to catch your errors. Courts have begun scrutinising whether opposing counsel who spotted hallucinations and stayed silent deserves remedies.
  • Do not attempt to conceal AI involvement once a problem is flagged. Transparency is consistently treated as a mitigating factor. Evasion is treated as an aggravating one.

Part VII: The Professional Responsibility Framework

The legal duty that underlies every sanction in this list is Model Rule 1.1: Competence. Most U.S. states have adopted Comment 8 to this rule, which requires lawyers to keep abreast of changes in the law “including the benefits and risks associated with relevant technology.”

Using AI without understanding its limitations is a competence failure. Signing a filing containing hallucinated citations — whether you drafted it or not — is a Rule 11 violation. Failing to disclose known errors is a candour-to-the-tribunal problem under Rule 3.3.

The ABA formalised its position in Formal Opinion 512 (July 2024), establishing that lawyers using generative AI must have a “reasonable understanding” of its capabilities and limitations, must supervise its use, and must comply with professional obligations regardless of whether AI was involved.

The Bar Council of England and Wales has stated plainly that blind reliance on AI risks gross negligence. Australia’s Victorian Legal Services Board has already imposed multi-year practice restrictions for hallucination failures. The trajectory globally is toward treating AI negligence as a disciplinary matter, not merely a procedural one.


Conclusion: The Machine May Hallucinate. You May Not.

That phrase, borrowed from the Lexology analysis of the UK High Court decision, is the most concise statement of where the law stands.

AI tools are not going away. The lawyers getting the best results are using them. The lawyers getting sanctioned are using them carelessly — and the consequences have escalated from embarrassment to fines, from fines to disqualification, from disqualification to bar referrals.

Close to a thousand cases. Growing daily. Courts that have run out of patience. A Chief Justice who has named the problem in his annual report. Bar associations beginning to treat it as a disciplinary issue. Clients beginning to ask whether they have malpractice claims.

The verification checklist in Part VI takes approximately 15–30 minutes per filing. That is the investment. The alternative is measured in sanctions, reputational damage, bar proceedings, and in the worst cases, client harm.

Verify every citation. Every time. Without exception.


📌 Bookmark this page — it is updated as significant new cases are decided.

If you know of a case that should be included in this tracker, or if a case detail has changed, please let us know in the comments.

Subscribe to [Your Blog Name] for weekly analysis on AI in legal practice — delivered to lawyers who take their professional obligations seriously.


Further Reading & Resources

  • Damien Charlotin’s AI Hallucination Cases Database: damiencharlotin.com/hallucinations
  • ABA Formal Opinion 512 on Generative AI (July 2024)
  • National Center for State Courts: A Legal Practitioner’s Guide to AI & Hallucinations
  • LawDroid CiteCheck AI (free citation verification tool)
  • Generative AI Federal and State Court Rules Tracker (LexisNexis Practical Guidance)

Disclaimer: This article is for educational and informational purposes only. It does not constitute legal advice. Case summaries are based on publicly reported decisions and should not be relied upon as primary legal authority. Always consult primary sources and your bar association’s guidance on AI use.
