In recent years, the legal profession has seen a surge in the use of generative AI for legal research and drafting. While these tools offer remarkable efficiency and convenience, they also pose serious risks, most notably the generation of fabricated or “hallucinated” case law. Several high-profile incidents have emerged in which lawyers filed briefs citing non-existent decisions, fabrications often discovered only when a court or opposing counsel could not find the case, sometimes after following a citation’s link straight to a “404 Not Found” error.
This troubling trend has implications for legal ethics, technological competence, and the integrity of the judicial process. This article examines the phenomenon, recent disciplinary actions, and the responsibilities lawyers bear when using AI tools in their practice.
The Rise of “404 Law”: Recent Incidents
1. Utah Appeals Court Sanction – Richard Bednar (2025)
In one of the most recent cases, Utah attorney Richard Bednar was sanctioned after submitting a brief containing multiple fictitious citations, including “Royer v Nelson.” The brief had been prepared by an unlicensed law clerk using ChatGPT. Upon discovering the false citations, the Utah Court of Appeals ordered Bednar to pay the opposing party’s legal fees, reimburse his client, and donate to a legal aid charity. The episode underscored that lawyers must verify the accuracy of every claim they file, regardless of who prepares the document.
2. British Columbia Family Law Hearing (2024)
In Canada, a family lawyer in British Columbia was investigated by the provincial law society after citing AI-generated case law in a custody hearing. The citations, presented as legitimate legal precedent, were fabricated. The lawyer acknowledged using ChatGPT for research without verifying the results, prompting the Law Society to issue guidance reinforcing due diligence obligations when using AI tools.
3. Federal Court Filing in New York (2023)
Perhaps the most widely reported example is Mata v. Avianca, in which a lawyer in a federal personal injury suit cited six fictitious cases, including “Varghese v. China Southern Airlines.” When questioned by the judge, the lawyer admitted that ChatGPT had supplied the citations. The judge called the circumstance “unprecedented,” and sanctions followed.
Why AI “Hallucinates” Case Law
Generative AI models like ChatGPT are trained on vast datasets, including publicly available legal documents, but they do not query verified court databases such as Westlaw or LexisNexis in real time. They generate text by predicting plausible sequences of words from statistical patterns in their training data, which can produce realistic-sounding but entirely invented citations, quotations, and rulings. Because the output is fluent and correctly formatted, these hallucinations are hard to detect unless the reader independently verifies each source.
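Part of what makes these fabrications dangerous is that they are usually well-formed. The short Python sketch below, using a deliberately simplified citation pattern rather than a full Bluebook grammar, shows that the fabricated “Varghese” citation passes the same purely formal check as a real one; only a lookup in an authoritative database can tell them apart.

```python
import re

# Loose pattern for "Party v. Party, <vol> <reporter> <page> (<court> <year>)".
# Deliberately simplified -- real citation grammars are far richer than this.
CITE_RE = re.compile(
    r"[A-Z][\w.'-]*(?:\s[\w.'-]+)*\s+v\.?\s+[A-Z][\w.'-]*(?:\s[\w.'-]+)*,\s+"
    r"\d+\s+(?:U\.S\.|S\.\s?Ct\.|F\.(?:2d|3d|4th)?|F\.\s?Supp\.(?:\s2d|\s3d)?)"
    r"\s+\d+\s+\([^)]*\d{4}\)"
)

real = "Bell Atlantic Corp. v. Twombly, 550 U.S. 544 (2007)"
fake = "Varghese v. China Southern Airlines Co., 925 F.3d 1339 (11th Cir. 2019)"

# Both citations pass the formal check; nothing about the fake one's
# surface form gives it away.
for cite in (real, fake):
    print(f"format-valid: {bool(CITE_RE.search(cite))}  |  {cite}")
```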
Ethical and Professional Implications
The American Bar Association (ABA), most directly in Formal Opinion 512 (2024) on generative AI tools, and other legal bodies around the world have warned that while AI can support legal work, it does not relieve lawyers of their core duties:
- Duty of Competence: Lawyers must understand how AI tools work and their limitations.
- Duty of Candor to the Court: Submitting fictitious authorities—even inadvertently—can amount to a breach of professional ethics.
- Supervision of Non-Lawyers: Lawyers remain responsible for work produced by assistants, law clerks, and any technology used under their direction.
As of 2025, many jurisdictions treat a lawyer’s use of AI as a factor in disciplinary proceedings, especially when it results in false or misleading filings.
Best Practices for Lawyers Using AI
To avoid the dangers of “404 briefs” and hallucinated case law, legal professionals should:
- Verify Every Citation: Always cross-check case names, citations, and quotes against trusted legal databases (a minimal automated pre-check is sketched after this list).
- Use AI for Ideas, Not Authority: Let AI assist with drafting structure or issue spotting—but not with authoritative research.
- Disclose When Appropriate: Some lawyers now voluntarily disclose when a filing includes AI-assisted content to maintain transparency.
- Stay Updated on Ethics Guidance: Bar associations globally are developing evolving standards on AI use in legal practice.
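As a concrete illustration of the first practice above, the sketch below sends a draft’s text to a citation-lookup service before filing. It assumes the Free Law Project’s CourtListener citation-lookup endpoint and a particular response shape; the URL, authentication requirements, and field names are assumptions to confirm against the current API documentation.

```python
import requests

# Pre-filing citation check -- a minimal sketch, not a compliance tool.
# ASSUMPTION: CourtListener exposes a citation-lookup endpoint roughly like
# this; confirm the URL, auth requirements, and response fields against the
# current API docs before relying on it.
LOOKUP_URL = "https://www.courtlistener.com/api/rest/v3/citation-lookup/"

def check_citations(brief_text: str) -> list[dict]:
    """Send the brief's text to the lookup service and flag unmatched cites."""
    resp = requests.post(LOOKUP_URL, data={"text": brief_text}, timeout=30)
    resp.raise_for_status()
    report = []
    for hit in resp.json():  # assumed: one entry per citation found in the text
        report.append({
            "citation": hit.get("citation"),
            "verified": bool(hit.get("clusters")),  # assumed: matched cases, if any
        })
    return report

if __name__ == "__main__":
    draft = ("Plaintiff relies on Varghese v. China Southern Airlines Co., "
             "925 F.3d 1339 (11th Cir. 2019).")
    for item in check_citations(draft):
        flag = "verified" if item["verified"] else "NOT FOUND -- verify by hand"
        print(f"{item['citation']}: {flag}")
```

Even a successful match only confirms that the citation exists, not that the case says what the brief claims it says; that judgment still belongs to the lawyer.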
Looking Ahead: Regulation and Reform
The legal industry is now grappling with how to regulate AI usage. Proposals include:
- Mandatory AI-use disclosures in court filings
- Integration of AI verification tools into legal research platforms
- Updated continuing legal education (CLE) requirements covering tech competence and AI literacy
Courts may also begin requiring certifications that all cited authorities have been reviewed by a human attorney; indeed, several federal judges have already imposed such certification requirements by standing order, and broader adoption is most likely in jurisdictions where AI misuse has already led to sanctions.
Conclusion: A Challenge to the Credibility of the Legal System
The intersection of AI and law holds great promise but also significant peril. The appearance of “404 errors” in legal briefs and the spread of fake case law are not just technical glitches; they are a challenge to the credibility and reliability of the legal system.
As AI tools become more powerful, lawyers must remain vigilant stewards of the truth, ensuring that every argument they make is grounded not in generated guesswork, but in verifiable law.