
Penalties stack up as AI spreads through the legal system


## Summary

Courts are increasingly sanctioning lawyers for filing briefs containing fictitious, AI-generated citations. The most prominent case involved lawyers for MyPillow CEO Mike Lindell, who were fined $3,000 each, but the cautionary tale appears to have had little effect: Damien Charlotin, a researcher at the business school HEC Paris who keeps a worldwide tally of courts sanctioning people for using erroneous AI-generated information, recently recorded 10 cases from 10 different courts on a single day. "We have this issue because AI is just too good — but not perfect," he says. When lawyers get in trouble for using AI, it's because they've violated the long-standing rule that holds them responsible for the accuracy of their filings, regardless of how they were generated.


## Article Content
National
Penalties stack up as AI spreads through the legal system
April 3, 2026
5:00 AM ET
Martin Kaste
Carla Wale, the director of the Gallagher Law Library at the University of Washington School of Law, is developing optional AI ethics training for law school students.
When it comes to using AI, it seems some lawyers just can't help themselves.
Last year saw a rapid increase in court sanctions against attorneys for filing briefs containing errors generated by artificial intelligence tools. The most prominent case was that of the lawyers for MyPillow CEO Mike Lindell, who were fined $3,000 each for filing briefs containing fictitious, AI-generated citations.
But as a cautionary tale, it doesn't seem to have had much effect.
"Recently we had 10 cases from 10 different courts on a single day," says Damien Charlotin, a researcher at the business school HEC Paris who keeps a worldwide tally of instances of courts sanctioning people for using erroneous information generated by AI.
"We have this issue because AI is just too good — but not perfect," he says.
The numbers started taking off last year, and Charlotin says the rate is still increasing. He counts a total of more than 1,200 to date, of which about 800 are from U.S. courts.
Penalties are also on the rise, he says. A federal court may have set a new record last month with an order for a lawyer in Oregon to pay $109,700 in sanctions and costs for filing AI-generated errors.
The professional embarrassments even take place at the level of state supreme courts.
Nebraska's high court grilled Omaha-based attorney Greg Lake in February about a brief containing citations of fictitious cases.
He told the justices he'd mistakenly uploaded a working draft from a computer that subsequently malfunctioned, and he denied he'd used AI. They weren't convinced and referred him to be disciplined. In March, a similarly awkward scene played out in the Georgia Supreme Court.
"I am surprised that people are still doing this when it's been in the news," says Carla Wale, associate dean of information & technology and director of the law library at the University of Washington School of Law. She's designing special training in AI ethics for students who are interested. But she also says the ethical rules aren't completely settled.
"I don't think there is a consensus beyond, 'You have to make sure it's correct.' And so for us, that is the baseline," she says.
When lawyers get in trouble for using AI, it's because they've violated the long-standing rule that holds them responsible for the accuracy of their filings, regardless of how they were generated.
"Whatever the generative AI tool gives you — as in, 'Look at these cases' — you, under the rules of professional conduct, you have to read those cases. You have to read the cases to make sure what you are citing is accurate," Wale says — tapping her desk for emphasis.
Some courts have gone further, setting more expansive ethics rules that require lawyers to label anything they produced with AI, with details. The goal is to make it easier to know which briefs to double-check for hallucinations and to maintain a visible line between what's generated by humans and what's not.
"I think [labeling rules] are well-intentioned and are going to get swamped as useless pretty quickly," says lawyer-turned-journalist Joe Patrice. He's senior editor for the website Above the Law, where he writes about how, as he sees it, AI tools are being "forced" into almost all the software that lawyers use.
"It's going to become so integrated into how everything operates that to be diligently complying with the rule, you would have to put on everything you put out, 'Hey, this is AI assisted,' at which point it kind of becomes a useless endeavor," he says.
Patrice says AI is undeniably useful for combing through vast amounts of evidence or case law, or handling contracts. But he's leery of the next generation of products being marketed to lawyers — the "agentic" systems that offer to do legal jobs from start to finish.
"I think once you obscure those middle steps, that's where mistakes happen. And even people who are well-meaning and not lazy will lose things because they weren't involved in that process," he says.
More broadly, as AI tools speed up certain time-consuming tasks, he says, they threaten the traditional law firm business model of billable hours.
"There are two options. The lawyers can agree to take less — pause for laughter — or they can start finding a new way to bill. And I think they will probably begin a process of billing for the item," Patrice says.
If that happens, it may ratchet up the time pressure on lawyers and make it more tempting for them to accept the first draft of what AI spits out.
"And then it's a real question: Do you slow yourself down to have that natural thinking time?" Patrice asks. "Future generations who grow up in a world where this is always a reality, do they know to stop and think the problem through? And that's a worry."
Wale shares Patrice's concern about the potential erosion of future lawyers' analytical skills.

