Judge slams lawyers for ‘bogus AI-generated research’


A California judge slammed a pair of law firms for the undisclosed use of AI after he received a supplemental brief with “numerous false, inaccurate, and misleading legal citations and quotations.” In a ruling submitted last week, Judge Michael Wilner imposed $31,000 in sanctions against the law firms involved, saying “no reasonably competent attorney should out-source research and writing” to AI, as pointed out by law professors Eric Goldman and Blake Reid on Bluesky.
“I read their brief, was persuaded (or at least intrigued) by the authorities that they cited, and looked up the decisions to learn more about them – only to find that they didn’t exist,” Judge Wilner writes. “That’s scary. It almost led to the scarier outcome (from my perspective) of including those bogus materials in a judicial order.”
As noted in the filing, a plaintiff’s legal representative in a civil lawsuit against State Farm used AI to generate an outline for a supplemental brief. This outline contained “bogus AI-generated research” when it was sent to a separate law firm, K&L Gates, which incorporated the information into a brief. “No attorney or staff member at either firm apparently cite-checked or otherwise reviewed that research before filing the brief,” Judge Wilner writes.
When Judge Wilner reviewed the brief, he found that “at least two of the authorities cited do not exist at all.” After he asked K&L Gates for clarification, the firm resubmitted the brief, which Judge Wilner said contained “considerably more made-up citations and quotations beyond the two initial errors.” He then issued an Order to Show Cause, resulting in lawyers giving sworn statements that confirmed the use of AI. The lawyer who created the outline admitted to using Google Gemini, as well as the AI legal research tools in Westlaw Precision with CoCounsel.
This isn’t the first time lawyers have been caught using AI in the courtroom. Former Trump lawyer Michael Cohen cited made-up court cases in a legal document after mistaking Google Gemini, then called Bard, for “a super-charged search engine” rather than an AI chatbot. A judge also found that lawyers suing a Colombian airline had included a slew of phony cases generated by ChatGPT in their brief.
“The initial, undisclosed use of AI products to generate the first draft of the brief was flat-out wrong,” Judge Wilner writes. “And sending that material to other lawyers without disclosing its sketchy AI origins realistically put those professionals in harm’s way.”