
California’s Warning Shot: Lawyers Can’t Outsource Responsibility to AI

  • Writer: Niki Black
    Niki Black
  • Feb 11
  • 3 min read

Here is my recent Daily Record column. My past Daily Record articles can be accessed here.




Unless you’ve been hiding under a rock, generative artificial intelligence (AI) is on your radar. And, if you’re like the majority of legal professionals, you’ve probably taken it for a test drive. If you have, then you’re undoubtedly aware of the speed at which it works. Enter a query, and within seconds you’ll receive an answer, a summary, or even a draft legal document. 


Those responses will appear at first glance to be impressive: thorough, well-written, and authoritative. But look closer, and you’ll quickly realize that things aren’t quite what they seem. There are leaps in logic, faulty analysis, and even made-up information. 


That’s why taking a second look at AI-generated output is so important. Unfortunately, judging from the headlines, many attorneys are accepting that first draft at face value and even submitting it to the court.


Case in point: a recent California decision in which the judge was less than pleased by an attorney’s failure to carefully review material cited in an AI-generated submission to the court. In N.Z. v. Fenix Int’l Ltd., 8:24-cv-01655-FWS-SSC (C.D. Cal. December 25, 2025), the court determined that sanctions were appropriate because the attorney “used ChatGPT to assist in drafting the opposition briefs but failed to verify the validity of the AI-generated material … and failed to realize when, and to what extent, ChatGPT was modifying her research/writing — supplementing and/or cross-pollinating concepts and authorities.”


In response to situations like these, the California Senate passed a bill at the end of January that is intended to regulate attorneys’ AI usage. The bill addresses a number of issues, including the steps lawyers must take when using generative AI in their practices. It requires lawyers to take reasonable steps to verify the accuracy of all materials produced by generative AI, correct any erroneous or hallucinated output, and remove any biased or harmful content. This requirement applies equally to output generated by an attorney or on the attorney’s behalf.

The bill requires that a “brief, pleading, motion, or any other paper filed in any court shall not contain any citations that the attorney responsible for submitting the pleading has not personally read and verified, including any citation provided by generative artificial intelligence.”

It also addresses arbitrators’ obligations when using AI, emphasizing that delegating any part of decision-making to AI is not permitted. It requires arbitrators to make appropriate disclosures to all parties prior to “relying on information generated by generative artificial intelligence outside the record.” Additionally, they must “assume responsibility for all aspects of an award, regardless of any use of generative artificial intelligence tools to assist with the decision-making process.”

The bill has been sent to the Assembly for review, so stay tuned to see whether it passes. Regardless of what happens, this piece of legislation likely won’t be the last. Courts across the country are struggling to address the growing problem of AI-generated legal briefs that contain inaccurate and outright fake information. Legislation may not always be the chosen path, but it’s an approach that some jurisdictions will likely take.


Judges and legislators are making it very clear that outsourcing legal work, including research, to AI without verification is not permissible. AI does not change your professional obligations or ethical duties; the responsibility for the research, analysis, and final work product remains with you, the attorney.


You can absolutely use AI. You absolutely should use AI. You just can’t hand it the wheel and close your eyes. Choose to be part of the solution, not the problem, by carefully reviewing and correcting all AI-created legal documents.


Nicole Black is a Rochester, New York attorney, author, journalist, and Principal Legal Insight Strategist at 8am, the team behind MyCase, LawPay, CasePeer, and Docketwise. She is the nationally recognized author of "Cloud Computing for Lawyers" (2012) and co-author of "Social Media for Lawyers: The Next Frontier" (2010), both published by the American Bar Association. She also co-authors "Criminal Law in New York," a Thomson Reuters treatise. She writes regular columns for Above the Law, ABA Journal, and The Daily Record, has authored hundreds of articles for other publications, and regularly speaks at conferences regarding the intersection of law and emerging technologies. She is an ABA Legal Rebel, and is listed on the Fastcase 50 and ABA LTRC Women in Legal Tech. She can be contacted at niki.black@mycase.com.


 
 
