Courts Adapt to the Challenges of Generative AI
- Niki Black
- Oct 6
- 3 min read

Here is my recent Daily Record column. My past Daily Record articles can be accessed here.
The ubiquity of generative artificial intelligence (AI) is dramatically impacting the practice of law. Many consumer-grade tools regularly used by legal professionals have built-in AI features, including Microsoft and Google products. Similarly, most legal software providers are also integrating AI into their products. From legal research and contract analysis to law practice management and billing systems, there’s no escaping embedded AI functionality and its benefits—and drawbacks.
AI is changing how legal work gets done, and the effects aren’t limited to law offices. Other legal organizations are equally impacted, including the courts. As judicial offices around the country grapple with the how and why of secure AI adoption, new rules, policies, and processes are being implemented to address the ethical and practical issues presented.
For example, earlier this year, I wrote about the Illinois Supreme Court’s progressive AI policy, which encouraged AI understanding and highlighted the importance of carefully vetting AI tools and supervising their use by court personnel.
More recently, the Arizona and Pennsylvania Supreme Courts also took steps to ensure ethical and responsible AI usage in the courts.
The Arizona Supreme Court tackled the issue by amending the Rules of Judicial Conduct after being approached by a member of the Arizona Steering Committee on Artificial Intelligence and the Courts. On behalf of the Committee, he filed a petition seeking the amendment of Rule 2.5 of the Arizona Code of Judicial Conduct.
After considering the request, the Court issued an Order dated August 27, 2025, amending Comment 1 to Rule 2.5, which addresses competence, diligence, and cooperation.
The change takes effect on January 1, 2026, at which point the following language (italicized in the Order for emphasis) will be added to the end of Comment 1: “Competence in the performance of judicial duties requires the legal knowledge, skill, thoroughness, and preparation reasonably necessary to perform a judge’s responsibilities of judicial office, including the use of, and knowledge of the benefits and risks associated with, technology relevant to service as a judicial officer.”
The Supreme Court of Pennsylvania took a different approach and implemented an “Interim Policy on the Use of Generative Artificial Intelligence by Judicial Officers and Court Personnel.”
The policy allows the use of “GenAI for work only as set forth in this Policy.” Leadership-approved AI tools are permissible, and use cases include summarizing documents, drafting communications or memoranda, conducting preliminary legal research, improving readability of public documents, and assisting self-represented litigants.
According to the policy, court personnel remain accountable for accurate output and must comply with ethical and professional rules when using AI: “Personnel must understand the limitations of GenAI tools and review GenAI output for accuracy, completeness, and potentially biased or inaccurate content.”
A key provision in the policy distinguishes between “secured” and “non-secured” AI systems. Confidential or non-public information may only be entered into secure tools that guarantee privacy; unsecured, unapproved platforms are strictly prohibited. Finally, court leadership is responsible for contract review and enforcement to ensure compliance with the policy.
As courts continue to grapple with rapid technological advancement, judicial efforts to provide guidance for ethical AI adoption will only increase. The steps taken by the Illinois, Arizona, and Pennsylvania judiciaries are welcome first steps, but more proactive, coordinated efforts will be required to ensure the integrity of our courts.
AI-generated deepfake evidence is surfacing in trials, and fake case citations have appeared in published court opinions. Educated, technologically savvy judges and personnel are the only way to combat AI's growing risks, which threaten to blur the line between reality and fiction in daily life and in the administration of justice. The judiciary’s recent policies are an important start, but ongoing vigilance and adaptation will be essential to preserve the integrity of the justice system.
Nicole Black is a Rochester, New York attorney, author, journalist, and Principal Legal Insight Strategist at 8am, the team behind 8am MyCase, LawPay, CasePeer, and DocketWise. She is the nationally recognized author of "Cloud Computing for Lawyers" (2012) and co-author of "Social Media for Lawyers: The Next Frontier" (2010), both published by the American Bar Association. She also co-authors "Criminal Law in New York," a Thomson Reuters treatise. She writes regular columns for Above the Law, ABA Journal, and The Daily Record, has authored hundreds of articles for other publications, and regularly speaks at conferences regarding the intersection of law and emerging technologies. She is an ABA Legal Rebel and is listed on the Fastcase 50 and ABA LTRC Women in Legal Tech. She can be contacted at niki.black@mycase.com.


