
New York Expands Judicial Guidance on Generative AI

  • Writer: Niki Black
  • Feb 2
  • 4 min read

Here is my recent Daily Record column. My past Daily Record articles can be accessed here.






The ubiquity of generative artificial intelligence (AI) is undeniable. Even if you’re not yet using it, many of your colleagues are: according to data from the soon-to-be-released 8am Legal Industry Report, a majority of legal professionals now personally use AI for work-related purposes.


This level of adoption has had a significant impact on our justice system, and bar associations across the country have responded by issuing ethics opinions and guidance at a rapid clip.


After all, it’s not just practicing lawyers and litigants who are affected by AI usage; it’s judges and their staff as well. To address this issue, the “New York State Unified Court System Interim Policy On The Use Of Artificial Intelligence” was issued last October. As I explained in an article I wrote that month, the need for that guidance was evidence of the pervasiveness of this technology.


AI’s use in the judicial system was addressed yet again in December, when the Advisory Committee on Artificial Intelligence and the Courts issued its inaugural Annual Report to the Chief Judge and Chief Administrative Judge of the State of New York.


The Committee explained that the report reflects its “commitment to exploring the multifaceted impact of AI on court operations, legal practice, and access to justice.” And explore it does: the document is a whopping 154 pages long. The report itself runs 26 pages, with the remaining pages devoted to its nine appendices.


It covers a lot of ground, including how generative AI is being used and governed within the court system, as well as its impact on policy development, court operations, access and equity, evidence reliability, legal practice, and professional responsibility. 


According to the Committee, its goals were to support responsible innovation by encouraging judges and their staff to embrace “the efficiencies and improvements AI can offer, while remaining vigilant about the risks of bias, inaccuracy, and ethical compromise and proposing appropriate guardrails.”


One important part of the Committee’s work was to support the drafting of a document titled “Ethical Considerations and Recommendations for the Use of Artificial Intelligence by Judges and Judicial Staff,” which supplemented October’s Interim Policy by providing foundational principles for future judicial guidance on AI use in the courts.


The document, found in Appendix 9, was drafted by a subcommittee, which explained that the new report “integrates the UCS Interim Policy on the Use of Artificial Intelligence (October 2025) with basic principles of judicial ethics to provide general ethical parameters for the use of AI by judges and judicial staff.”


Rather than issue formal guidelines, the subcommittee included general considerations and recommendations, leaving it to UCS leadership or the Advisory Committee on Judicial Ethics (ACJE) to offer ethical guidance on AI by drafting opinions as needed.


At the outset, the subcommittee acknowledged that the rapidly evolving nature of AI technology precluded the application of “a strict set of rules to govern every use.” Instead, the chosen approach was to provide guidance that was broad enough to withstand the test of time.


Some of the most notable recommendations in the report focused on the thoughtful use of AI to supplement and streamline the work of judges and their staff. For example, the subcommittee emphasized that AI is not a replacement for judicial reasoning and discretion, neither of which can be delegated to AI tools. Importantly, judges remain “solely responsible for the content of all decisions.” Similarly, AI cannot be used “to conduct independent factual investigations or to supplement the evidence in the record.”


The subcommittee also highlighted that continuing to learn about technology is an important part of competence requirements, and judges “must maintain sufficient knowledge of AI tools to competently evaluate and supervise their use.”


Other notable recommendations revolved around the obligation to carefully review all AI output to ensure “legal accuracy, factual reliability, and neutrality” and the avoidance of “biased, arbitrary, or unverified outcomes.”


Finally, the subcommittee addressed how to appropriately incorporate AI into legal research workflows, explaining that “AI tools may be used to assist with traditional legal research but should not replace it,” and that all output should be “verified using authoritative sources.”


The release of this report is clear evidence that AI is having a noticeable impact on the administration of justice, and thus, understanding this technology is no longer optional. By tying the report’s recommendations to existing ethics principles and focusing on real-world use rather than rigid rules, the Committee produced guidance that is practical and easy to apply, offering the judiciary a helpful roadmap for adapting to an AI-powered reality while ensuring that the fundamentals of judgment, accountability, and public trust are carefully preserved.


Nicole Black is a Rochester, New York attorney, author, journalist, and Principal Legal Insight Strategist at 8am, the team behind 8am MyCase, LawPay, CasePeer, and DocketWise. She is the nationally recognized author of "Cloud Computing for Lawyers" (2012) and co-author of "Social Media for Lawyers: The Next Frontier" (2010), both published by the American Bar Association. She also co-authors "Criminal Law in New York," a Thomson Reuters treatise. She writes regular columns for Above the Law, ABA Journal, and The Daily Record, has authored hundreds of articles for other publications, and regularly speaks at conferences regarding the intersection of law and emerging technologies. She is an ABA Legal Rebel, and is listed on the Fastcase 50 and ABA LTRC Women in Legal Tech. She can be contacted at niki.black@mycase.com.






 
 

©2018 by Nicole Black.
