New York Joins the Push for Ethical AI Integration in the Courts
- Niki Black
- Oct 20
- 3 min read

Here is my recent Daily Record column. My past Daily Record articles can be accessed here.
New York Joins the Push for Ethical AI Integration in the Courts
In a recent column, I highlighted how courts around the country are addressing generative artificial intelligence (AI) adoption by their employees. AI is impacting all aspects of work, and the judiciary is recognizing the value it can bring to the work performed by those who dispense justice.
For example, the Illinois Supreme Court’s AI policy, released earlier this year, encouraged AI understanding and emphasized the importance of carefully vetting AI tools and supervising their use by court personnel.
Other approaches include the Arizona and Pennsylvania Supreme Courts' recent AI initiatives, which focus on the ethical and responsible use of AI in the courts. In August, the Arizona Supreme Court amended the Rules of Judicial Conduct to require technology competence. The Supreme Court of Pennsylvania implemented an “Interim Policy on the Use of Generative Artificial Intelligence by Judicial Officers and Court Personnel” in September, which permitted AI use and provided guidance on its ethical implementation.
Now, New York State has joined their ranks, issuing the “New York State Unified Court System Interim Policy On The Use Of Artificial Intelligence,” which became effective October 2025.
The stated goal of the policy is to “promote the responsible and ethical use of AI technology” in the courts. It applies to all judges and nonjudicial employees of the UCS, covering “all functions performed on a UCS-owned device” and “all UCS-related work performed on any device.”
According to the policy, court personnel are responsible for all AI output. They must carefully review content created by AI tools and “make necessary revisions to ensure that it is accurate and appropriate, and does not reflect any unfair bias, stereotypes, or prejudice.” In other words, AI can assist, but it cannot replace human judgment.
Only approved tools are permitted, including Microsoft Azure AI Services, Microsoft Copilot 365 Chat, GitHub Copilot for Business or Enterprise, Trados Studio, and the OpenAI ChatGPT Free Version. Notably, paid ChatGPT versions are prohibited, most likely to limit access to advanced features, integrations, and account variations that could complicate oversight or increase data exposure risks.
The policy also requires court employees to complete a training course before using any AI tool for work-related tasks, ensuring both technical competence and an understanding of the applicable ethical obligations.
“Confidential information” is defined broadly and includes “docket numbers, party names, addresses, and dates of birth.” The policy also prohibits uploading any court filings, public or not, into a non-private model.
Confidential information may only be input into private AI models, which are defined as “a model that is under UCS control and does not share data with any public LLM.” This restriction reinforces the courts’ focus on data security and the protection of sensitive information.
These recent developments highlight just how pervasive AI has become in the three short years since the initial release of ChatGPT in November 2022. Courts across our nation are not only aware of AI—they’re actively encouraging its implementation, even within their own ranks. This level and pace of technology adoption by judicial bodies are unprecedented, a testament to AI's ubiquity and utility.
No matter how you look at it, the disruption caused by AI is on par with the advent of personal computing and the Internet. Both of those technologies shaped the world we live in, transforming the ways that we engage with it, both personally and professionally.
AI promises to do the same. If its potential is fully realized, access to legal services will increase significantly, legal work will be more impactful, and justice will be more readily served.
The courts' increasing recognition of AI is one more sign that it is poised to profoundly impact our system of justice. Whether its tremendous potential comes to fruition remains to be seen, but its adoption by courts is a trend worth tracking.
Nicole Black is a Rochester, New York attorney, author, journalist, and Principal Legal Insight Strategist at 8am, the team behind 8am MyCase, LawPay, CasePeer, and DocketWise. She is the nationally recognized author of "Cloud Computing for Lawyers" (2012) and co-author of "Social Media for Lawyers: The Next Frontier" (2010), both published by the American Bar Association. She also co-authors "Criminal Law in New York," a Thomson Reuters treatise. She writes regular columns for Above the Law, ABA Journal, and The Daily Record, has authored hundreds of articles for other publications, and regularly speaks at conferences regarding the intersection of law and emerging technologies. She is an ABA Legal Rebel, and is listed on the Fastcase 50 and ABA LTRC Women in Legal Tech. She can be contacted at niki.black@mycase.com.

