NYC on the Ethics of Using AI Tools During Client Meetings
- Niki Black

Here is my recent Daily Record column. My past Daily Record articles can be accessed here.
NYC on the Ethics of Using AI Tools During Client Meetings
Whether you realize it or not, you’re likely using tools in your law firm that incorporate artificial intelligence (AI) functionality. From word processing and email to document drafting and practice management software, AI is probably running in the background, streamlining your workflows and assisting with the creation of work product.
These days, it’s difficult to escape the benefits—and impact—of AI tools. And, rest assured, you're not the only one using them. Your clients are, too, and you might not even know it.
With this increased AI adoption comes a host of ethical dilemmas, and bar associations across the country have risen to the occasion, offering guidance for our profession.
Most recently, the New York City Bar Association weighed in with insights on the ethical use of AI tools during client meetings. Specifically, the question answered last month in Formal Opinion 2025-6 was: What ethical issues should attorneys consider when using, or when clients use, AI-enabled communications tools that can record, transcribe, and summarize conversations with clients?
The opinion addressed a number of issues, including avoiding deceit and misrepresentation, technology competence, confidentiality, and privilege. Most notably, the Professional Ethics Committee offered insight into how to address ethical obligations when clients use these tools.
First, the Committee tackled the obligation under Rule 8.4 to be forthright with clients, explaining that, as is the case with more traditional meeting recording mechanisms, clients must be informed when AI-enabled meeting tools are used: “Because clients may speak differently if they know they are being recorded…(they) must be notified, and their consent obtained, whenever their calls are being recorded by an AI-empowered system.”
Next, the Committee considered technology competence requirements and the privacy and security implications of these tools. According to the Committee, lawyers must familiarize themselves with a product’s functionality and carefully vet providers to ensure they have a full understanding of the “privacy and security safeguards…in place in an AI tool to protect the data, including where data will be stored and for how long, how data might be retrievable through discovery, whether the tool uses such data for training, and whether there is a right to data deletion.” Lawyers must also advise clients “of the risks of the loss of confidentiality and privilege, particularly when clients are using their own AI tools.”
The Committee advised lawyers to take steps to mitigate the risks to privilege that may arise when clients use these tools. Specifically, they “should include provisions in retainer agreements stating that any recordings, transcripts, or summaries prepared by AI tools selected or used by the client will not be deemed dispositive or binding as against the attorneys unless they are promptly provided to the attorneys so that the attorneys may conduct independent reviews of the accuracy of these materials.”
This obligation to carefully review the output of AI meeting tools applies equally to information created for lawyers: “Recordings and summaries may be relied upon, sometimes even years later, and the attorney should therefore review them soon after they are prepared and ensure that any necessary revisions are made. This includes carefully reviewing any informal advice that the attorneys may have offered without adequate time for full reflection, but which may take on greater weight because of the memorialization of the advice in a written transcript or summary.”
This opinion arrives at an interesting moment, as AI is rapidly becoming commonplace in law firms. With the proliferation of this technology comes an accompanying need to ensure that lawyers have roadmaps for ethical adoption.
That being said, while I appreciate the New York City Bar’s consistent willingness to lead the way with ethics guidance about emerging technologies, I’m not convinced that the majority of this opinion was necessary.
Many of the issues that arise when lawyers use these AI meeting tools are not novel, and prior opinions offered advice that is easily applied to this new technology. However, the client-side issues are newer, and the guidance provided to ensure that client and attorney interests are protected was both necessary and helpful.
Over time, AI will become as familiar as word processing software, and concerns about its safety and impact will dissipate, especially as ethics committees address all perceived issues triggered by its implementation in law firms. Until then, we can expect continued guidance—some necessary and some not—that ensures lawyers carefully consider the ramifications of AI adoption.
Nicole Black is a Rochester, New York attorney, author, journalist, and Principal Legal Insight Strategist at 8am, the team behind MyCase, LawPay, CasePeer, and DocketWise. She is the nationally recognized author of "Cloud Computing for Lawyers" (2012) and co-author of "Social Media for Lawyers: The Next Frontier" (2010), both published by the American Bar Association. She also co-authors "Criminal Law in New York," a Thomson Reuters treatise. She writes regular columns for Above the Law, ABA Journal, and The Daily Record, has authored hundreds of articles for other publications, and regularly speaks at conferences regarding the intersection of law and emerging technologies. She is an ABA Legal Rebel and is listed on the Fastcase 50 and ABA LTRC Women in Legal Tech. She can be contacted at niki.black@mycase.com.

