Legal AI: ABA & State Legal Ethics Guidance on Artificial Intelligence

The legal profession is experiencing a technological transformation unlike any in its history, with artificial intelligence (AI) now capable of drafting documents, analyzing case law, reviewing contracts, and even predicting litigation outcomes. In a short time, AI has emerged as both a powerful resource and a source of ethical complexity for practicing attorneys. 

The integration of AI into legal work represents a fundamental shift in how legal services can be delivered. Attorneys now face critical questions about maintaining competence, protecting client confidentiality, properly supervising AI systems, verifying AI-generated work, and charging appropriately for AI-assisted services. 

Recent high-profile incidents demonstrate the very real risks of improper AI use and highlight the need for clear ethical guidance. In Mata v. Avianca, attorneys were sanctioned for submitting a brief containing fabricated judicial decisions generated by ChatGPT. And in the United States v. Cohen case, the Southern District of New York criticized an attorney for citing three cases that were “hallucinated” by Google Bard (now Gemini). 

As AI tools become increasingly sophisticated and widely adopted, bar associations nationwide are developing guidance to help attorneys navigate this new terrain while maintaining their ethical obligations. This guidance ranges from comprehensive frameworks to preliminary committees, creating a patchwork of state bar AI rules, recommendations and opinions for attorneys that vary widely in terms of maturity and comprehensiveness.


The Ethical AI Framework

The integration of artificial intelligence in legal practice raises questions that strike at the core of attorneys' ethical duties, such as: 

  • How can lawyers maintain competence when using technologies they may not fully understand? 
  • How can client confidentiality be protected when data might be processed by third-party AI systems? 
  • Who bears responsibility when AI tools produce inaccurate information that impacts client representation?

The American Bar Association has taken a leading role in addressing these concerns through its Formal Opinion 512, providing a foundation for ethical AI use that many state bars have built upon. Released in July 2024, this opinion applies the existing Model Rules of Professional Conduct to the emerging challenges of generative AI, offering a thoughtful framework that balances technological innovation with ethical obligations.

The ABA's guidance emphasizes that while AI tools can enhance legal practice, they cannot replace the professional judgment and responsibility of human attorneys. Instead, the opinion establishes that lawyers must approach AI as they would any other tool—with adequate understanding, appropriate supervision, and ultimate accountability for all output. 

ABA Model Rule 1.1: Competence in the AI Era

Formal Opinion 512 clarifies that a lawyer's duty of competence extends to the use of generative AI. This obligation requires attorneys to understand both the legal and technical aspects of the AI tools they employ:

  • Attorneys must have a reasonable understanding of how AI technology works, though not necessarily full technical competence.
  • The duty is ongoing and requires attorneys to stay vigilant about developments in AI technology.
  • Attorneys can rely on experts to provide guidance when necessary, but cannot delegate their professional responsibility.

The opinion states that attorneys must ensure "competent use of technology, including the associated benefits and risks, and apply diligence and prudence with respect to facts and law." This framework establishes that while AI can enhance legal practice, it cannot replace the trained judgment of an attorney.

Confidentiality and Client Data

Perhaps the most critical ethical concern involves the protection of client confidentiality. AI systems vary widely in how they handle data:

  • "Open" systems may use inputted information to train their models or share it with third parties.
  • "Closed" systems keep information within protected databases.
  • Terms of Use for AI platforms can change frequently, requiring ongoing monitoring.

Attorneys must be particularly cautious about inputting confidential client information into AI systems without appropriate security measures and, when necessary, client consent. The ABA guidance emphasizes the need for attorneys to understand the data practices of any AI tool they use and to implement appropriate safeguards.

For instance, while general-purpose AI systems like ChatGPT may store user inputs to improve their models, potentially creating confidentiality risks, specialized legal AI tools like Steno’s Transcript Genius are designed with attorney-client privilege in mind. Transcript Genius adheres to SOC 2 Type II and HIPAA compliance standards (visit our Trust Center for more information), ensuring that sensitive deposition content remains secure. 

Unlike general AI platforms, Transcript Genius was designed by litigators for litigators and explicitly does not retain data after sessions conclude nor use client conversations to train its AI models. This crucial distinction means that when attorneys analyze a sensitive medical malpractice deposition or review confidential settlement discussions using Transcript Genius, they can be confident their client's information won't be stored or repurposed for future AI training—a security guarantee that general-purpose AI platforms typically can't provide.

Supervision and Accountability

AI tools require proper supervision, similar to the oversight necessary for paralegals and other non-lawyer legal professionals. ABA guidance suggests that:

  • Managerial attorneys should establish clear policies for AI use
  • Firms should provide training on the ethical and practical aspects of AI
  • Subordinate attorneys must not use AI in ways that violate their professional obligations, even at the direction of supervisors
  • All AI-generated content requires careful review and verification before use in client matters

AI Output Verification

The phenomenon of AI "hallucinations" (where systems generate false information that appears credible) has created new challenges for attorneys. Recent cases like those mentioned above, where attorneys submitted non-existent judicial opinions generated by AI, highlight the practical importance of these verification responsibilities. The ABA emphasizes that:

  • All AI output must be critically reviewed for accuracy
  • Citations and legal authorities generated by AI require particular scrutiny
  • Attorneys remain responsible for the accuracy of all work product, regardless of whether AI tools were used in their creation

Unlike general AI systems that provide information without clear sourcing, legal AI tools like Transcript Genius have been specifically designed to solve the verification challenge through linked citations. For example, when generating insights from deposition transcripts, Transcript Genius includes clickable page-line citations for every reference it makes. 

So, if Steno’s AI identifies a key admission from a medical expert witness, the attorney can instantly verify this information by clicking the associated citation, which highlights the exact testimony in the original transcript. This built-in verification system eliminates the risk of hallucinated content by creating a clear chain of evidence back to the source material.


State Bar Approaches: Varying Levels of Maturity

A review of statements from various state bar associations shows a range of ways to address AI ethics at different maturity levels. Some state bars have developed comprehensive frameworks with detailed guidance on specific rules of professional conduct, while others have issued preliminary statements or formed exploratory committees. Notably, a large number of state bar associations have not yet issued any formal guidance for law firms.

This diversity of approaches reflects the complex nature of AI regulation: the technology is evolving rapidly, potential risks and benefits are still being discovered, and the legal profession is determining how best to balance innovation with ethical obligations. The absence of guidance in many states also signals the nascent stage of AI ethics development in the legal profession, with many bar associations likely taking a wait-and-see approach.

The spectrum of responses ranges from proactive and detailed (e.g., New York) to exploratory and foundational (e.g., Georgia). Some jurisdictions focus on practical implementation guidance, while others emphasize broader principles or specific risk areas. This varied landscape creates challenges for attorneys practicing across multiple jurisdictions, but also provides valuable perspectives on the many ways the profession can address AI ethics.

New York State Bar Association: Comprehensive AI Guidance

The New York State Bar Association (NYSBA) has developed one of the most robust frameworks for AI ethics. In April 2024, its Task Force on Artificial Intelligence released a comprehensive report with recommendations that provide a thorough analysis of how AI is reshaping legal practice.

New York's approach stands out for addressing the full spectrum of ethical considerations related to AI use. The NYSBA report begins by examining foundational concepts around competence when using AI tools, emphasizing that attorneys must understand not only how to operate these technologies but also their limitations and potential risks.

The Task Force report pays particular attention to confidentiality concerns, recognizing that client data protection becomes more complex when third-party AI systems are involved. The guidance offers practical considerations for evaluating AI platforms' data security practices and determining when client consent might be necessary for certain AI applications.

Rather than simply identifying problems, the NYSBA Task Force's recommendations provide forward-looking solutions for law firms. The report suggests practical measures for attorneys to verify AI-generated content, approaches to supervising junior attorneys and staff using AI tools, and considerations for client communication about AI use.

The New York guidance also addresses emerging questions about appropriate billing practices for AI-assisted work, balancing the efficiency benefits of AI with ethical obligations around reasonable fees. This nuanced approach acknowledges both the legitimate professional work involved in effectively using AI and the need to pass appropriate cost savings on to clients.

What distinguishes the NYSBA approach is its comprehensive nature, addressing everything from day-to-day practice considerations to broader questions about how AI might transform the profession over time. By providing this thorough framework, the New York State Bar Association has created a valuable resource for attorneys navigating AI implementation while upholding their ethical obligations.

California: Practical Implementation Guide

While New York's guidance provides a comprehensive theoretical framework, the State Bar of California's more pragmatic approach focuses on immediate, actionable guidance for practicing attorneys. The bar’s “Practical Guidance for the Use of Generative Artificial Intelligence in the Practice of Law” is deliberately structured as a practical roadmap rather than an exhaustive analysis. 

The document organizes recommendations around specific ethical duties that attorneys encounter in day-to-day practice, with particular emphasis on security and confidentiality challenges. Where New York's approach explores broader implications of AI on the profession, California focuses on helping attorneys make sound decisions in the immediate term. 

The California State Bar places special emphasis on the technical aspects of responsible AI implementation. Attorneys are encouraged to consult with IT professionals and cybersecurity experts before integrating AI tools that might process confidential client information. This technical focus reflects California's position at the intersection of technology and law.

California's approach to supervision and accountability is also noteworthy. The guidance provides specific recommendations for law firm management on establishing clear AI usage policies and training protocols. It also addresses the responsibilities of subordinate attorneys when using AI tools, creating a clear chain of accountability that complements the state's existing ethical framework.

Texas: Forward-Looking Advisory Framework

The State Bar of Texas has adopted a distinctive approach to AI ethics through its Taskforce for Responsible AI in the Law (TRAIL). Texas's approach is characterized by its breadth of focus across multiple domains of legal practice. The TRAIL interim report examines AI implications not only for general practice but also for specific practice areas. This specialized analysis acknowledges that AI tools and their ethical implications may vary significantly depending on the context in which they're deployed.

What distinguishes the Texas framework is its emphasis on collaborative development of AI ethics guidelines. Where other states have issued definitive guidance, Texas has positioned its taskforce as a bridge-builder between various stakeholders including practitioners, judges, legal educators, and technology experts. The report explicitly recommends AI summits and ongoing collaborative efforts, reflecting a recognition that AI ethics in legal practice will require continuous evolution.

The Texas approach also stands apart in its attention to access and equity considerations. The TRAIL report specifically addresses the potential for AI to help close the "justice gap" by making legal services more accessible, while also acknowledging the risk that unequal access to AI technology could exacerbate existing disparities. This focus on both the promises and pitfalls of AI for access to justice adds a dimension that is less prominent in other states' guidance.

Texas has also emphasized the importance of education and training, recommending both continuing legal education for practicing attorneys and modifications to law school curricula. This educational focus complements the state's broader advisory framework by recognizing that responsible AI use requires not just rules but also knowledge development throughout the legal community.

Georgia: Committee Formation and Ongoing Development

The State Bar of Georgia represents an important counterpoint to the more developed guidance seen in the states discussed above. Georgia has taken initial steps by forming a Special Committee on Artificial Intelligence and Technology, illustrating how many state bars are in the early stages of addressing AI ethics.

Rather than rushing to issue guidance, the state has prioritized careful evaluation of how existing Rules of Professional Conduct interact with technological developments, including AI. This methodical, deliberate approach reflects a recognition that effective AI ethics guidance must be built on a thorough understanding of both current rules and emerging technologies.

The committee's mandate includes making recommendations to the Supreme Court of Georgia and Board of Governors about whether current rules and policies sufficiently address technology-related actions by attorneys. This approach demonstrates the importance of laying careful groundwork before issuing definitive rules. While attorneys in Georgia may not yet have specific AI ethics guidance, the committee's work signals the bar's recognition of AI's significance and its commitment to developing appropriate guidelines.


State-by-State Guide to Bar Association AI Ethics Guidance

Below is an alphabetical listing of state bar associations and their current guidance on AI ethics in legal practice. This list will be updated as states develop or refine their guidance.

  • Alabama: No formal guidance is available.
  • Alaska: No formal guidance is available.
  • Arizona: No formal guidance is available.
  • Arkansas: A task force has been established.
  • California: Practical Guidance for the Use of Generative Artificial Intelligence in the Practice of Law (See above.)
  • Colorado: Recommendations are being explored.
  • Connecticut: A task force is exploring the issue. 
  • Delaware: No formal guidance is available.
  • District of Columbia: Ethics Opinion 388: Attorneys’ Use of Generative Artificial Intelligence in Client Matters
  • Florida: Bar Board Review Committee on Professional Ethics Opinion 24-1 (2024)
  • Georgia: Special Committee on Artificial Intelligence and Technology formed; guidance in development. (See above.)
  • Hawaii: A committee is exploring recommendations.
  • Idaho: No formal guidance is available.
  • Illinois: A committee has provided initial recommendations.
  • Indiana: No formal guidance is available.
  • Iowa: The state bar association has provided a list of resources on AI.
  • Kansas: No formal guidance is available.
  • Kentucky: Bar Association Ethics Opinion KBA E-457
  • Louisiana: No formal guidance is available.
  • Maine: No formal guidance is available.
  • Maryland: No formal guidance is available. 
  • Massachusetts: No formal guidance is available.
  • Michigan: Artificial Intelligence for Attorneys—Frequently Asked Questions 
  • Minnesota: Working Group on AI Report & Recommendations
  • Mississippi: Ethics Opinion No. 267
  • Missouri: Informal Opinion 2024-11
  • Montana: No formal guidance is available.
  • Nebraska: No formal guidance is available.
  • Nevada: The state bar has formed an advisory group but no formal guidance has been issued.
  • New Hampshire: Ethics of Using Artificial Intelligence in Practice 
  • New Jersey: State Bar Association Task Force on Artificial Intelligence and the Law: Report, Requests, Recommendations, and Findings
  • New Mexico: Formal Ethics Advisory Opinion 2024-11
  • New York: State Bar Association Task Force on Artificial Intelligence Report and Recommendations (See above.)
  • North Carolina: 2024 Formal Ethics Opinion 1: Use of Artificial Intelligence in Law Practice
  • North Dakota: No formal guidance is available.
  • Ohio: No formal guidance is available.
  • Oklahoma: No formal guidance is available.
  • Oregon: Oregon State Bar Bulletin: The AI Issue
  • Pennsylvania: Joint Formal Opinion 2024-200: Ethical Issues Regarding the Use of Artificial Intelligence
  • Rhode Island: No formal guidance is available.
  • South Carolina: No formal guidance is available.
  • South Dakota: No formal guidance is available.
  • Tennessee: A task force is exploring recommendations.
  • Texas: Texas Bar Taskforce (TRAIL) Interim Report on Responsible AI in the Law (See above.)
  • Utah: Using ChatGPT in Our Practices: Ethical Considerations
  • Vermont: No formal guidance is available.
  • Virginia: Guidance on Generative AI
  • Washington: A legal technology task force has been formed.
  • West Virginia: Legal Ethics Opinion 24-01 
  • Wisconsin: No formal guidance is available.
  • Wyoming: No formal guidance is available.


Sources
  • AI and Attorney Ethics Rules: 50-State Survey, Justia.
  • State Legal Ethics Guidance on Artificial Intelligence (AI), Bloomberg Law.


Joe Stephens, J.D., is a Consulting Attorney specializing in legal AI technology at Steno and a clinical Lecturer at Texas Tech University School of Law. With over 15 years of experience in criminal defense and public service, he founded and led Texas' largest rural public defender office, which serves a 12-county area. A graduate of The University of Texas School of Law (cum laude) and Vanderbilt University (B.A.), Stephens currently serves as a Board Member of the Texas Criminal Defense Lawyer's Association (TCDLA) and sits on multiple State Bar of Texas committees. His expertise spans the intersection of legal practice and technological innovation in the justice system.
