The AI Gavel: An Attorney’s Guide to Ethically Using LLMs
Yes, lawyers can ethically use artificial intelligence, but it demands a vigilant and informed approach. The use of Large Language Models (LLMs) like ChatGPT doesn’t change your fundamental ethical duties, but it does create new pathways to potentially violate them. The core challenge is balancing the efficiency AI offers with the non-negotiable duties of confidentiality, competence, and supervision.
Successfully integrating AI into your practice means treating it as a sophisticated but fallible assistant, not as co-counsel (though when brainstorming, you can think of it as one of the smartest litigators you know). You must understand the technology’s privacy settings, rigorously protect client data, verify all outputs, and always apply your own professional judgment.
🏛️ Core Ethical Duties Implicated by AI
Your ethical obligations remain the same, but AI tools test them in novel ways. Here’s how the most critical duties apply in the age of generative AI.
Duty of Confidentiality
This is the most significant ethical hurdle. When you input information into an AI platform, you are sharing it with a third party, which has serious implications for attorney-client privilege.
- The Default Risk with Free Tools: Many consumer-grade, free AI tools explicitly state in their terms of service that they use your inputs to train their models. Any confidential client information you enter could become part of the AI’s “knowledge base,” potentially surfacing in responses to other users. This is a clear breach of confidentiality.
- The Solution: Privacy Controls in Paid Tiers. You do not necessarily need a top-tier enterprise account to protect client data, but you must use an account with robust privacy controls. The key is to verify the platform’s data policies:
- Paid Consumer Accounts (e.g., ChatGPT Plus): Many paid subscriptions allow you to go into your settings and disable chat history and opt out of having your conversations used for model training. This is a critical step. Some also offer automatic chat deletion.
- Enterprise-Level Accounts: These solutions offer the highest level of protection, often including contractual guarantees (like a Business Associate Agreement), “zero data retention” policies, and administrative controls for a whole firm.
Before using any paid tier, read the terms of service to confirm your data is not being used in a way that violates your duty of confidentiality.
Duty of Competence
The duty of competence requires you to understand the technology you use. With AI, this means recognizing its significant limitations.
- The Risk of “Hallucinations” and Inaccuracies: LLMs are designed to generate plausible-sounding text, not to state factual truth. They sometimes invent legal citations, misstate case holdings, and fabricate facts. Lawyers have been sanctioned for submitting court filings containing entirely fictional case law generated by AI. The major platforms, such as Gemini and ChatGPT, are working on improvements in this area, but no current model eliminates the problem.
- The Mandate to Verify: Never trust AI output without independent verification. Every case citation, statutory reference, and factual assertion must be checked using reliable legal research tools. Think of the AI’s output as a first draft written by a very smart, but sometimes unreliable, intern. Your job is to apply your legal expertise to correct, refine, and validate it.
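One way to make the verification step systematic is to pull every citation-like string out of a draft into a checklist before you open your research service. The Python sketch below is a minimal, illustrative example: the regular expression is an assumption of mine that only catches simple “Party v. Party, Vol Reporter Page” citations, not the many other formats real filings contain, so it supplements rather than replaces a careful read.

```python
import re

# A deliberately simple pattern for citations shaped like
# "Smith v. Jones, 123 F.3d 456". Real citation formats vary widely;
# this is a first-pass net for building a verification checklist,
# not a citation parser.
CITE_RE = re.compile(
    r"[A-Z][A-Za-z.'&\-]*(?: [A-Z][A-Za-z.'&\-]*)*"   # first party name
    r" v\. "
    r"[A-Z][A-Za-z.'&\-]*(?: [A-Z][A-Za-z.'&\-]*)*"   # second party name
    r", \d+ [A-Z][A-Za-z. 0-9]*? \d+"                  # volume, reporter, page
)

def citation_checklist(draft: str) -> list[str]:
    """Return each citation-like string found in the draft, so a human
    can verify each one in a trusted legal research service."""
    return [m.group(0).strip() for m in CITE_RE.finditer(draft)]

draft = (
    "As held in Smith v. Jones, 123 F.3d 456, and reaffirmed in "
    "Doe v. Roe, 540 U.S. 12, the standard applies."
)
for cite in citation_checklist(draft):
    print("VERIFY:", cite)
# VERIFY: Smith v. Jones, 123 F.3d 456
# VERIFY: Doe v. Roe, 540 U.S. 12
```

The point of the script is the checklist, not automation of judgment: every line it prints still has to be looked up and read by the attorney.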
Duty of Supervision
Model Rule 5.3 requires lawyers to supervise the work of non-lawyers. It’s best to think of an LLM as a non-lawyer assistant.
- Over-Reliance and Abdication of Judgment: You are ultimately responsible for every document that leaves your office. You cannot blame the AI if a brief is filled with errors or a legal strategy is based on a flawed premise.
- Maintaining Final Authority: Use AI for specific, supervised tasks like brainstorming arguments, summarizing depositions, or drafting routine correspondence. The final work product must be the result of your professional skill. You must review, edit, and approve all AI-generated content.
✅ Practical Tips for Ethically Using LLMs
Here are actionable steps you can take to use AI tools while safeguarding client information and your professional standing.
- Choose the Right Account and Settings. Before anything else, ensure you are using a paid account where you can confirm in the settings that your chat history is disabled and your data will not be used for training. For firms, an enterprise-level solution is often the safest bet.
- Sanitize Your Inputs. The most reliable way to protect confidentiality is to avoid entering confidential information at all. Before submitting anything, anonymize it: strip names, contact details, and other personally identifiable information, and replace them with generic placeholders.
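Part of that sanitization can be automated. The Python sketch below is a minimal illustration, not a complete PII solution: the `KNOWN_NAMES` list and the regex patterns are assumptions for the example, and a real workflow would draw names from your matter-management system and still include human review of everything before it leaves the firm.

```python
import re

# Hypothetical client-specific terms to redact; in practice this list
# would come from your matter-management system.
KNOWN_NAMES = ["Jane Doe", "Acme Widgets LLC"]

# Simple patterns for common PII shapes (illustrative, not exhaustive).
PATTERNS = {
    "[EMAIL]": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "[PHONE]": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
    "[SSN]":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def sanitize(text: str) -> str:
    """Replace known names and common PII patterns with placeholders."""
    for name in KNOWN_NAMES:
        text = re.sub(re.escape(name), "[CLIENT]", text, flags=re.IGNORECASE)
    for placeholder, pattern in PATTERNS.items():
        text = pattern.sub(placeholder, text)
    return text

raw = "Jane Doe (jane@example.com, 555-123-4567) disputes the invoice."
print(sanitize(raw))
# → [CLIENT] ([EMAIL], [PHONE]) disputes the invoice.
```

A pre-processing pass like this reduces, but does not eliminate, the risk of leaking identifying details: context clues (dates, case numbers, unusual fact patterns) can still identify a client, so the attorney remains the final filter.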
- Develop a Firm-Wide AI Policy. Establish clear written guidelines for all attorneys and staff. The policy should specify which AI tools and account levels are approved and mandate the use of privacy settings.
- Stay Informed on Evolving Rules. Bar associations are actively providing guidance on AI. Pay attention to ethics opinions from your jurisdiction, as the technology and the rules governing it are evolving rapidly.
🗣️ The Disclosure Dilemma: Informing Your Client
Currently, there is no explicit ethical rule requiring you to disclose your use of AI as a drafting or research tool to a client, just as you don’t disclose your use of Westlaw.
- Billing, Transparency, and Value: The issue becomes more complex with billing. If AI helps you complete a task in 30 minutes that would previously have taken five hours, you cannot ethically bill the client for five hours; your bill must reflect the actual time and effort spent. For client relations, transparency can be a benefit: framing your AI use as a way to provide more cost-effective service shows you are leveraging modern tools for the client’s benefit. You can also set flat rates for various tasks, as long as they are reasonable.
📜 A New Duty?: Disclosing AI Use to the Court
This is a critical and rapidly developing area. While not yet universal, some courts are now requiring attorneys to disclose whether they used generative AI in the preparation of court filings.
- Why Courts Are Requiring Disclosure: Judges are implementing these rules primarily to combat the submission of briefs containing “hallucinated,” non-existent case law. By requiring a certification, courts are forcing attorneys to explicitly take responsibility for the accuracy and validity of everything in their filings, reinforcing the principle that the lawyer, not the AI, is accountable.
- Checking Local Rules and Standing Orders: The obligation to disclose is jurisdiction-specific. The U.S. Court of Appeals for the Fifth Circuit, along with several federal district courts, has issued standing orders or amended local rules on this topic. Before filing any document, you must check the local rules for the specific court and any individual standing orders from the judge presiding over your case. Failure to comply could lead to sanctions. Assume that disclosure will become a more common requirement over time, and keep in mind that tools claiming to detect AI-written text exist, though their reliability is contested.