What "AI in the courtroom" looks like in practice
The headline AI-in-law cases get the attention: the lawyer who cited fake cases generated by an LLM, the bar association rulings on AI use, the high-profile firms announcing AI partnerships. The everyday reality of AI in legal practice in late 2025 is more mundane, more pervasive, and more interesting than the headline cases suggest. It's worth being concrete about what's actually working in practice, because the gap between the headline narrative and the practitioner reality is wide.
I'm not a lawyer. The picture below comes from conversations with practitioners building AI workflows in legal organizations through 2025: solo practitioners, mid-sized firms, in-house counsel teams. The pattern is consistent enough across those conversations to be worth synthesizing.
What's actually working
Five categories of AI use in legal practice that have moved from experimental to routine in 2025:
Discovery review. Document review at scale. AI-assisted relevance categorization, privilege detection, key-term identification. The pattern was emerging in 2023; in 2025 it's standard. The mature workflows have humans reviewing AI's first pass rather than reviewing documents directly. Throughput on document review has improved meaningfully without sacrificing quality.
Transcript synthesis. Depositions, hearings, recorded interviews. AI generates the structured summary, identifies key passages, links them to legal issues. The lawyer reads the synthesis and the marked passages rather than the full transcript. Time savings are real; the synthesis quality has improved enough to be relied on.
Brief assistance. Drafting first-pass briefs, organizing arguments, checking citations against the actual cited material. The output is a draft the lawyer revises, not a final product. The "cite-checking" function specifically (verifying that the cases cited say what the brief claims they say) has been a quiet productivity gain.
Contract review. Standard-issue contracts with known patterns. AI flags deviations from a baseline, identifies missing provisions, suggests language. Lawyers focus their attention on the non-standard cases that need real review. The pattern works best with contract types the firm sees repeatedly.
Client-intake triage. First-pass categorization of new matters, fee estimation, conflict checking. AI handles the routine cases through to assignment; humans handle the ambiguous ones. Particularly valuable for firms with high-volume practices.
These are the durable use cases. Most are extensions of pattern-matching work that humans were doing slowly; AI does them faster with appropriate oversight.
What isn't working
A few cases where the AI-in-law pitch is more persistent than the practitioner outcomes justify:
Court appearances. No serious practitioner is having AI argue in court. The cases that suggest otherwise are outliers, generally with poor outcomes. The advocacy function is firmly human.
Strategic legal judgment. Deciding what cases to take, what theory to pursue, what settlement to accept. The judgment-heavy work where the value of being right is highest is where AI helps least.
Novel-question research. When the legal question has no clear answer, the AI's tendency to produce confident-sounding answers is dangerous. The cases where AI hallucinated citations are typically novel-question research where the model invented authority that didn't exist.
Client relationships. The trust-and-relationship work that's much of legal practice is human work. AI can prepare materials and handle routine communications; the relational core stays human.
Anything where the cost of being wrong is high and the verification of correctness is hard. This includes a lot of expert witness work, complex regulatory questions, and high-stakes negotiations.
These are real limits. The mature legal-AI conversations acknowledge them; the marketing layer often doesn't.
What the discipline that works looks like
The legal practitioners who've successfully integrated AI share a few specific practices:
Verification cadence. Every AI output that informs an action gets verified against primary sources before the action happens. The "trust the AI summary" pattern is the one that produces the headline embarrassments.
Workflow scoping. AI is used for specific bounded workflows, not as a general assistant. The lawyer using AI for discovery review has a defined process; the lawyer using AI for "whatever I'm working on" has the failure modes.
Privacy-bound architecture. Client-confidential information stays within tightly controlled AI surfaces. The hosted-vendor pattern that's common in other industries is harder to make work in legal contexts; the local-LLM patterns I've been writing about are increasingly the right architecture.
Audit and logging. Every AI interaction with client matters is logged. The discipline is the same as the engineering discipline I wrote about for tool calls, applied to legal work. The audit story matters when bar associations or courts ask.
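What that logging discipline can look like in code, sketched minimally: an append-only JSONL audit log of AI interactions, keyed by matter. Everything here (the file name, the entry fields, the `log_ai_interaction` helper) is a hypothetical illustration, not a reference implementation; note that it stores hashes of the prompt and output rather than the text itself, so the audit log doesn't become a second copy of confidential material.

```python
import hashlib
import json
from datetime import datetime, timezone
from pathlib import Path

AUDIT_LOG = Path("ai_audit.jsonl")  # hypothetical append-only log file

def log_ai_interaction(matter_id: str, tool: str, prompt: str, output: str) -> dict:
    """Append one AI interaction to a JSONL audit log.

    Prompt and output are hashed, not stored, so the log records that an
    interaction happened (and what was said, verifiably) without holding
    client-confidential text.
    """
    entry = {
        "ts": datetime.now(timezone.utc).isoformat(),
        "matter_id": matter_id,
        "tool": tool,
        "prompt_sha256": hashlib.sha256(prompt.encode()).hexdigest(),
        "output_sha256": hashlib.sha256(output.encode()).hexdigest(),
    }
    with AUDIT_LOG.open("a") as f:
        f.write(json.dumps(entry) + "\n")
    return entry
```

The hash-only design is one choice among several; firms that need replayable records would store encrypted text instead, at the cost of the log itself becoming sensitive.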
Human-in-the-loop on consequential decisions. Anything affecting client outcomes (filing decisions, advice given, communication sent) has explicit human approval. Same human-in-the-loop pattern as production agentic systems.
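The approval gate can be as simple as a queue that holds AI-drafted actions until a named reviewer signs off. A minimal sketch, with all names (`PendingAction`, `ApprovalQueue`) hypothetical:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class PendingAction:
    """A consequential action drafted by AI, blocked until a human approves."""
    description: str
    execute: Callable[[], None]  # the side effect: file, send, advise
    approved: bool = False

class ApprovalQueue:
    """Holds AI-drafted actions (filings, client emails) for explicit sign-off.

    Nothing in the queue runs until approve_and_run is called with a
    reviewer's name, which also gives the audit trail its accountability.
    """
    def __init__(self) -> None:
        self._pending: list[PendingAction] = []

    def submit(self, action: PendingAction) -> None:
        self._pending.append(action)

    def approve_and_run(self, index: int, reviewer: str) -> str:
        action = self._pending.pop(index)
        action.approved = True
        action.execute()
        return f"{action.description} approved by {reviewer}"
```

The point of the structure is that the AI can only ever *submit*; the execute path is reachable exclusively through a human call.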
Explicit disclosure. Where required by jurisdiction or client agreement, AI use is disclosed. The "we used AI" disclosure is becoming routine in some contexts and rare in others; the practitioners doing it well are calibrated to their specific obligations.
These aren't exotic. They're the normal professional-responsibility patterns applied to a new tool class.
The governance framework that's emerging
Bar associations and courts are settling on a framework that looks like this:
AI use is generally allowed. The reflexive prohibition that some early opinions suggested has mostly given way to "use it carefully."
Lawyer accountability is unchanged. Whatever the AI did, the lawyer is responsible. The "the AI made me cite a fake case" defense doesn't work.
Some disclosure obligations exist. Specifics vary by jurisdiction. The trend is toward more transparency rather than less.
Confidentiality obligations carry forward. Client information sent to a hosted AI service implicates duty of confidentiality. The architectures that minimize this exposure (local LLMs, on-prem inference, redaction-first patterns) are increasingly the right defaults for client matters.
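The redaction-first pattern mentioned above can be sketched as a preprocessing step: sensitive spans are swapped for placeholders before text leaves the firm, with a local-only mapping kept to restore them in whatever comes back. The regex patterns here are illustrative stand-ins; a real deployment needs a vetted detection layer, not three regexes.

```python
import re

# Hypothetical patterns for illustration only; real client-data detection
# is a much harder problem than a handful of regexes.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "MATTER": re.compile(r"\bM-\d{4}-\d{3}\b"),
}

def redact(text: str) -> tuple[str, dict[str, str]]:
    """Replace sensitive spans with placeholders before text leaves the firm.

    Returns the redacted text plus a mapping (kept local, never sent)
    so original values can be restored in the AI output that comes back.
    """
    mapping: dict[str, str] = {}
    for label, pattern in PATTERNS.items():
        for i, match in enumerate(pattern.findall(text)):
            placeholder = f"[{label}_{i}]"
            mapping[placeholder] = match
            text = text.replace(match, placeholder, 1)
    return text, mapping
```

Redaction-first reduces exposure but doesn't eliminate it, which is why it pairs with, rather than replaces, the local-inference architectures.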
Competence obligations now include AI. Lawyers are expected to understand the tools they use well enough to use them responsibly. Bar CLE programs are catching up; some are not.
The framework is still settling, but the direction is clear. The lawyers operating ahead of the formal framework, rather than waiting for it to catch up, are the ones less likely to be caught out when the rules formalize.
The connection to the broader patterns
The legal-AI conversation maps onto patterns I've been writing about elsewhere:
- The governance framework that doesn't make engineers quit is the same shape as the AI-use guidelines that work in legal practices.
- The knowledge-as-asset framing applies directly: the firm's accumulated case knowledge is an asset, and how it interacts with AI tools is part of the long-run value question.
- The agent design patterns (planner-executor, human-in-the-loop, bounded autonomy) are the patterns that work for legal AI as much as for engineering AI.
Same patterns, different domain. The practitioners who recognize this reuse the work; the ones who don't reinvent it.
What I'd recommend
For legal practitioners or in-house counsel teams getting more serious about AI integration in late 2025:
- Start with discovery review or transcript synthesis. Highest ROI, most-mature tooling, lowest risk of the headline embarrassments.
- Build the verification habit. Every AI output verified against primary sources. The discipline is what separates the success cases from the cautionary tales.
- Address confidentiality architecturally. Don't rely on vendor TOS to protect client information; design systems where the information doesn't leave the firm's perimeter for the cases that matter.
- Log everything. The audit and accountability story depends on this; build it before you need it.
- Watch the bar developments. The framework is forming; align early to where it's clearly heading.
AI in legal practice in late 2025 is the routine reality. The headline cases get the attention; the actual work is happening in discovery rooms, transcript reviews, and contract markups. The practitioners doing it well are quiet about it; the headline cases are the exceptions.
Worth knowing the difference.