What Is Governed AI?

AI is entering law firms whether firms are ready or not. Attorneys are using ChatGPT on their phones. Paralegals are testing AI writing tools on their laptops. Practice management vendors are bolting AI features onto existing products. The question for firms is not whether AI will touch their work, but whether it will do so in a governed way or in a way that creates liability they cannot see.

One-sentence answer: Governed AI is an approach to legal artificial intelligence in which every AI action is subject to attorney oversight, audit trails, approval gates, and data isolation, ensuring that AI amplifies firm capability without creating ungoverned malpractice risk.

Why AI governance matters for law firms

Law firms are not ordinary businesses when it comes to AI adoption. Attorneys operate under ethical obligations that do not apply to other industries. They owe duties of competence, confidentiality, and loyalty to their clients. They are subject to disciplinary rules that govern communication, conflicts of interest, and the unauthorized practice of law. And they bear personal liability for the work product that goes out under their name, regardless of whether a human or a machine produced it.

This means that AI in a law firm is not just a productivity tool. It is a liability surface. An AI that generates a legal letter with a hallucinated case citation creates malpractice exposure for the signing attorney. An AI that accesses client data without proper isolation creates a confidentiality breach. An AI that sends a communication without attorney review potentially violates professional conduct rules. These are not hypothetical risks. Since 2023, multiple courts have sanctioned attorneys for filing AI-generated briefs containing fabricated citations, including Mata v. Avianca in the Southern District of New York.

Governed AI is the response to this reality. It is not about avoiding AI. It is about deploying AI within a framework that preserves the ethical and operational constraints that make legal practice different from other knowledge work.

The three pillars of governed AI

Governed AI in legal practice rests on three structural commitments that distinguish it from generic AI deployment. Each addresses a specific category of risk that ungoverned AI creates for law firms.

  • Approval gates: Consequential actions, including sending communications, modifying case records, generating client-facing documents, and taking steps that affect legal strategy, require attorney review and approval before execution. The AI prepares work. The attorney decides whether it goes forward.
  • Audit trails: Every AI-generated suggestion, draft, analysis, and action is logged with full provenance. The firm can reconstruct what the AI did, when it did it, what data it used, and who approved the result. This is not just good practice. It is a prerequisite for defending the firm's work product if challenged.
  • Data isolation: Client information is strictly segregated. No client's data is used to train models, improve suggestions for other clients, or become accessible across firm boundaries. Multi-tenant data isolation ensures that confidentiality obligations are met at the infrastructure level, not just the policy level.
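The first two pillars can be made concrete in code. The sketch below is a minimal, hypothetical illustration (the class and method names are invented for this example, not taken from any product): the AI prepares a draft action, every step is written to an audit log, and an approval gate prevents execution until an attorney signs off.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from enum import Enum

class Status(Enum):
    PENDING = "pending_attorney_review"
    APPROVED = "approved"
    REJECTED = "rejected"

@dataclass
class AuditEntry:
    timestamp: str   # UTC ISO-8601 time of the event
    actor: str       # who acted: the AI, an attorney, or the system
    action: str      # what happened, e.g. "draft_created", "approved"
    detail: str      # free-form context for later reconstruction

@dataclass
class DraftAction:
    """An AI-prepared action that cannot execute until an attorney approves it."""
    description: str
    payload: str
    status: Status = Status.PENDING
    audit_log: list = field(default_factory=list)

    def _log(self, actor: str, action: str, detail: str = "") -> None:
        self.audit_log.append(AuditEntry(
            datetime.now(timezone.utc).isoformat(), actor, action, detail))

    def propose(self, ai_name: str) -> None:
        # The AI records that it created the draft; nothing is sent yet.
        self._log(ai_name, "draft_created", self.description)

    def approve(self, attorney: str) -> None:
        # Only after this call does the approval gate open.
        self.status = Status.APPROVED
        self._log(attorney, "approved")

    def execute(self) -> bool:
        # Approval gate: a pending or rejected draft never executes.
        if self.status is not Status.APPROVED:
            raise PermissionError("Attorney approval required before execution")
        self._log("system", "executed")
        return True
```

The point of the pattern is structural, not procedural: the send path physically cannot run without an approval record, so oversight does not depend on anyone remembering to check.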

What ungoverned AI looks like in practice

Ungoverned AI in a law firm is not usually dramatic. It looks like an attorney pasting a client's contract into ChatGPT for analysis, not realizing the data may be retained by the provider. It looks like a paralegal using an AI writing tool to draft a demand letter that references a case that does not exist. It looks like a practice management system's new AI feature sending automated follow-up emails to clients without attorney review of the content.

The common thread is the absence of structural controls. The AI acts, and the firm hopes the output is correct, confidential, and appropriate. When it is not, the liability falls on the attorney whose name is on the work, and the firm discovers the problem only when a client complains, a judge flags an issue, or a bar grievance arrives.

The ABA's Formal Opinion 512 (2024) makes clear that attorneys must ensure competent use of AI technology, including understanding its limitations, supervising its outputs, and maintaining client confidentiality. This is not optional guidance. It interprets binding professional conduct rules, and firms that adopt AI without governance structures risk violating the obligations it describes.

How DONNA implements governed AI

DONNA, Intakit's intelligence layer, is built from the ground up around governed AI principles. This is not a feature that was added to an existing product. It is a design philosophy that shapes how every AI capability in the platform operates.

When DONNA drafts a communication, it presents the draft for attorney review rather than sending it directly. When DONNA surfaces a recommendation, the recommendation includes the reasoning and data behind it so the attorney can evaluate it. When DONNA accesses matter data, it does so within strict tenant isolation boundaries that prevent any cross-client data leakage. And every action DONNA takes is logged in an audit trail that the firm can review, export, and rely on if its work product is ever challenged.
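Tenant isolation of the kind described above can be enforced at the data-access layer itself. The sketch below is a generic illustration of that pattern with hypothetical names; it is not DONNA's actual implementation or API. Every read is scoped to a single tenant identifier, so a cross-client lookup fails structurally rather than relying on policy.

```python
class TenantScopedStore:
    """Matter data keyed by tenant; every read is scoped to one tenant_id."""

    def __init__(self):
        # {tenant_id: {matter_id: record}} -- each firm's data lives
        # under its own key and is never merged or shared.
        self._data = {}

    def put(self, tenant_id: str, matter_id: str, record: dict) -> None:
        self._data.setdefault(tenant_id, {})[matter_id] = record

    def get(self, tenant_id: str, matter_id: str) -> dict:
        # Isolation boundary: lookups never cross tenant_id, so one
        # client's matters can never appear in another client's context.
        tenant_data = self._data.get(tenant_id, {})
        if matter_id not in tenant_data:
            raise KeyError(
                f"matter {matter_id!r} not found for tenant {tenant_id!r}")
        return tenant_data[matter_id]
```

Because the tenant identifier is a required argument on every access path, "infrastructure-level" confidentiality means a leak would require a code change, not merely a policy lapse.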

The result is that attorneys get the leverage of AI, including faster drafting, proactive risk surfacing, matter context synthesis, and operational visibility, without giving up the control and oversight that their professional obligations require. DONNA amplifies attorney capability without creating ungoverned risk. With Intakit — always prepared, always ready.

Frequently Asked Questions

Why is ungoverned AI dangerous for law firms?
Law firms operate under ethical obligations that make ungoverned AI uniquely risky. Attorneys owe duties of competence, confidentiality, and loyalty. An AI that hallucinates citations, leaks client data, or takes actions without review can create malpractice liability, disciplinary exposure, and client harm. Since 2023, courts have sanctioned attorneys for filing AI-generated work without proper oversight.
What are the ethical obligations for attorneys using AI?
The ABA's Formal Opinion 512 requires attorneys to use AI competently, which includes understanding its limitations, supervising its outputs, and ensuring client confidentiality. Multiple state bar associations have issued similar guidance. Attorneys are personally responsible for work product that goes out under their name, regardless of whether a human or an AI produced it.
How is governed AI different from not using AI at all?
Governed AI is not about avoiding AI. It is about deploying AI within a framework that matches the legal profession's requirements: approval gates for consequential actions, audit trails for all AI-generated work, and data isolation for client confidentiality. Firms that avoid AI entirely miss significant operational benefits. Firms that adopt AI without governance create invisible risk. Governed AI is the middle path that delivers leverage with safety.
Can a generic AI tool like ChatGPT be used safely at a law firm?
Generic AI tools lack the structural controls that legal practice requires. They typically do not provide approval gates for actions, audit trails for generated content, or guaranteed data isolation between clients. They may retain or train on data submitted by users. While individual attorneys may use generic tools for low-risk research, firm-wide deployment of AI for client work requires governance infrastructure that general-purpose tools do not provide.
What should a firm look for in a governed AI product?
Three non-negotiable elements: approval gates that prevent AI from taking consequential actions without attorney review, audit trails that log every AI-generated suggestion with full provenance, and multi-tenant data isolation that prevents any cross-client information leakage at the infrastructure level. Beyond these, look for explainability in AI outputs, attorney-configurable boundaries for what AI can and cannot do, and a vendor that can clearly articulate their governance architecture.

Related Pages